Is it possible to create segmentation from given coordinate information (x,y,z)?

Is it possible to segment the x, y, z points that I entered manually? I found that polygons or cubes can be drawn in Python using VTK, but what I want is to segment the coordinate points I provide by painting them. For example, when I enter the point (170, 100, 150), that point would be automatically segmented.

Do you mean you want to set each voxel in the segmentation with a separate method call? That would mean hundreds of thousands, maybe millions of calls for a single segment, which could take tens of seconds or more.

Python is generally not meant to be used for voxel-by-voxel processing. Can you work with higher-level objects than voxels?

What is your overall goal, clinical problem to solve?

Thanks for the answer.
Actually, that is what I want to do: I want to turn the coordinates I entered into a segmentation mask on the image.

You can convert your list of coordinates to a numpy array (see for example here) and then import that numpy array into a segmentation.
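A minimal sketch of that first step, assuming your coordinates are already voxel (IJK) indices of the reference image; the coordinate list and image dimensions below are placeholders for illustration:

```python
import numpy as np

# Example coordinate list as (i, j, k) voxel indices (placeholder values)
points_ijk = [(170, 100, 150), (171, 100, 150), (170, 101, 150)]

# Allocate an empty labelmap on the same voxel grid as the reference image.
# Arrays obtained from Slicer volumes are ordered [k, j, i], so the shape is
# given in that order here (placeholder dimensions).
image_shape_kji = (200, 256, 256)
mask = np.zeros(image_shape_kji, dtype=np.uint8)

# Mark each listed voxel as belonging to the segment
for i, j, k in points_ijk:
    mask[k, j, i] = 1
```

Such an array can then be imported into a segmentation node following the example linked above.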

However, this whole idea of representing binary labelmaps as a list of coordinate values is much more complicated than needed and extremely inefficient. What software generates point list positions instead of a labelmap image? Does that software have an option to save a binary image instead? If not, suggest that the developer implement saving the result into a numpy array and writing that array to a standard 3D image file using pynrrd. You can load the nrrd file into Slicer directly as a segmentation.
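A minimal sketch of writing such an array with pynrrd (pip install pynrrd); the spacing, origin, and file name below are placeholders and should be taken from the original image geometry:

```python
import nrrd
import numpy as np

# Placeholder binary labelmap, indexed [i, j, k]
mask = np.zeros((256, 256, 200), dtype=np.uint8)
mask[170, 100, 150] = 1  # mark one voxel as part of the segment

header = {
    "space": "left-posterior-superior",  # LPS, the anatomical space Slicer uses
    "space directions": np.eye(3),       # placeholder: voxel axes/spacing of the original image
    "space origin": [0.0, 0.0, 0.0],     # placeholder: image origin
}

# Write a standard NRRD file that Slicer can load directly as a segmentation
nrrd.write("segmentation.nrrd", mask, header)
```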