Segmentation resolution created from Jupyter notebook is different from the one created in 3D Slicer

Hi all,

I have been using 3D Slicer for a while and have written some Python code for repetitive tasks.

When I create a segmentation in 3D Slicer, it has the same resolution as the imported images (in my case 0.004 x 0.004 x 0.004), so when I threshold the segmentation and grow it with the Margin effect, I can grow it by 0.004, which is 1 voxel, as shown here.

I noticed that when I do the same using Python, it says "Not feasible at current resolution", so I increased the margin to find the minimum I can grow by, and apparently it is 0.02, meaning the resolution is 0.02 x 0.02 x 0.02 per voxel, as shown here.
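For reference, this is roughly how I am growing the margin from Python (volumeNode here is my CT volume; I am also not sure whether it should be setSourceVolumeNode or setMasterVolumeNode in my Slicer version):

# Set up a Segment Editor widget so the Margin effect can be run from a script
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(volumeNode)  # setMasterVolumeNode() in older Slicer versions

# Try to grow by one voxel of the CT (0.004); this is where I get
# "Not feasible at current resolution" unless I raise it to 0.02
segmentEditorWidget.setActiveEffectByName("Margin")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MarginSizeMm", "0.004")
effect.self().onApply()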

Can someone help me get a 0.004 x 0.004 x 0.004 resolution using Python?

This is my code:

Create the segmentation node:

segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()  # only needed for display
segmentationNode.SetName('Segmentation')

Import the model into the segmentation node:

slicer.modules.segmentations.logic().ImportModelToSegmentationNode(cylinderNode, segmentationNode)
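One thing I was wondering: do I need to set the reference image geometry from my CT volume before the closed surface gets converted to a binary labelmap, so that the labelmap uses the 0.004 spacing? Something like this (volumeNode being my CT volume; I am not sure if this is the right call or the right place for it):

# Use the CT volume's geometry (0.004 spacing) as the segmentation's reference geometry
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(volumeNode)

# Force the binary labelmap representation to be (re)created at that geometry
segmentationNode.GetSegmentation().CreateRepresentation(
    slicer.vtkSegmentationConverter.GetSegmentationBinaryLabelmapRepresentationName(), True)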

Thanks in advance!

Please note that when I am using Jupyter, I am importing a model that I created (the cylinder) and then thresholding the CT images inside it, and I think this is where things are getting weird.
Could it be that the volume has a low resolution? There is no "spacing" for the model, though. So, how can I increase the resolution of that model, if that is the problem?
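For what it's worth, this is how I was planning to check the spacing of the volume I threshold and, if it really is coarse, resample it. The Resample Scalar Volume parameter names below are my best guess from the module, so they may be off:

# Check the spacing of the volume I am thresholding
print(volumeNode.GetSpacing())

# If it really is 0.02, resample it to 0.004 with the Resample Scalar Volume module
outputVolumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode", "ResampledVolume")
parameters = {
    "InputVolume": volumeNode,
    "OutputVolume": outputVolumeNode,
    "outputPixelSpacing": "0.004,0.004,0.004",
    "interpolationType": "linear",
}
slicer.cli.runSync(slicer.modules.resamplescalarvolume, None, parameters)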