I am wondering if there is functionality in 3D Slicer that lets me change the location of a segmentation. I am creating cube segmentations of different cartridges in a phantom, and I need the segmentations to all be the same size. Cloning a segmentation simply puts the copy in the exact same location as the parent. However, I want to move the cloned segmentation so that I can create identical segmentations at the other locations as well.
After cloning, you will have a separate node; you can put it under a transform and use the translation sliders to move it wherever you want.
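If this needs to be repeated for many cartridges, the same move can be scripted. Below is a minimal sketch in plain Python (no Slicer required) of what the translation sliders do under the hood: a linear transform node stores a 4x4 homogeneous matrix, and translating just changes its last column. The helper names here are my own for illustration, not part of the Slicer API.

```python
# Plain-Python sketch of what the translation sliders do: a transform node
# stores a 4x4 homogeneous matrix, and moving a slider changes the last
# column (the translation part). Helper names are hypothetical.

def make_translation(tx, ty, tz):
    """Build a 4x4 homogeneous matrix that translates by (tx, ty, tz)."""
    return [
        [1.0, 0.0, 0.0, tx],
        [0.0, 1.0, 0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply_transform(matrix, point):
    """Apply a 4x4 matrix to a 3D point (RAS coordinates in Slicer)."""
    x, y, z = point
    h = (x, y, z, 1.0)
    return tuple(sum(matrix[r][c] * h[c] for c in range(4)) for r in range(3))

# Move a corner of a cloned cube segmentation 50 mm along the first axis:
corner = (10.0, 20.0, 30.0)
moved = apply_transform(make_translation(50.0, 0.0, 0.0), corner)
print(moved)  # (60.0, 20.0, 30.0)
```

In Slicer itself, the same effect is achieved interactively by setting this matrix in the Transforms module and placing the cloned segmentation node under the transform node.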
If your cartridges are simple shapes (cube, sphere, cylinder, etc.), you can create models of the exact dimensions for each of them (CreateModels module in the SlicerIGT extension), place them wherever you want, and then import them as segmentations.
You can do all of this in the Data module by right-clicking in the transforms column:
The transform is rotated by left-click-and-drag and translated by Shift+left-click-and-drag.
“Harden transform” at the end is for permanently applying the transformation to the selected node, which is probably not needed for the use case described above.
Apart from rotate, translate, and scale, is there a way to enable “interaction handles” or move it in the slice views (other than by using the Transforms module)? The idea is to restrict the translations/rotations to one axis for more control and precision.
This is still the stock VTK box widget, which is very limited. @Sunderlandkyl is working on adding a markups ROI widget, and the next one will be the transform widget, probably in about 6 months. Until then, modules can use plane widgets to specify position and orientation interactively (by continuously copying the plane’s transform into a transform node).
Dear lassoan and other contributors to this discussion,
Thank you for the very helpful posts!
Can I ask for a clarification?
When I used the method above, I could manually translate and fit segmentation masks to my data.
However, when I save the segmentation as NIfTI and load it in different software (e.g., ITK-SNAP), the segmentation remains in its original position.
There is also an indication of differences in dimensions, etc.
Could you please describe the way to save our results?
I assumed that when I “harden” the transform, it is applied to the images. Then I created labels from the initial segmentation file and saved them. I did this because trying to save the corrected segmentation from the data list in Slicer did not give me the option to save as NIfTI.
I suppose I am doing something very basic wrong. Apologies :).
I don’t think ITK-SNAP can overlay a segmentation if it has a different geometry (origin, spacing, axis directions, or extents) than the main image.
You can either use more capable software (such as Slicer) that does not have such a limitation, or you can resample the segmentation to match the geometry of the main image. Slicer can do this resampling (see the instructions here), but since resampling is a lossy operation, do it sparingly.
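For intuition, here is a minimal sketch of what label resampling does conceptually, in plain Python and in 1D for brevity (the same per-axis index math applies in 3D). The function name and the axis-aligned assumption are mine, not Slicer's; the key point is that labels are resampled with nearest-neighbor lookup so that no new, averaged label values are invented.

```python
# Hedged sketch of nearest-neighbor label resampling (1D for brevity,
# assuming axis-aligned geometries): for each voxel of the reference grid,
# sample the label value at the nearest voxel of the segmentation grid.

def resample_labels(labels, src_spacing, src_origin, ref_shape, ref_spacing, ref_origin):
    """Resample a 1D label array onto a reference grid."""
    out = []
    for i in range(ref_shape):
        # physical position of reference voxel i
        pos = ref_origin + i * ref_spacing
        # nearest source voxel index
        j = round((pos - src_origin) / src_spacing)
        out.append(labels[j] if 0 <= j < len(labels) else 0)  # 0 = background
    return out

# Segmentation on a coarse grid (2 mm spacing), resampled to a 1 mm reference:
src = [0, 0, 1, 1, 0]  # labels at x = 0, 2, 4, 6, 8 mm
ref = resample_labels(src, 2.0, 0.0, 9, 1.0, 0.0)
print(ref)
```

This also illustrates why resampling is lossy: segment boundaries get snapped to the reference grid, so round-tripping between geometries does not reproduce the original mask exactly.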
NIfTI is an unnecessarily complicated yet limited file format. I would recommend keeping images and segmentations in NRRD format instead.
Thank you for your immediate and helpful reply!
I use Slicer and would prefer the suggested file format; however, my second step is to run ML feature extraction on the images. The pipelines and software I use require NIfTI files, and when I tried resampling, the images were not accepted. I will try the steps again; any other suggestions would be much appreciated.
It’s worth noting that the segmentations were created on T1/T2/FLAIR/T1gd as usual. I essentially need to apply the labels to acquired perfusion maps (part of the brain volume, one 3D sequence). I assumed that if I could appropriately coregister these into a common space, the segmentations would work. Unfortunately, they don’t, despite the image dimensions being the same as those of the other sequences after registration.
Thanks again for your help!
In the Segmentations module’s Export section, you need to select the image that you want to mask as the “Reference image”.
NIfTI is a really bad format for general-purpose medical image computing, but for neuroimaging its use is justifiable, as many neuroimaging tools support it.
This was very useful for a new measurement my lab wants to track. Is there a way to apply this across 1000 individual models? Essentially, I need my 3D model, not a segmentation, to be within the planes in Slicer.
Would there be a simple way to make the planes automatically center on the bounding box of my rib, or to move the rib to what looks like the origin? If not, that’s fine; this is much quicker than doing a crude transform to get it to align.
I am not sure what you are asking. If you are asking whether you can put models under a transform in the same way as segmentations, then yes, you can. Most objects can be placed under a transform.
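Regarding automatically centering the planes on the rib: the batch-friendly piece is computing, per model, the translation that moves its bounding-box center to the origin. Here is a minimal sketch in plain Python (helper names are hypothetical; in Slicer this translation vector would go into the matrix of a transform node that the model is placed under, and the loop would run over model nodes in the scene).

```python
# Sketch of the centering step: compute a model's axis-aligned bounding-box
# center from its points, and the translation that moves that center to the
# origin. Helper names are hypothetical, not Slicer API.

def bounding_box_center(points):
    """Center of the axis-aligned bounding box of a list of (x, y, z) points."""
    xs, ys, zs = zip(*points)
    return tuple((min(a) + max(a)) / 2.0 for a in (xs, ys, zs))

def centering_translation(points):
    """Translation vector that moves the bounding-box center to the origin."""
    cx, cy, cz = bounding_box_center(points)
    return (-cx, -cy, -cz)

# A rib-like model spanning x in [40, 80], y in [-10, 10], z in [100, 120]:
pts = [(40.0, -10.0, 100.0), (80.0, 10.0, 120.0), (60.0, 0.0, 110.0)]
offset = centering_translation(pts)
print(offset)
```

Applying the same two functions in a loop over 1000 models would give each one its own centering transform without any manual alignment.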