Automate tasks in Python

Is it actually possible to automate all the tasks that can be done through the GUI in Python? I have spent quite a few hours trying to do very simple things, but without success.

More specifically, I want to use the module “Model to Label Map”. So far, I could:

import slicer
# Load the input volume and the surface model
volume = slicer.util.loadVolume(path1)
model = slicer.util.loadModel(path2)
# Open the Model To Label Map module in the GUI
slicer.util.selectModule(slicer.modules.modeltolabelmap)

The next step would be to change the settings or parameters of this particular module, click “Apply”, and once it’s done, save the output somewhere. Any idea how I can do these steps?

Thanks.

Yes, everything that you do in the GUI can be done using Python scripting, too.
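As a general pattern, CLI modules such as Model To Label Map can be run from Python with `slicer.cli.runSync`, passing a dictionary of parameters instead of clicking “Apply”. A rough sketch, assuming you are inside a Slicer Python session; the parameter names (`InputVolume`, `surface`, `OutputVolume`) and the output path are illustrative assumptions — check the module’s XML description for the actual names:

```python
import slicer

# Load inputs (paths are placeholders)
inputVolume = slicer.util.loadVolume(path1)
inputModel = slicer.util.loadModel(path2)

# Create a node to receive the module's output
outputVolume = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLLabelMapVolumeNode')

# Parameter names are assumptions -- see the module's XML description
parameters = {
    'InputVolume': inputVolume.GetID(),
    'surface': inputModel.GetID(),
    'OutputVolume': outputVolume.GetID(),
}

# runSync blocks until the module finishes, like waiting after "Apply"
slicer.cli.runSync(slicer.modules.modeltolabelmap, None, parameters)

# Save the result (hypothetical path)
slicer.util.saveNode(outputVolume, '/tmp/output-labelmap.nrrd')
```

This only runs inside Slicer’s embedded Python, not in a standalone interpreter.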

Instead of “Model to Label Map”, we generally recommend using the Segmentations module to do all conversions between segmentations, models, and labelmap volumes. You can import a model into a segmentation (which can then be edited using Segment Editor effects and exported to a labelmap) with a single command, as shown here: https://gist.github.com/lassoan/2d5a5b73645f65a5eb6f8d5f97abf31b#file-segmentgrowcutsimple-py-L34
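A sketch of that workflow end to end (model → segmentation → labelmap → file), assuming a Slicer Python session and placeholder file paths:

```python
import slicer

# Assumed paths -- replace with your own files
referenceVolume = slicer.util.loadVolume('/path/to/image.nii.gz')
model = slicer.util.loadModel('/path/to/model.vtk')

# Import the model into a new segmentation node, using the
# volume's geometry as the rasterization reference
segmentationNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLSegmentationNode')
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(referenceVolume)
slicer.modules.segmentations.logic().ImportModelToSegmentationNode(model, segmentationNode)

# Export the segmentation to a labelmap volume and save it
labelmapNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLLabelMapVolumeNode')
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
    segmentationNode, labelmapNode, referenceVolume)
slicer.util.saveNode(labelmapNode, '/path/to/labelmap.nrrd')
```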

You can find lots of additional examples here: https://www.slicer.org/wiki/Documentation/Nightly/ScriptRepository#Segmentations

Hello Andras, thank you very much for your quick answer. Some comments/follow-up questions:

  1. What exactly is a “segmentation node”? Is it a voxelized version of the “model node” (the mesh)? Getting such a voxelized version is my ultimate goal.

  2. I don’t find it very clear how to use the function that converts a “model node” into a “segmentation node”. It appears to be a single line, but it requires a “polydata”, and it is unclear how that is generated. In that example, the script iterates over certain pre-defined positions (line 24), creates “SphereSource” objects, and appends them to the initially empty polydata. How would I find these “pre-defined positions”? I’m not sure what they are, or whether I have to do the same in my case. Is there another way to get a “segmentation node”? Or another way to use the Model To LabelMap functionality?

  3. I couldn’t find any function to do the “Model To LabelMap” conversion with the parameters/settings that I find in the GUI. More specifically, the GUI shows the setting “label value”.

  4. Related to 3), I wanted to ask about another parameter that does not appear under “settings” in the latest version of Slicer. This parameter is “sampling space”, which appears in version 4.3 and which I found crucial for my data. Without it I get empty volumes, whereas with it (and appropriate tuning) I get good results.

Thank you very much again for your answers.

See detailed explanation here: https://slicer.readthedocs.io/en/latest/user_guide/image_segmentation.html

Have a look at examples in the script repository, for example this: https://www.slicer.org/wiki/Documentation/Nightly/ScriptRepository#Rasterize_a_model_and_save_it_to_a_series_of_image_files

See the example above for how to specify the output image geometry (origin, spacing, axis directions). Other settings can be controlled by adjusting conversion parameters; see for example here: https://www.slicer.org/wiki/Documentation/Nightly/ScriptRepository#Re-convert_using_a_modified_conversion_parameter
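A hedged sketch of adjusting a conversion parameter and re-converting, assuming a `segmentationNode` already exists (e.g. from an imported model). The parameter name “Oversampling factor” is an assumption about the binary-labelmap conversion rule in recent Slicer versions — list the available parameters in your own version to confirm:

```python
import slicer

# Assumes segmentationNode already contains the imported model
segmentation = segmentationNode.GetSegmentation()

# A finer rasterization grid can help thin models that otherwise
# produce empty labelmaps (parameter name is an assumption)
segmentation.SetConversionParameter('Oversampling factor', '2')

# Remove the stale binary labelmap representation and re-create it
# so the new parameter takes effect
binaryName = slicer.vtkSegmentationConverter.GetSegmentationBinaryLabelmapRepresentationName()
segmentation.RemoveRepresentation(binaryName)
segmentationNode.CreateBinaryLabelmapRepresentation()
```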

Hi again Andras, I really appreciate your help.

I’m getting closer to the desired solution. The snippet below loads the image and the model, and imports the model into a segmentation node.

# Load the NIfTI file (the reference image)
referenceVolumeNode = slicer.util.loadVolume(niftiImage_path)
# Load the mesh
inputModel = slicer.util.loadModel(mesh_path)

# Create a segmentation node with the image geometry and import the model
seg = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLSegmentationNode')
seg.SetReferenceImageGeometryParameterFromVolumeNode(referenceVolumeNode)
slicer.modules.segmentations.logic().ImportModelToSegmentationNode(inputModel, seg)

However, in the slice view, the loaded segmentation and the image are not aligned. This is crucial for later saving the segmentation. It surprises me, because I would expect SetReferenceImageGeometryParameterFromVolumeNode to put the “seg” in the same orientation and coordinates as referenceVolumeNode (the image). As you can see below, the bottom-right corner of the image sits near the top-left corner of the segmentation.

[screenshot: the image and the segmentation shown misaligned in the slice view]

Is there a way to align the two automatically? I can see that I could “move” the image by changing the “image origin”, but that is done visually, which is not an option for me. If there is no other way, how can I change this “image origin” from Python?

Thanks.

This is probably an RAS/LPS coordinate system mismatch. See more information and options to resolve it here: https://www.slicer.org/wiki/Documentation/Nightly/Developers/Tutorials/MigrationGuide/Slicer#Slicer_5.0:_Models_are_saved_in_LPS_coordinate_system_by_default
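For background on the mismatch: RAS and LPS differ only in the sign of the first two axes, so a model saved in one convention appears mirrored across the left-right and anterior-posterior axes when interpreted in the other. A minimal sketch of the coordinate flip (pure NumPy, no Slicer required):

```python
import numpy as np

# LPS <-> RAS: flip the sign of the first two coordinates
# (Left = -Right, Posterior = -Anterior; Superior is shared)
lps_to_ras = np.diag([-1.0, -1.0, 1.0, 1.0])

point_lps = np.array([10.0, 20.0, 30.0, 1.0])  # homogeneous coordinates
point_ras = lps_to_ras @ point_lps
# point_ras is [-10., -20., 30., 1.]
```

Note that the flip matrix is its own inverse, so the same matrix converts in both directions.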

That was indeed the problem.

Once again, thank you very much Andras 🙂