Segment skeletal thorax (ribs, thoracic spine, and sternum)


I am trying to use Slicer to isolate the skeletal thorax from DICOM datasets. I am working with the Threshold, Islands, and Scissors tools, but I am not able to isolate the bones cleanly: soft tissue, organs, and the scan bed show up in the segmented image. In addition, the same threshold range captures different structures in different DICOM datasets. I have two questions:

  1. Is there a way to cleanly segment the bones?
  2. I have over 600 DICOM datasets from which I need to isolate the skeletal thorax (bones). How can I do this with a Python script?


Hi Vik,

My personal preference for segmenting thoracic skeletal bones is the new “Local Threshold” function of the Segment Editor, but this requires a few mouse clicks for each dataset. Another option would be “Grow from Seeds”, as described in this thread: Bone segmentation to create 3D-printable STL

Once you have defined a good workflow, you can try to integrate it into a Python script that loads the DICOM dataset, lets you do the manually assisted segmentation, automatically saves the results, and switches to the next patient.
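A minimal sketch of such a batch loop, assuming one folder per patient; the `load_dicom_folder`, `segment_interactively`, and `save_result` helpers here are hypothetical stand-ins for the Slicer calls (in Slicer they would wrap `DICOMUtils` loading, the Segment Editor step, and `slicer.util.saveNode`):

```python
import os

# Hypothetical helpers: in 3D Slicer these would wrap DICOMUtils and
# slicer.util.saveNode; here they are stubs to show the loop structure.
def load_dicom_folder(path):
    print(f"Loading DICOM series from {path}")
    return {"path": path}  # placeholder for a loaded volume node

def segment_interactively(volume):
    # In Slicer this is where the manually assisted segmentation happens
    # (Local Threshold / Grow from Seeds); the script would pause here.
    return {"segmentation_for": volume["path"]}

def save_result(segmentation, out_dir):
    out_path = os.path.join(out_dir, "segmentation.seg.nrrd")
    print(f"Saving {out_path}")
    return out_path

def process_all_patients(root_dir, out_root):
    """Loop over one-folder-per-patient DICOM data, collect output paths."""
    results = []
    for patient in sorted(os.listdir(root_dir)):
        patient_dir = os.path.join(root_dir, patient)
        if not os.path.isdir(patient_dir):
            continue
        volume = load_dicom_folder(patient_dir)
        segmentation = segment_interactively(volume)
        results.append(save_result(segmentation, os.path.join(out_root, patient)))
    return results
```

This is just the skeleton; the per-patient body would be replaced by your actual Slicer workflow once you have settled on one.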


PS: for detailed and almost instant overviews of the thoracic bone structures, you can use “Volume Rendering” and the “CT-AAA” preset.

Thanks Rudolf. I have the latest version, 4.11, but I do not see any “Local Threshold” function. Is it in v5.0, and is there a tutorial on it?

Thanks Rudolf. If I use the “CT-AAA” preset, can I save the displayed parts directly? Also, in the “CT-AAA” preset the scan bed shows up along with some other noise, so it is still not a clean bone segmentation. Any suggestion is highly appreciated. Thanks again.

You will find it after installing the “Segment Editor Extra Effects” extension from the extensions manager.

More information on the effect see here: New Segment Editor effect: Local threshold - #7 by sfglio

It is important to remember that, after selecting a “bone” threshold, you need to left-click into one of the slices (onto the bone) to add it to the new segmentation.

CT-AAA: adjust the “Shift” slider a bit to the right or left to remove or include unwanted areas.


Please remember that you are working with volume rendering here: you will not get any kind of 3D-printable segment or segment statistics, it is just a quick visual 3D representation of the volume.

Thanks Rudolf. I was looking at the “CT-AAA” preset, and moving the Shift parameter does help a lot, but these lines from the scan bed show up. Not sure how to eliminate those. Also, can I save only the displayed parts from this “CT-AAA” preset to a new file (DICOM, NIfTI, etc.)?

Sure, enable Crop and check ROI as shown here:


Then you can crop away the table artefact by moving the ROI box markers around.

Thanks. Is there a way to save this rendered volume only?

As no “real” 3D segmentation is generated in volume rendering, you would just save (“File” → “Save”) the scene into a directory of your choice.

Calling the “*.mrml” file later will reopen your volume rendering exactly as you have left it.

Also have a look at saving from 3D Slicer into a “*.mrb” bundle - this will save the complete data set into one single file.


Hello Rudolf,

I saved the file as a *.mrb file. Is there a way to extract the 3D array of the CT-AAA preset? I want to use the array for machine learning. Thanks

That sounds very interesting.

A quick Google search revealed this thread:

which may get you started …
What is the goal of your trial in the end, what do you need to see?

Best regards

The CT-AAA preset is pretty close to a straight threshold. You can see the transfer function in the Volume Rendering module, under Advanced…

The effective threshold is where Point 2 is in the transfer function. Note that this slides back and forth with the “Shift” slider, so you want to identify the value of Point 2 when you like how the volume rendering looks.

If you want a volume output, you can generate a thresholded volume easily in a couple different ways (see Documentation/Nightly/ScriptRepository - Slicer Wiki for an example). However, if this is good enough training data for your ML project, then essentially all you are doing is teaching it to apply a binary threshold, which you hardly need machine learning for…
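For illustration, such a binary threshold is a one-liner in NumPy; the toy volume and the 300 HU cutoff below are made up, and on real data the array would come from `slicer.util.arrayFromVolume()` as shown later in this thread:

```python
import numpy as np

# Toy CT-like volume in Hounsfield units; on real data the array would
# come from slicer.util.arrayFromVolume() on the loaded image node.
ct = np.array([[[-1000, 40], [300, 700]],
               [[-120, 60], [450, 1200]]], dtype=np.int16)

bone_threshold = 300  # HU; cortical bone is typically well above this
bone_mask = ct >= bone_threshold  # boolean array, True where "bone"

print(bone_mask.sum())  # -> 4 voxels at or above the threshold
```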

If you need to do something more complex, then you basically need to figure out a workflow that works well on some of your example images, and if you are lucky and it requires no manual decisions on a per-image basis, then you can fully automate it and try it on the rest of your 600 images. If it still requires user judgement on every image, you can still create a facilitated workflow which will speed your processing of the remaining images, but that’s the best you can do.

I am developing computational Human models and need to come up with an average thorax geometry for males and females for injury analysis.

Also, the link you sent does not explain how to store the CT-AAA preset as a 3D array that can be opened with NumPy rather than Slicer. Is that possible? Thanks

import numpy as np
imageVolumeNode = getNode('ThoraxImage')  # replace with the name of your volume
thresholdValue = 350  # whatever threshold value you want
imageNumpyArray = slicer.util.arrayFromVolume(imageVolumeNode)
threshIm = imageNumpyArray > thresholdValue
with open('numpyOutput.npy', 'wb') as file_handle:, threshIm)

This will threshold your entire volume at 350 HU and save the output logical NumPy array to “numpyOutput.npy” (which can be loaded into NumPy using numpy.load()). Note that this will not crop out the table, as you might have in the volume rendering. To do that, you can use the CropVolume module with the same ROI as you used in the Volume Rendering module. Then, in the code snippet above, use the name of the cropped volume node rather than the original volume (or you can just have the cropped volume replace the original volume by selecting it as the output volume in “CropVolume” instead of “Create new volume”).
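For completeness, the crop can also be done with plain array slicing once you know the ROI bounds in voxel indices (the bounds and the toy volume below are made up for illustration), and the saved .npy file round-trips cleanly through numpy.load():

```python
import os
import tempfile
import numpy as np

# Toy volume standing in for slicer.util.arrayFromVolume(volumeNode)
rng = np.random.default_rng(0)
volume = rng.integers(-1000, 1500, size=(10, 10, 10)).astype(np.int16)

# Crop with plain slicing instead of the CropVolume module;
# these voxel-index bounds are illustrative, not from a real ROI.
cropped = volume[2:8, 1:9, 1:9]

mask = cropped > 350  # same binary threshold as in the snippet above

out_path = os.path.join(tempfile.mkdtemp(), 'thorax_mask.npy')
np.save(out_path, mask)

reloaded = np.load(out_path)
print(reloaded.shape, reloaded.dtype)  # -> (6, 8, 8) bool
assert np.array_equal(reloaded, mask)
```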

Thanks Mike. This is really helpful.
