I am trying to use 3D Slicer to isolate the skeletal thorax from DICOM series. I am working with the Threshold, Islands, and Scissors tools, but I am not able to isolate the bones cleanly: soft tissue, organs, and the scan bed show up in the segmented image. In addition, the same threshold range captures different structures in different DICOM datasets. I have two questions:
Is there a way to cleanly segment the bones?
I have over 600 DICOM datasets from which I need to isolate the skeletal thorax (bones). How can I do this with a Python script?
My personal preference for segmenting thoracic skeletal bones is the new “Local Threshold” function of the Segment Editor, but this requires a few mouse clicks for each dataset. Another option would be “Grow from Seeds”, as described in this thread: Bone segmentation to create 3D-printable STL
Once you have defined a good workflow, you can try to integrate it into a Python script that loads the DICOM dataset, lets you do the manually assisted segmentation, automatically saves the results, and switches to the next patient.
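As a rough sketch of such a batch driver (the one-folder-per-patient layout, the `segment_patient` helper, and the output naming are all assumptions here, not Slicer API; inside Slicer, `segment_patient` would load the series and run your semi-automatic workflow):

```python
from pathlib import Path

def segment_patient(dicom_dir):
    # Placeholder for the manually assisted segmentation step.
    # In Slicer this would load the DICOM series, run the
    # threshold / Grow-from-Seeds workflow, save the result,
    # and return the path of the saved segmentation.
    return dicom_dir.name + "_bones.npy"

def process_all(root):
    # Visit one subdirectory per patient, in a stable order,
    # and collect the output file names.
    results = []
    for patient_dir in sorted(Path(root).iterdir()):
        if patient_dir.is_dir():
            results.append(segment_patient(patient_dir))
    return results
```

The loop itself is plain Python, so the same skeleton works whether the per-patient step is fully automatic or pauses for manual edits.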
Thanks Rudolf. If I use the “CT-AAA” preset, can I save the displayed parts directly? Also, with the “CT-AAA” preset the scan bed shows up along with some other noise, so it is still not a clean bone segmentation. Any suggestion is highly appreciated. Thanks again.
It is important to remember that, after selecting a “bone” threshold, you need to left-click into one of the slices (onto the bone) to add it to the new segmentation.
CT-AAA: Adjust the “shift” slider a bit right or left to remove or include unwanted areas
Please remember that you are working with volume rendering here: you will not get any kind of 3D-printable segment or segment statistics; it is just a quick visual 3D representation of the volume.
Thanks Rudolf. I was looking at the “CT-AAA” preset, and moving the shift parameter does help a lot, but these lines from the scan bed still show up. Not sure how to eliminate those. Also, can I save only the displayed parts from the “CT-AAA” preset to a new file (DICOM, NIfTI, etc.)?
As no “real” 3D segmentation is generated in volume rendering, you would just save (“File” → “Save”) the scene into a directory of your choice.
The effective threshold is where Point 2 is in the transfer function. Note that this slides back and forth with the “Shift” slider, so you want to identify the value of Point 2 when you like how the volume rendering looks.
If you want a volume output, you can generate a thresholded volume easily in a couple of different ways (see Documentation/Nightly/ScriptRepository - Slicer Wiki for an example). However, if this is good enough training data for your ML project, then essentially all you are doing is teaching it to apply a binary threshold, which you hardly need machine learning for…
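One common cleanup step after thresholding (roughly what the Islands effect does interactively) is to keep only the largest connected component, which often removes the scan bed and small soft-tissue specks. A minimal sketch using NumPy and SciPy (the array and connectivity defaults are illustrative, not part of the Slicer workflow above):

```python
import numpy as np
from scipy import ndimage

def largest_island(binary_volume):
    # Label connected components in the boolean volume.
    labeled, num_features = ndimage.label(binary_volume)
    if num_features == 0:
        return binary_volume
    # Count voxels per label; label 0 is background, so ignore it.
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0
    # Keep only the voxels belonging to the largest component.
    return labeled == sizes.argmax()
```

Note that bones separated by joints may end up as separate islands, so for a full thorax you may need to keep the N largest components rather than just one.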
If you need to do something more complex, then you basically need to figure out a workflow that works well on some of your example images. If you are lucky and it requires no manual decisions on a per-image basis, you can fully automate it and try it on the rest of your 600 images. If it still requires user judgment on every image, you can still create a facilitated workflow that speeds up processing of the remaining images, but that is the best you can do.
I am developing computational Human models and need to come up with an average thorax geometry for males and females for injury analysis.
Also, the link you sent does not say how to store the CT-AAA preset output as a 3D array that can be opened with NumPy rather than Slicer. Is that possible? Thanks
import numpy as np
import slicer

imageVolumeNode = slicer.util.getNode('ThoraxImage')  # replace with the name of your volume
thresholdValue = 350  # threshold in HU; adjust as needed
imageNumpyArray = slicer.util.arrayFromVolume(imageVolumeNode)
threshIm = imageNumpyArray > thresholdValue  # boolean mask of voxels above threshold
with open('numpyOutput.npy', 'wb') as file_handle:
    np.save(file_handle, threshIm)
This will threshold your entire volume at 350 HU and save the resulting boolean NumPy array to “numpyOutput.npy” (which can be loaded back with numpy.load()). Note that this will not crop out the table, as you might have done in the volume rendering. To do that, you can use the Crop Volume module with the same ROI you used in the Volume Rendering module. Then, in the code snippet above, use the name of the cropped volume node rather than the original volume (or you can simply have the cropped volume replace the original by selecting it as the output volume in Crop Volume instead of “Create new volume”).
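If you prefer to do the table removal in NumPy rather than with Crop Volume, one simple alternative is an axis-aligned crop of the array before thresholding. The index ranges below are made-up placeholders; you would read the real bounds off your ROI:

```python
import numpy as np

def crop_and_threshold(volume_array, threshold, z_range, y_range, x_range):
    # Arrays from slicer.util.arrayFromVolume are ordered
    # (slice, row, column), i.e. (z, y, x).
    cropped = volume_array[z_range[0]:z_range[1],
                           y_range[0]:y_range[1],
                           x_range[0]:x_range[1]]
    # Return a boolean mask of the cropped region above the threshold.
    return cropped > threshold
```

An axis-aligned crop only works if the table sits entirely outside the box around the patient; for oblique geometry you would still want Crop Volume with an ROI.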