Is it segmenting maxillofacial CT as well?
Kind regards,
T.H. van den Berg, Periodontist NVvP, Implantologist NVOI
Zijlweg 144
2015 BH Haarlem
+31 23 542 01 88
Please contact the developers on the TotalSegmentator GitHub. You may open an issue with sample data there.
If you have segmented sample data, then contacting the TotalSegmentator developers would make sense. They may be able to train a network based on your data and make it openly available.
If you are looking for openly available automated segmentation tools for CMF images, then you might find these Slicer extensions useful:
Hi @Tijl,
Great question. Have you considered using MONAI Label for this?
Here is a project we’ve created with MONAI Label that may be of interest: the NA-MIC Project Weeks website.
@dgmato can also comment on this
@diazandr3s we need to figure out a way to run MONAILabel (at least inference) with a single click in Slicer. Many users would never even consider running commands in a terminal or installing Docker, so for all these people MONAILabel is currently inaccessible. The same way nnU-Net can be installed in Slicer’s Python environment, we could install MONAI and its dependencies from the Slicer module, then select a model, download it, and start a server - all from the GUI, in a few clicks. What do you think? Do you know of any plans in this direction?
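A minimal sketch of what the installation part of such a one-click flow could wrap (assuming Slicer’s bundled Python can host MONAI Label, which would still need to be verified) might look like this:

```python
# Minimal sketch (not an existing feature): pull MONAI Label into Slicer's
# bundled Python environment, the same way nnU-Net can be installed there.
# A real module would wrap this behind a button and show progress in the GUI.
import slicer

try:
    import monailabel  # already available in Slicer's Python environment?
except ImportError:
    slicer.util.pip_install("monailabel")  # runs pip inside Slicer's own Python
    import monailabel

from importlib.metadata import version
print("MONAI Label", version("monailabel"), "is available in Slicer")
```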
It is not quite what you are aiming for, @lassoan, but we have successfully used MONAI Bundle to integrate models from the MONAI Model Zoo during the last Project Week in Montreal.
This is a good idea, @lassoan.
Having MONAI Label and the Bundle models available for inference only should be possible.
We could use a modified version of Slicer’s MONAI Label module to run the inference commands in the background.
As @rbumm suggested, Bundles and the MONAI Label models can be easily run with a single command.
For the full experience (inference, training and active learning), users can be directed to the instructions for starting the MONAI Label server.
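As a rough illustration only (not existing module code), running the inference command as a background process so the Slicer GUI stays responsive could be as simple as:

```python
# Sketch only: run a MONAI Label inference command as a background process so
# that the Slicer GUI is not blocked while the model runs.
import subprocess

# Placeholder: the actual MONAI Label inference command would go here.
inference_command = ["monailabel", "--help"]

proc = subprocess.Popen(inference_command)

# Later, for example from a periodic GUI callback, check whether it finished:
if proc.poll() is not None:
    print("Inference finished with exit code", proc.returncode)
```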
What do you think, @lassoan? We could start with the models for a single modality.
It all sounds good.
Could you provide a complete list of steps that users need to do manually now for a specific model? I would check what it would take to perform those steps automatically in Slicer’s Python environment.
Sure!
The steps are the following:
- Create a Python environment and install MONAI Label:
  pip install monailabel
- Download the apps: radiology and/or monaibundle:
  monailabel apps --download --name monaibundle --output ./
- Within the app folder, execute the main Python file, specifying the model and the folder where the image(s) are located. Example command for the radiology app:
  python main.py -s /tmp/MONAILabelTest/sampleTest/ --model segmentation --test infer
- Show the predictions saved in the test_labels folder.
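To connect this with @lassoan’s question above, here is a rough, untested sketch of driving the download and inference steps from Slicer’s Python console. It assumes MONAI Label is already installed there and that the monailabel console script is reachable on PATH; the folders are placeholders:

```python
# Rough sketch: scripting the app download and inference steps from Slicer's
# Python console. Assumes MONAI Label is already installed (for example with
# slicer.util.pip_install("monailabel")) and that the "monailabel" console
# script is on PATH; the folders are placeholders to adapt.
import os
import subprocess

workDir = os.path.expanduser("~/MONAILabel")   # placeholder working folder
imageDir = "/tmp/MONAILabelTest/sampleTest/"   # placeholder image folder
os.makedirs(workDir, exist_ok=True)

# Download the radiology app (same as the `monailabel apps --download ...` step above)
subprocess.check_call(
    ["monailabel", "apps", "--download", "--name", "radiology", "--output", workDir]
)

# Run inference with the "segmentation" model; "python" stands for whichever
# interpreter has MONAI Label installed (e.g. Slicer's PythonSlicer)
appDir = os.path.join(workDir, "radiology")
subprocess.check_call(
    ["python", "main.py", "-s", imageDir, "--model", "segmentation", "--test", "infer"],
    cwd=appDir,
)
# The predictions should then appear in the test_labels folder mentioned above
```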
Here are two videos showing how this can be done. I assumed that the environment with MONAI Label is already created.
For the radiology app:
For the monaibundle app:
In addition to the typical MONAI Label models (deepedit, segmentation, vertebra), users can also run the following models from the Model Zoo:
spleen_ct_segmentation
pancreas_ct_dints_segmentation
spleen_deepedit_annotation
swin_unetr_btcv_segmentation
renalStructures_UNEST_segmentation
wholeBrainSeg_Large_UNEST_segmentation
prostate_mri_anatomy
lung_nodule_ct_detection
wholeBody_ct_segmentation
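For example, assuming the monaibundle app’s main.py accepts the same arguments as the radiology example above (an assumption worth verifying against the app’s README), running one of these bundles on the sample images would presumably look like:
  python main.py -s /tmp/MONAILabelTest/sampleTest/ --model wholeBody_ct_segmentation --test infer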
Please let me know your thoughts. Happy to explain more about any of the steps presented here in the videos.