I was able to run MONAI Label on a server and connect through Slicer, and load images that I added to the server (I haven't tried adding segmentations to the server, though).
I have a dataset that I have used for training nnUNet, prepared as described on their GitHub page.
Now I wanted to use that dataset with MONAI Label. I have a few questions:
1- Is it possible to use my nnUNet-trained model as a basis for what I can improve interactively in MONAI Label? (I imagine that is not straightforward, if possible at all; MONAI's DynUNet is similar to nnUNet, but there are still a lot of differences.) I see an issue about this in the MONAI Label GitHub repository, but it seems unresolved.
2- If I want to add my dataset (images and binary labels, all .nii.gz, in the format nnUNet digests) to the MONAI Label server, is there a tutorial that helps with that? I believe that even then (let's say I use DeepEdit) I will need to change the self.labels dictionary somewhere, but I couldn't find it in DeepEdit's main.py.
3- If it's not possible to add my already-finished segmentations to the server so that training does not start from zero, what should I change in the model?
There are several threads on this topic here on the Slicer Discourse; maybe even this recent post by @diazandr3s can get you started. We have been using the ML segmentation model for a while in similar approaches.
You should then have downloaded a demo dataset; in it you will find directories that you can duplicate for your own project and populate with your pre-analyzed data. Another, maybe safer, way would be to load each image and its corresponding label into 3D Slicer first, upload each case to the MONAI Label server via the extension's "Submit label" mechanism, and then train.
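To illustrate the layout the MONAI Label datastore expects (images at the top level of the studies folder, ground-truth labels under `labels/final/` with matching file names), here is a minimal sketch that copies an nnUNet-style dataset (`imagesTr`/`labelsTr`) into such a folder. The function name and paths are my own, and the `_0000` handling assumes nnUNet's single-channel naming convention:

```python
import shutil
from pathlib import Path

def nnunet_to_monailabel(src: Path, dst: Path) -> None:
    """Copy an nnUNet-style dataset into a MONAI Label studies folder:
    images at the top level, labels under labels/final/."""
    (dst / "labels" / "final").mkdir(parents=True, exist_ok=True)
    for img in sorted((src / "imagesTr").glob("*.nii.gz")):
        # nnUNet appends a channel suffix (_0000) to image names, while
        # MONAI Label matches labels to images by identical file names,
        # so strip the suffix when copying the image.
        name = img.name.replace("_0000.nii.gz", ".nii.gz")
        shutil.copy(img, dst / name)
        label = src / "labelsTr" / name
        if label.exists():
            shutil.copy(label, dst / "labels" / "final" / name)
```

You can then point the server at the converted folder with something like `monailabel start_server --app apps/radiology --studies <converted-folder> --conf models segmentation` (check the exact flags against your MONAI Label version).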
Thanks for the ping @rbumm
This is a very good question @S_Arbabi. At the moment, it is not straightforward to use nnUNet models in MONAI Label. However, any model you have trained using nnUNet, you could also train using MONAI Label.
As an example of this, we’ve trained a whole-body CT segmentation model using the Total Segmentator dataset: model-zoo/models/wholeBody_ct_segmentation at dev · Project-MONAI/model-zoo · GitHub
Here is the video where I briefly talk about the technical details of this implementation:
I used a single SegResNet network and got comparable results. If needed, you could also implement multiple networks and ensemble them, as done in the nnUNet framework.
Here you can see how the dataset should be prepared to use MONAI Label: MONAI Label Workshop - Project Week 38 - YouTube
I’d recommend you start with the segmentation model available on the latest MONAI Label version: MONAILabel/sample-apps/radiology at main · Project-MONAI/MONAILabel · GitHub
Update these label names according to your task: MONAILabel/segmentation.py at main · Project-MONAI/MONAILabel · GitHub
and set this to false so the training process starts from scratch: MONAILabel/segmentation.py at main · Project-MONAI/MONAILabel · GitHub
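For orientation, a minimal sketch of what those two edits amount to. The class names and indices below are placeholders for your own task, and `use_pretrained_model` is the conf key as I understand it in the radiology app; verify both against the segmentation.py in your MONAI Label version:

```python
# Sketch of the two edits in the radiology app's segmentation.py
# (not the verbatim file).
# 1) Map your own class names to the integer values used in your
#    ground-truth label volumes:
labels = {
    "background": 0,
    "spleen": 1,  # placeholder classes; replace with your task's labels
    "liver": 2,
}

# 2) Disable the pretrained-weights download so training starts from
#    scratch; in the app this value arrives through self.conf.
conf = {"use_pretrained_model": "false"}
use_pretrained = conf.get("use_pretrained_model", "true").lower() == "true"
```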
Hope this helps,
With the implementation of monai.apps.nnunet.nnUNetV2Runner, I believe it should now be easier to integrate nnUNet into MONAI Label.
Any ideas for a faster start? @diazandr3s
Very good question!
The possibility of training the nnUNet in MONAI is indeed a great benefit for the community.
However, there is still work to do to use the nnUNet in MONAI Label. As you may know, nnUNet is not a network architecture, but rather a semantic segmentation method that automatically adapts to a given dataset. It involves multiple algorithms such as data fingerprints, ensembling using different network configurations, etc.
The integration of this method in MONAI Label isn’t a straightforward task.
If you don’t mind, please open an issue or discussion in the MONAI repo (Issues · Project-MONAI/MONAI · GitHub) so others can also comment.
This might sound like a basic question that's been answered before, but I'm kind of stuck.
I want to train a model to detect epicardial adipose tissue (EAT) on CCTA (I can't seem to find anything reliable at the model … happy to be corrected if anyone can save me from spending my Christmas break segmenting).
I think the best course of action is to use the pre-trained wholeBody_ct_segmentation model along with DeepEdit to get the cardiac structures, and then manually segment EAT and gradually train my model after a few hundred cases.
Does this sound like a reasonable approach (considering the complexity of EAT)? If so, how do I add EAT as a new label if I'm using a pre-trained model?
Thanks for your question, which is important and relevant.
What I can contribute: you can use the DeepEdit or segmentation model, and you do not need to label a few hundred cases before you train. You can start training after about 20, maybe even after five labeled cases, and then use MONAI Label to infer the "next dataset" that you load. I have worked exclusively with the "segmentation" technique of MONAI Label, and my experience has been good. You can find more information on DeepEdit vs. segmentation in the ChatGPT conversation here.
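On the earlier question of adding EAT as a new label: as a hedged sketch (label names and indices here are hypothetical), the label dictionary gains one entry at the next free index, and since the network's output channels must equal the number of classes, the final layer changes shape and has to be (re)trained rather than loaded from the pretrained checkpoint:

```python
# Hypothetical label map: a pretrained cardiac structure plus the new
# EAT class appended at the next free index.
labels = {
    "background": 0,
    "heart": 1,                      # placeholder pretrained class
    "epicardial_adipose_tissue": 2,  # new class for this task
}

# The segmentation network's output channels must match the number of
# classes, so adding a label means the final layer cannot simply reuse
# the pretrained weights.
out_channels = len(labels)
```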
Good question, @sfat.
Are you still working on this task? Would you be able to share a segmented case? I’d like to see an example so I can comment on more details of the best model/approach to use.
Happy to meet and further discuss an approach.