Could you please provide more specific information on the differences between the DeepEdit and Segmentation models? I am not very clear about the differences between these two in practical application. Thanks
ChatGPT's response to your question is:
MONAI Label, an open-source tool focused on medical imaging AI, provides various functionalities including annotation and AI-assisted segmentation. Two key models in MONAI Label are DeepEdit and Segmentation, and understanding their differences is crucial for applying them effectively.
DeepEdit Models:
- Purpose: DeepEdit is designed for interactive annotation and editing. It’s particularly useful in scenarios where manual annotation is challenging or time-consuming.
- Functionality: It combines the strengths of AI models and human expertise. Users can provide sparse annotations (like a few clicks), and the model iteratively refines the segmentation based on this input (see the sketch after this list).
- Use Cases: DeepEdit is ideal for complex cases where detailed, precise annotations are required, and where a fully automated approach might not be sufficient. It’s often used in scenarios where the anatomy or pathology varies significantly between patients.
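To make the interactive mechanism concrete, here is a minimal conceptual sketch in plain PyTorch (not the actual MONAI Label API; the shapes, click coordinates, and Gaussian click encoding are illustrative assumptions) of how user clicks can be fed to the network as extra input channels alongside the image:

```python
import torch

# Conceptual sketch only (not the actual MONAI Label API): DeepEdit-style
# networks take the image plus extra "guidance" channels that encode user
# clicks. The Gaussian click encoding below is an illustrative assumption.

def clicks_to_heatmap(shape, clicks, sigma=2.0):
    """Render (x, y, z) click coordinates as a Gaussian heatmap volume."""
    grid = torch.stack(torch.meshgrid(
        *[torch.arange(s, dtype=torch.float32) for s in shape],
        indexing="ij"))
    heat = torch.zeros(shape)
    for c in clicks:
        d2 = sum((grid[i] - c[i]) ** 2 for i in range(3))
        heat = torch.maximum(heat, torch.exp(-d2 / (2 * sigma ** 2)))
    return heat

shape = (64, 64, 64)
image = torch.rand(shape)                      # dummy normalized volume
fg = clicks_to_heatmap(shape, [(32, 30, 35)])  # click inside the structure
bg = clicks_to_heatmap(shape, [(5, 5, 5)])     # click outside it

# Image + foreground-click + background-click channels form the input;
# each new click re-runs inference so the segmentation is refined.
net_input = torch.stack([image, fg, bg]).unsqueeze(0)  # (1, 3, 64, 64, 64)
print(net_input.shape)
```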
Segmentation Models:
- Purpose: These models are tailored for automated segmentation tasks. They are designed to work with minimal to no human intervention.
- Functionality: Once trained, segmentation models automatically segment images based on the patterns learned from the training dataset. They don't require, or usually allow, interactive user input during segmentation (see the sketch after this list).
- Use Cases: Segmentation models are suitable for tasks with well-defined patterns and where the variance between cases is relatively low. They are used in high-throughput environments where manual annotation is impractical.
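For contrast, here is a minimal sketch of fully automated inference in MONAI (the network, channel counts, and roi_size are placeholders, not the actual MONAI Label model): once trained, the model needs only the volume.

```python
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet

# Sketch of fully automated inference: once trained, the model segments
# a volume with no user interaction. Network, channels, and roi_size
# here are placeholders, not the actual MONAI Label model.
net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
           channels=(16, 32, 64, 128), strides=(2, 2, 2))
net.eval()

volume = torch.rand(1, 1, 192, 192, 96)  # dummy CT volume (N, C, H, W, D)
with torch.no_grad():
    logits = sliding_window_inference(volume, roi_size=(96, 96, 96),
                                      sw_batch_size=1, predictor=net)
labels = logits.argmax(dim=1)  # discrete label map, no clicks involved
print(labels.shape)
```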
In practical application, the choice between DeepEdit and Segmentation models depends on the specific requirements of your task. If your work requires detailed, user-guided annotation in complex or highly variable images, DeepEdit is the better choice. On the other hand, if you are dealing with a large volume of images with consistent patterns, a Segmentation model would be more efficient and effective.
Remember that the success of both models heavily depends on the quality and representativeness of the training data. For specialized medical imaging tasks, it’s crucial to have a well-curated dataset that reflects the diversity and complexity of the cases you intend to handle.
Hi, I'm running into a strange problem with MONAI Label on Ubuntu. After successfully pip installing it, whatever command I pass to monailabel on the command line, I get the same response:
Using PYTHONPATH=/home/xxx::/home/xxx/yy/packages_zzzl:/home/xxx/yyy/packages_zzz
I've tried installing from the git repo but I'm still getting the same response. Can someone help with this, please?
I figured it out … I needed to set a python alias pointing to python3.
This is amazing! Thanks for sharing this, @rbumm
Thank you for your professional explanation. This will be of great help to me.
Now I am using MONAI Label to train a segmentation model for segments 1-5 of the lumbar spine, excluding the bone cortex.
However, the training results with the Segmentation model were not satisfactory, so I switched to DeepEdit and trained from scratch. When the training accuracy reached 80%, my dataset's train loss was around 0.0010 and the accuracy no longer improved. I also don't know how to use SmartEdit and Scribbles for interactive annotation and editing.
The comparison between the manually annotated images used for training and the images automatically segmented by the model trained with DeepEdit is as follows.
It can be seen that the automatically segmented images have a lot of jagged edges and are not very accurate. I would like to ask whether the segmentation model can reach the quality of my manual segmentation, how to improve the segmentation quality, and how to fix the jagged edges.
How many training datasets did you use to achieve this result?
There are only four, but I have increased the number by copying them and rotating the angle, though I am not sure whether that is effective. Is it necessary to have at least 20 distinct training datasets, like the sample datasets?
With such a low number of samples, that's probably what you can expect. Also, I believe MONAI Label does automatic augmentation (random rotations and other transformations of the training data), so you don't have to do it manually.
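For reference, MONAI provides dictionary-style random transforms of exactly this kind; a training pipeline might chain something like the following (a sketch with dummy data, not the exact transform list the MONAI Label apps use):

```python
import torch
from monai.transforms import (
    Compose, RandFlipd, RandRotated, RandShiftIntensityd, RandZoomd,
)

# Sketch of the kind of on-the-fly augmentation MONAI applies during
# training; the exact transforms/probabilities in the MONAI Label apps
# may differ. Dummy tensors stand in for a loaded image/label pair.
sample = {
    "image": torch.rand(1, 64, 64, 64),                   # (C, H, W, D)
    "label": (torch.rand(1, 64, 64, 64) > 0.5).float(),
}
augment = Compose([
    RandRotated(keys=("image", "label"), range_x=0.3, prob=0.5,
                mode=("bilinear", "nearest")),
    RandFlipd(keys=("image", "label"), spatial_axis=0, prob=0.5),
    RandZoomd(keys=("image", "label"), min_zoom=0.9, max_zoom=1.1,
              prob=0.5, mode=("trilinear", "nearest")),
    RandShiftIntensityd(keys="image", offsets=0.1, prob=0.5),
])
out = augment(sample)  # a new random variant of the sample every epoch
print(out["image"].shape)
```

Because these run on the fly, every epoch sees a fresh random variant of each volume, which is why manual copy-and-rotate duplication adds little.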
Is there a reason you are not using TotalSegmentator to do this?
Thank you very much for your answer.
I will increase the amount of data in my training datasets in the future.
TotalSegmentator can automatically segment many organs with high quality, but I need to segment the lumbar spine L1-5 in a cylindrical shape like the one in the picture, without including the cortical bone. My understanding of TotalSegmentator is that it cannot achieve the desired effect. Perhaps I should still try it and see how the segmentation turns out.
Please note that when using the deepedit model, your entire volume will be resampled to the target spatial_size (e.g. 128x128x128 by default). You would need to change that in the configuration in this line.
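For illustration, a minimal sketch of how such a conf value is typically read (the key name and default mirror what the radiology sample app's deepedit config looks like, but the exact file and layout depend on your monailabel version, so treat this as an assumption):

```python
import json

# Illustrative sketch (names and defaults mirror the radiology sample
# app's deepedit config, but the exact file/keys are an assumption;
# check your monailabel version). `conf` stands in for values passed
# when starting the server.
conf = {"spatial_size": "[128, 128, 192]"}  # override the 128^3 default

spatial_size = json.loads(conf.get("spatial_size", "[128, 128, 128]"))
print(spatial_size)  # the whole volume is resampled to this grid
```

If your version supports it, the same value can usually be passed at server start via a --conf option instead of editing the file.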
By the way, even though ChatGPT's answer is helpful, it omits an important distinction between the deepedit and segmentation models: in the deepedit model, the target_spacing parameter is ignored and the entire volume is resampled to a voxel grid of spatial_size, which can lead to weird anisotropy issues if the volume is much larger in one of the spatial directions.
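A quick worked example with made-up but plausible torso-CT numbers shows the effect:

```python
# Worked example (made-up but typical torso CT geometry): resampling a
# non-cubic volume to a cubic spatial_size makes the effective voxel
# spacing anisotropic, regardless of the original spacing.
shape = (512, 512, 600)        # voxels (x, y, z)
spacing = (0.8, 0.8, 1.0)      # mm per voxel -> FOV of ~410x410x600 mm
for target in [(128, 128, 128), (128, 128, 192)]:
    eff = [s * n / t for s, n, t in zip(spacing, shape, target)]
    print(target, "->", [round(e, 2) for e in eff])
# (128, 128, 128) -> [3.2, 3.2, 4.69]  voxels stretched along z
# (128, 128, 192) -> [3.2, 3.2, 3.12]  roughly isotropic again
```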
To account for that, e.g. in your case of torso CT scans, I would probably try something like 128x128x192, to account for the larger FOV in the axial direction. Of course, you will need much more VRAM on your GPU to train the model.
Alternatively, to avoid staircase artifacts, you can use the segmentation model and specify both spatial_size and a target resolution, e.g. 1x1x1 mm. Hope that helps!
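In MONAI terms, that route looks roughly like this (a sketch; the actual transforms in the segmentation app may differ):

```python
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, Spacingd

# Sketch of the segmentation-model preprocessing: resample every volume
# to a physical target spacing (e.g. isotropic 1 mm) instead of forcing
# it onto one fixed cubic grid; training/inference then runs on
# fixed-size ROIs. The actual app transforms may differ.
preprocess = Compose([
    LoadImaged(keys=("image", "label")),        # keys are paths on disk
    EnsureChannelFirstd(keys=("image", "label")),
    Spacingd(keys=("image", "label"), pixdim=(1.0, 1.0, 1.0),
             mode=("bilinear", "nearest")),
])
roi_size = (96, 96, 96)  # patch size for training / sliding-window inference
```

Because the grid is then defined by physical spacing rather than a fixed voxel count per axis, elongated volumes keep their aspect ratio, which is what avoids the staircase artifacts.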
(Side note: I agree with @muratmaga - 4 volumes is probably not sufficient to train for this problem.)
Thank you for pointing this out.
This has given me a deeper understanding of the difference between Deepedit and Segmentation.
It also taught me the meaning and purpose of the target_spacing and spatial_size parameters, even though they were still at their original settings of 1.0 x 1.0 x 1.0 and 128 x 128 x 128.
Next, I will increase the number of volumes and try Segmentation and DeepEdit separately, modifying some of their parameters, such as changing spatial_size to 128x128x192 or 256x256x384. Then I will study the results achieved by both, hoping to reduce the jagged edges.
Another small issue: I found that there seems to be no spatial_size parameter in the Segmentation model, but rather a parameter called roi_size. This should be the difference between the two that you mentioned, right?
Thank you for mentioning this distinction between the settings for deepedit and the segmentation model. I have a question: if I work with scans that are roughly the same volume (but slightly different), is that a problem? I assume the "spatial_size" parameter in the segmentation model is set for the whole model (rather than on a per-scan basis).
I am planning to use MONAI Label for a future project to train segmentation of CT images, and the cropping of the images might vary slightly.
Is it necessary to crop all training volumes to the same size?
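For context, here is a sketch of what I assume happens (the transform and its parameters are my guess, not necessarily what the app uses): fixed-size patches are cropped from each volume, so the volumes themselves would not need identical shapes.

```python
import torch
from monai.transforms import RandCropByPosNegLabeld

# Sketch of my assumption (transform name and parameters are my guess,
# not necessarily what the MONAI Label app uses): fixed-size patches are
# cropped from each volume, so volumes would not need identical shapes.
sample = {
    "image": torch.rand(1, 180, 200, 150),                 # one scan's (C, H, W, D)
    "label": (torch.rand(1, 180, 200, 150) > 0.9).float(),
}
crop = RandCropByPosNegLabeld(
    keys=("image", "label"), label_key="label",
    spatial_size=(96, 96, 96), pos=1.0, neg=1.0, num_samples=2,
)
patches = crop(sample)  # every patch is 96x96x96 regardless of scan size
print([p["image"].shape for p in patches])
```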
Best,
Eva