How to start with MONAI Label for new models

We would like to use MONAI Label to generate/refine existing segmentations of fetal mice. Given that there is no pre-existing model to do that, how does one go about this? We have about 20 or so already segmented scans.

At some point MONAI Label was updated very frequently. Due to the environment we work in, frequent version changes and code updates are not feasible for us. Is it stable enough to use in a production environment on a regular basis?


I haven’t tried it all myself, but here are recent materials about MONAILabel with Slicer for segmentation. We are arranging to hold a workshop on this material on January 12, 2022, 2-4pm EST as part of project week where at least a few projects will try to apply it.


@pieper Steve, is that date fixed? Not one week later, given that PW starts Jan 17th? Thanks


Yes, we scheduled it the week before Project Week on purpose so that we could block out 2 hours for it without cutting into other work. Also we wanted people to get up to speed in advance so that they could apply it to their own data during PW.


Thanks Steve. I managed to get the server running locally at some point; what I was actually looking for was instructions on how to get started with a new model, as opposed to using a pretrained model (e.g., spleen segmentation). All the documentation I have found so far refers to using existing models (e.g., Quickstart — MONAI Label 0.3.0 Documentation).


Hi Murat! Good point, the existing documentation so far assumes a pretrained model. However, in my case this was not a problem. I used MONAILabel for inner ear structure segmentation in microCT (128^3 cubic volumes). I started with 0 annotated volumes, but nonetheless used the pre-trained left-atrium DeepEdit model (different anatomy AND modality), because 1) I wanted a quick start without much fiddling with the DeepEdit app code, and 2) I thought it might be beneficial that the early encoder layers already have some “clue” about elementary 3D patch geometries. I’m not sure whether point 2) actually helped (I didn’t try the alternative), but to my delight, DeepEdit’s UNet model quickly started snapping to the anatomy. After only 2-3 manual annotations, I got surprisingly good segmentation guesses from the model.

In your case, you already have 20 pre-annotated volumes. You can indicate this in the datastore.json file. Upon first start of DeepEdit, instead of annotating right away, you could manually trigger the first pre-training. The resulting model might already be quite “valuable”. Let me know if I can help.


Hi @muratmaga,

There are two ways of using a MONAI Label App (i.e. DeepEdit, DeepGrow, Segmentation) without downloading/using the available pretrained models.

The first approach is adding the flag --conf use_pretrained_model false when starting the server. This means you should write something like this:

monailabel start_server -a /PATH_2_APP/ -s /PATH_2_IMAGES/ --conf use_pretrained_model false

The second approach, and the safer one, is changing the flag directly in the main file of your App. So, if you want to use a DeepEdit App, change this line to false.

With regards to the already annotated images, there are two things to consider: first, images and corresponding labels should have the same name; second, already annotated labels should be placed inside the subfolder /folder_images/labels/final.

Something like this:

- folder_images
    - image_1
    - image_2
    - image_3
    - labels
        - final
            - image_1
            - image_2
If this is not clear, slide 9 of this presentation shows an example of how to organise the images and already annotated labels before starting the server.

Just out of curiosity, are the scans 3D? Which MONAI Label App are you planning to use (i.e. DeepEdit, DeepGrow, Segmentation)? How many segments do the labels have? Is this a single-label or multilabel task?

As @mangotee nicely commented, please don’t hesitate to ask, we’re happy to help :slight_smile:


Thanks @diazandr3s. Yes, the images are 3D scans of fetal mice, and the labels are multilabel (about ~30 labels). Volumes are about 200x400x400 voxels. Labels and volumes have identical prefixes but different formats: volumes are in NRRD and labelmaps are in nii.gz (mostly because we want to keep the labelmaps compressed, but not necessarily the volumes). Is that an issue, or do they both need to be in the same format?

Hi @diazandr3s. Thank you for interesting work and your contribution to the Slicer community.

I have a question regarding MONAI-Label: Let’s say I have a few dozen CTs of both feet, I have a segment of the talus bone for each foot (so there are two segments, i.e. labels, per CT), and I have a frame that defines the orientation and position (i.e. a linear transform) of each talus.

One way to imagine the usefulness of this transform information: if you apply the inverse of this transform to the talus segment, you get a zero-centered talus with a normalized orientation, which allows comparing measurements between taluses of different CTs. After this you can also mirror the left taluses, so that all taluses (right ones and left-mirrored ones) are oriented with the lateral side of the bone on the right.
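As a sketch of that normalization idea (my own illustration; it assumes the frame is stored as a 4x4 homogeneous matrix and the segment is represented as Nx3 point coordinates):

```python
import numpy as np

def normalize_points(points, transform, mirror_lr=False):
    """Map Nx3 points through the inverse of a 4x4 talus transform,
    yielding zero-centered, orientation-normalized coordinates.
    Optionally mirror left taluses across the first (lateral) axis."""
    inv = np.linalg.inv(transform)
    homog = np.c_[points, np.ones(len(points))]  # Nx4 homogeneous coords
    out = (homog @ inv.T)[:, :3]
    if mirror_lr:
        out[:, 0] *= -1  # flip so the lateral side ends up on the right
    return out
```

For example, with a transform that is a pure translation by (5, 0, 0), the point (5, 0, 0) maps to the origin, as expected for zero-centering.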

Would it be possible to feed MONAI-Label the transform information, so when the model is trained per input CT it returns a label for each talus and the transform defining its position and orientation?


This is a nice use-case for MONAI Label :slight_smile:
Do you mean 30 segments in each label? Just so I understand, what are the segments in each 3D scan?
If the task is multilabel like this one, then I recommend you use the Multilabel DeepEdit App.
My suggestion is not to mix image/label formats. Is it difficult to have both images and labels in the same format?

Thanks, @mau_igna_06!
Yes, you should be able to use MONAI Label for this task. You can easily define transforms that are applied to the images for training or inference.
I’m happy to show you how. For instance, I’m currently creating custom transforms for a multilabel task. In here I’m defining the transforms I used for my App.
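For readers unfamiliar with the pattern: MONAI’s transforms operate on dictionary samples like `{"image": ..., "label": ...}` and are chained together. Below is a dependency-free numpy sketch of that dictionary-transform idea (a minimal stand-in of mine, not MONAI’s actual API):

```python
import numpy as np

# Each transform takes a dict sample and returns a modified copy,
# mirroring the style of MONAI's dictionary-based transforms.
def normalize_intensity(sample):
    img = sample["image"].astype(np.float32)
    sample = dict(sample)
    sample["image"] = (img - img.mean()) / (img.std() + 1e-8)
    return sample

def flip_lr(sample):
    sample = dict(sample)
    for key in ("image", "label"):
        sample[key] = np.flip(sample[key], axis=-1)
    return sample

def compose(*transforms):
    """Chain transforms left to right, like a Compose pipeline."""
    def apply(sample):
        for t in transforms:
            sample = t(sample)
        return sample
    return apply

pipeline = compose(normalize_intensity, flip_lr)
```

In a real App you would use MONAI’s own transform classes instead; the point here is just the dict-in, dict-out composition.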


Yes, it is a multi-label task exactly like the one you show, only in mice. We keep the labels and volumes together in our current workflow, so keeping them in separate formats with the same file name is a nice trick; it gives us the benefit of compressing the labels (which compress really well). But it is not a big deal, I can switch them to the same format.

I will give it a try, thanks for your pointers. I am excited to try.


Actually, I got stuck right at the beginning. I am following these instructions

and at the monailabel --help step I am getting

maga@magalab-ML:~/.local/bin$ ./monailabel
Using PYTHONPATH=/home/maga:
Python 2.7.18
/usr/bin/python: No module named monailabel

It is not finding python3. I tried setting PYTHONPATH=/usr/bin/python3 in the shell prior to executing monailabel, but that didn’t have any effect.

Any suggestions?

The other issue I am having: it is not clear from this instruction set whether the installation needs to be done as a privileged user or whether it is possible to run as a normal user. I hardcoded the python3 path and now I can use the monailabel file, but when I do
monailabel apps --download --name deepedit --output apps

it tries to write to /usr/monailabel, which as a normal user I can’t. Is there a main config file where I can specify these paths? Or do all of these instructions assume that installation is done as root (or admin)?

This is where I stand as a normal user install.

  1. Hard coded the monailabel script to
  2. After this, when I type:
    ~/.local/bin/monailabel apps --prefix ~/.local
    this is what I get
Using PYTHONPATH=/home/maga:
Python 2.7.18
Available Sample Apps are: (/home/maga/.local/monailabel/sample-apps)
Deepedit based Apps
  deepedit                      : /home/maga/.local/monailabel/sample-apps/deepedit
  deepedit_multilabel           : /home/maga/.local/monailabel/sample-apps/deepedit_multilabel

Deepgrow based Apps
  deepgrow                      : /home/maga/.local/monailabel/sample-apps/deepgrow

Standard Segmentation Apps
  segmentation                  : /home/maga/.local/monailabel/sample-apps/segmentation
  segmentation_left_atrium      : /home/maga/.local/monailabel/sample-apps/segmentation_left_atrium
  segmentation_spleen           : /home/maga/.local/monailabel/sample-apps/segmentation_spleen
  3. Spleen dataset downloaded and installed fine:
~/.local/bin/monailabel datasets --download --name Task09_Spleen --output datasets
Using PYTHONPATH=/home/maga:
Python 2.7.18
Directory already exists: datasets/Task09_Spleen
  4. Can’t get the deepedit app to download, nor start the server:
~/.local/bin/monailabel apps --download --name deepedit --output apps
Using PYTHONPATH=/home/maga:
Python 2.7.18
Traceback (most recent call last):
  File "/usr/lib/python3.8/", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/", line 87, in _run_code
    exec(code, run_globals)
  File "/home/maga/.local/lib/python3.8/site-packages/monailabel/", line 333, in <module>
  File "/home/maga/.local/lib/python3.8/site-packages/monailabel/", line 108, in run
  File "/home/maga/.local/lib/python3.8/site-packages/monailabel/", line 161, in action_apps
    apps = os.listdir(apps_dir)
FileNotFoundError: [Errno 2] No such file or directory: '/usr/monailabel/sample-apps'

~/.local/bin/monailabel start_server --app apps/deepedit --studies datasets/Task09_Spleen/imagesTr
Using PYTHONPATH=/home/maga:
Python 2.7.18
APP Directory apps/deepedit NOT Found


Hi @muratmaga,

From what I can see, you’re working with Python 2.X (Python 2.7.18). As shown in the instructions, please use Python 3.X. My recommendation is to either create a virtual environment with Python 3.X or to work with the docker container.
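A quick way to confirm which interpreter an environment actually resolves to, before reinstalling anything, is to ask Python itself; note that PYTHONPATH only affects module search, not which interpreter runs, and a pip-installed console script normally runs under the interpreter that installed it:

```python
import sys

# Print the interpreter path and version the current environment uses.
print(sys.executable)
print(".".join(map(str, sys.version_info[:3])))
assert sys.version_info.major == 3, "still on Python 2 -- activate a Python 3 environment"
```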
Once that is done, you could download the DeepEdit App as you’ve already tried:

monailabel apps --download --name deepedit --output apps

or directly download it from the main repo.

It is a python3 environment. I even set up update-alternatives to make python3 the default python. But anyway, I am getting issues with the docker too. This is the output after running the docker command:

root@magalab-ML:/opt/monai# monailabel apps --download --name deepedit --output apps
Using PYTHONPATH=/opt:
Directory already exists: /opt/monai/apps/deepedit
root@magalab-ML:/opt/monai# monailabel datasets --download --name Task09_Spleen --output datasets
Using PYTHONPATH=/opt:
Directory already exists: datasets/Task09_Spleen
root@magalab-ML:/opt/monai# monailabel start_server --app apps\deepedit --studies datasets\Task09_Spleen\imagesTr
Using PYTHONPATH=/opt:
APP Directory appsdeepedit NOT Found

The server seems to work if the full, correct path is provided in the command that invokes it, like this:

monailabel start_server --app /opt/monai/apps/deepedit --studies /opt/monai/datasets/Task09_Spleen/imagesTr


Superb! Thanks for letting us know :slight_smile:
Please remember that if you want to work with multilabel DeepEdit, you should use this App.
We are in the process of merging the single- and multilabel versions into a single App so it is easier for users.
When using the multilabel DeepEdit App, please change the label names here: MONAILabel/ at main · Project-MONAI/MONAILabel · GitHub
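The label declaration in that file is a plain name-to-index mapping; a hypothetical version for a fetal mouse task might look like the following (the structure names and index values here are placeholders of mine, not the app’s defaults):

```python
# Hypothetical label map for a fetal mouse multilabel DeepEdit app;
# replace the dictionary in the app's main.py with your own structures.
label_names = {
    "background": 0,
    "brain": 1,
    "heart": 2,
    "liver": 3,
    "lung": 4,
    # ... continue up to your ~30 structures
}
```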

I do have a comment about documentation.

It would be better if the filepaths in the documentation used the standard convention with forward slash /, as opposed to the Windows convention of backslash \. The forward slash is more universal: most Windows applications (like Python) will still interpret a path written with C:/ correctly, whereas in Unix \ is interpreted as an escape character. It took me a while to figure out that the path errors were not real path errors, but simply paths interpreted incorrectly, because the examples were probably written on Windows and the same instructions apply to both OSes.
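The escape-character point is easy to demonstrate in Python:

```python
from pathlib import PureWindowsPath, PurePosixPath

# Forward slashes are understood by Windows path handling...
assert PureWindowsPath("C:/data/images").parts == ("C:\\", "data", "images")

# ...but a backslash is an escape character elsewhere: in a Python
# string literal, the "\t" in "datasets\timages" is a tab character,
# not a path separator.
assert "datasets\timages"[8] == "\t"

# And POSIX paths treat the backslash as an ordinary character, so
# "apps\deepedit" is one odd file name, not a directory plus a file.
assert PurePosixPath("apps\\deepedit").name == "apps\\deepedit"
```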
