How to start with monailabel for new models

I have successfully run the vanilla docker image and the Slicer MONAI Label extension. The server communicated and found the apps etc. (I didn’t try to execute anything). Now I am modifying the docker setup to make the requested changes so it works with my data, organized in the way requested above (label maps are in labels/final and have identical names and formats as the corresponding volumes).

  1. I ran the docker image, mapping my data directory into the container as /workspace/murat_data:
    sudo docker run -it --rm --gpus all --ipc=host --net=host -v $PWD:/workspace/murat_data projectmonai/monailabel:latest bash

  2. Then I downloaded the deepedit_multilabel app into the container via the command:
    monailabel apps --download --name deepedit_multilabel --output /workspace/apps/

  3. Then I edited main.py in /workspace/apps/deepedit_multilabel to match my label indices:

self.label_names = {
                "left lung": 1,
                "cranial lobe": 2,
                "middle lobe": 3,
                "caudal lobe": 4,
                "accessory lobe": 5,
                "left kidney": 6,
                "right kidney": 7,
                "stomach wall": 8,
                "stomach lumen": 9,
                "medial lobe of liver": 10,
                "left lobe of liver ": 11,
                "right lobe of liver": 12,
                "caudate lobe of liver": 13,
                "left adrenal": 14,
                "right adrenal": 15,
                "rectum": 16,
                "bladder": 17,
                "left ventricle": 18,
                "right ventricle": 19,
                "left thymic rudiment": 20,
                "right thymic rudiment": 21,
                "third ventricle": 22,
                "mesencephalic vesicle": 23,
                "fourth ventricle": 24,
                "cerebral aqueduct": 25,
                "left lateral ventricle": 26,
                "right lateral ventricle": 27,
                "right olfactory bulb": 28,
                "left olfacotory bulb": 29,
                "right thalamus ": 30,
                "left thalamus": 31,
                "right hypothamalus ": 32,
                "left hypothalmus": 33,
                "right septal area": 34,
                "left septal area": 35,
                "left neopallial cortex abd amygdala": 36,
                "right neopallial cortex and amygdala": 37,
                "right striatum": 38,
                "left striatum ": 39,
                "right ventricular zone": 40,
                "left ventricular zone": 41,
                "pons": 42,
                "background ": 0,
}
  4. Then I ran the server via the command:
    monailabel start_server --app /workspace/apps/deepedit_multilabel/ --studies /workspace/murat_data/

which gives me an error about indentation. I created these entries with tabs, but apparently that is not right. Also, in some cases we have very different labels, so I don’t think it makes much sense to hard-code these into the app itself. Can I request a revision such that the labels are read from a parameter file, such as a CSV or JSON, at run time? That way we could run multiple copies of the multilabel app for different tasks with minimal revision.
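As a sketch of what that could look like (this is a hypothetical helper and file name, not current MONAI Label functionality), the app could read the mapping from a JSON file at startup:

```python
import json

def load_label_names(path):
    """Load the label-name -> index mapping from a JSON file.

    Example labels.json:
        {"background": 0, "left lung": 1, "cranial lobe": 2}
    """
    with open(path) as f:
        labels = json.load(f)
    # JSON keys are strings; strip stray whitespace and coerce indices to int
    return {name.strip(): int(index) for name, index in labels.items()}

# In main.py one could then replace the hard-coded dictionary with e.g.:
# self.label_names = load_label_names("/workspace/apps/deepedit_multilabel/labels.json")
```

Swapping in a different labels.json per task would then let several copies of the app run without touching the code.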

Our plan is to deploy this application for a number of biologists that will be using multilabel deepedit for different segmentations tasks. So that kind of flexibility will be important.


Thanks for reporting this @muratmaga. It helps us a lot to improve.
This is definitely something we’ll work on. We’d also like to merge the single-label and multilabel DeepEdit apps to make things easier for the user.
It may not be the reason for the indentation error, but would it be possible to try the app after removing the trailing spaces in the label names (i.e. background, left striatum, right hypothamalus, right thalamus, left lobe of liver)?
Also, please try 4 spaces instead of tabs; some editors handle the mix inconsistently.
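For illustration, the start of the cleaned-up dictionary might look like this (4-space indentation, no trailing spaces in the keys; only the first few labels shown):

```python
# Cleaned-up sketch: 4-space indentation, no stray whitespace in keys.
label_names = {
    "background": 0,
    "left lung": 1,
    "cranial lobe": 2,
    "middle lobe": 3,
    "caudal lobe": 4,
}

# Sanity check: no key should carry leading/trailing whitespace.
assert all(name == name.strip() for name in label_names)
```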

The server (via the docker image) is up and seems to be running; I even get some results. However, the documentation of the Slicer extension is very minimal. What are the next steps? How do I make use of the existing labels to train the model?


Thanks, @muratmaga. I’m happy to see this working.
You’re right, documentation of the Slicer module needs more work. This is something we’ll improve for the next version of MONAI Label along with the user interaction for DeepEdit Apps.

Could you please open an issue about this in the main repo? In this way, we can keep track of the most important bits.

Thanks in advance!


Did you get this settled, @muratmaga? If yes, how? Thanks.


We did manage to get it working eventually with @diazandr3s’s help, but it has been a while and I need to check my notes.

Where are you encountering the issue?


To clarify, I ended up using docker as it simplified everything:

I used a command like this to specify the GPU for the docker instance to run on, as well as to map persistent folders between my host environment and the container.

sudo docker run --cpus 32 -it --rm --gpus device=GPU-7fa21545-aa7c-728f-5925-0c0cf8f8f8a8 --ipc=host --net=host -v /home/maga/komp_monai/:/workspace/ projectmonai/monailabel:latest bash

Then, I would invoke the monai server within the docker shell with a command like

monailabel start_server --app /workspace/apps/komp_more/ --studies /workspace/mouse_fetus

In this case, komp_more is the multilabel app edited to match the label indices of my data (these are hard-coded in the multilabel DeepEdit Python script), and mouse_fetus is where the volumes and associated label maps sit.

I am installing monailabel with

pip install monailabel-weekly

but when downloading “radiology”

monailabel apps --download --name radiology --output apps

I ran into

Using PYTHONPATH=/home/rbumm:
App radiology => /usr/monailabel/sample-apps/radiology not exists

which seems to be caused by the fact that monailabel cannot create subfolders in /usr/monailabel/ due to missing access rights.

Yes, that is why I gave up on running it directly on the host and went down the docker route instead. I didn’t want to modify my working environment.

Thank you @muratmaga. This would be a possible option if I cannot get monailabel running via pip install. However, I would prefer to find a working pip install option for monailabel, as described in the docs.

Where did you get the device=GPU-xxx parameter from?


For anyone interested:

Got monailabel install and server working in Windows 10 WSL Ubuntu by

git clone https://github.com/Project-MONAI/MONAILabel
pip install -r MONAILabel/requirements.txt
export PATH=$PATH:`pwd`/MONAILabel/monailabel/scripts

then

# download radiology app and sample dataset
monailabel apps --download --name radiology --output apps
monailabel datasets --download --name Task09_Spleen --output datasets

and then starting the monailabel server by

# start server using radiology app with deepedit model enabled
monailabel start_server --app apps/radiology --studies datasets/Task09_Spleen/imagesTr --conf models deepedit

The pip install monailabel process did not succeed, although it would probably be the better solution for new monailabel users.


You can use the ordinal listed by nvidia-smi (0, 1, 2, …), but using the UUID is more reliable (the ordering may change depending on which device gets initialized earlier during boot, or after a driver change).

PS C:\Users\murat> nvidia-smi -q

==============NVSMI LOG==============

Timestamp : Fri May 20 09:13:04 2022
Driver Version : 510.06
CUDA Version : 11.6

Attached GPUs : 1
GPU 00000000:01:00.0
Product Name : NVIDIA GeForce GTX 1650
Product Brand : GeForce
Product Architecture : Turing
Display Mode : Disabled
Display Active : Disabled
Persistence Mode : N/A
MIG Mode
Current : N/A
Pending : N/A
Accounting Mode : Disabled
Accounting Mode Buffer Size : 4000
Driver Model
Current : WDDM
Pending : WDDM
Serial Number : N/A
GPU UUID : GPU-7592dcbd-7989-64a5-f5e8-05730e7d713b


Thanks, @muratmaga and @diazandr3s. I’ll probably have a couple of questions concerning monailabel later :)


@diazandr3s

Trying to train a model to detect “right lung”, “left lung” and “airways” with monailabel.
This is my workflow, and most of this works:

  • Set up a “segmentation_lung” model by modifying “segmentation_spleen.py”, download the Task06_lung dataset, and start the monailabel server with
monailabel start_server --app apps/radiology --studies datasets/Task06_Lung/imagesTr --conf models segmentation_lung
  • Monailabel extension: Choose a “random” strategy and load random lung datasets into 3D Slicer with “next sample”.
  • Do a LungCTSegmenter segmentation and produce “right lung”, “left lung” and “airways” for 17 of 63 cases. The naming corresponds to the one used in my segmentation_lung.py file, and prior to segmentation I see these as empty segments in monailabel’s Segment Editor instance.
  • After each segmentation: “Submit label” without error.

My - probably newbie - questions, sorry:

Would this be a good workflow?
When do I have to press “Train”?
Why does accuracy stay zero after a “Train”?
How can I see the performance of my newly trained monailabel model?
Where is/is there a resulting *.pt file?
or generally - where is the result of my training work when I finish the server - can I save or export it?

Hi @rbumm - thanks for sharing your experience. I don’t have all the answers but hopefully I can answer some and others can also chip in. My experiment was to try training on this brain data.

As I understand it, yes - this is what is intended.

I understand you can click train at any point and continue to submit more labels as it goes.

Here I don’t know. My experience with the pre-labeled images was that one round of training got to about 70% in a day on a mid-range GPU and another day of training got to 80%.

I was able to load a new image that had not been labeled yet and click the Run button to see the result of the model. The results were promising, but not really usable.

For me on ubuntu they are in ./.local/monailabel/sample-apps/deepedit_multilabel/model

I know the plan ultimately is to have MONAI Deploy as an option for this, but I haven’t seen it done yet.


Thank you.

Probably this fails on my desktop because

>>> import torch
>>> torch.cuda.is_available()
False

That could be it. On my machine (Ubuntu with an NVIDIA GPU) CUDA is available, but I would have thought PyTorch would fall back to CPU, even if that’s slower.
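The usual explicit fallback looks like this (a minimal sketch, not MONAI Label’s actual code path):

```python
import torch

# Choose the GPU when CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy example: move a small model and a batch onto whichever device was chosen.
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(1, 4, device=device)
y = model(x)
print(y.shape)  # torch.Size([1, 2])
```

If a framework does not do this fallback itself, `torch.cuda.is_available()` returning False typically surfaces as an error rather than a slow CPU run.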


Many thanks for sharing your experience, @rbumm. It’s great to see this working on your end.

The pip install monailabel process did not succeed, although it would probably be the better solution for new monailabel users.

I agree with this.

The radiology app is available from version 0.4.0 onwards. I suggest you install the release candidate 3 for version 0.4.0. This means, pip install monailabel==0.4.0rc3

Sorry, I didn’t give you the correct version in the last message.

With regards to your questions:

Would this be a good workflow?

Yes, that’s a good workflow.

MONAI Label usage could be customized depending on the user type, though. We’re working on this for the next release (i.e. multi-user workflow)

When do I have to press “Train”?

This is a good question. You could trigger the training every time you add a new label, but it is up to you - you could also wait until you have labeled 5 or 10 new images.

Why does accuracy stay zero after a “Train”?

This shouldn’t be zero. You should see the accuracy obtained during training.

How can I see the performance of my newly trained monailabel model?

The idea is that the annotator will take less time to annotate/edit the predictions obtained for the new unlabeled images. You could use a percentage of the training images for validation.

Where is/is there a resulting *.pt file?

or generally - where is the result of my training work when I finish the server - can I save or export it?

You should see a folder called model under the radiology app folder.

Many thanks, @pieper, for your comments. They help a lot :)

Deeply hidden on the internet:

One needs to run a Windows Insider build 20145 or higher (-> Windows 11) in order to use CUDA in WSL2. See here and here.
After updating to Windows 11,

>>> import torch
>>> torch.cuda.is_available()
True

:)
and monailabel training succeeded. Accuracy increases during training.

@diazandr3s would I need to do an auto segmentation with each new dataset to see the improved monailabel capabilities/accuracy?

The annotator? Please explain.

Deeply hidden on the internet:

One needs to run a Windows Insider build 20145 or higher (-> Windows 11) in order to use CUDA in WSL2. See [here](https://stackoverflow.com/questions/64256241/found-no-nvidia-driver-on-your-system-error-on-wsl2-conda-environment-with-pytho) and [here](https://docs.nvidia.com/cuda/wsl-user-guide/index.html).
After updating to Windows 11,

and monailabel training succeeded. Accuracy increases during training.

I’m glad this is working :)


@diazandr3s would I need to do an auto segmentation with each new dataset to see the improved monailabel capabilities/accuracy?

If you have labels in the new dataset to compute Dice or another metric, then yes - that could be a way of checking model performance.

If you don’t have labels in the new dataset, another way to evaluate the model is to measure the time taken by an expert/clinician to segment new volumes/images.
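As a sketch, the Dice score for a single label can be computed from two label maps like this (plain NumPy, not MONAI’s own metric implementation):

```python
import numpy as np

def dice_score(pred, truth, label=1):
    """Dice coefficient for one label: 2*|A ∩ B| / (|A| + |B|)."""
    a = (pred == label)
    b = (truth == label)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Tiny 2D example standing in for full 3D label maps.
pred = np.array([[0, 1, 1],
                 [0, 1, 0]])
truth = np.array([[0, 1, 0],
                  [0, 1, 1]])
print(dice_score(pred, truth))  # ≈ 0.667 (2 overlapping voxels out of 3 + 3)
```

Running this per label over a held-out validation split gives a simple per-structure performance number.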

The annotator ? Please explain.

I meant the radiologist/clinician using MONAI Label :)