MONAI Label: CUDA out of memory

I have a question about MONAI Label. Maybe somebody has faced a similar problem.

  1. I use DeepEdit from the radiology app, without any global code changes, on my tower server with an ASUS NVIDIA GeForce RTX 3080 (10 GiB).
    I try to start inference with a batch size of 1.
    The input volume is 512x512 with 512 slices.
    But I'm stuck on the "CUDA out of memory" error.
    Screenshot: IMG_5625.jpeg - Google Drive
    Full log: output_cuda_out_of_memory.txt - Google Drive

  2. By the way, if I uncomment line #275, it works successfully on CPU (a typical device-selection pattern is sketched after this list).
    Screenshot: IMG_5623.PNG - Google Drive
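For context, the line in question switches the computation device; the pattern looks roughly like this (simplified, not the exact MONAI Label code):

```python
import torch

# Illustrative only -- not the actual MONAI Label code around line #275.
# Commenting out the first line and uncommenting the second is a common
# way to force inference onto the CPU when the GPU runs out of memory.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# device = torch.device("cpu")  # force CPU inference

# model.to(device)  # the network and inputs must then follow the same device
```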

a) Is it realistic to run MONAI Label on an ASUS NVIDIA GeForce RTX 3080 with 10 GiB?
b) Is it possible to make some changes in the code somewhere that would help MONAI distribute the computation load across my GPU memory and do this job there, instead of on the CPU?
:pray:

Kind regards,
Dalv Silvermann.


Hi Dalv,

You should be able to run MONAI Label DeepEdit inference in GPU mode with the system settings you specified.

Please look for the following file and line in your local MONAILabel installation:

Try setting the spatial size lower to avoid memory errors, maybe starting with

`spatial_size=(32, 32, 0),`
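For orientation, the spatial size is typically consumed by a whole-volume resize in the DeepEdit pre-transforms, roughly like this (a sketch, not the exact file contents; paths and defaults differ between MONAILabel versions):

```python
from monai.transforms import Resized

# Sketch of the kind of pre-transform that consumes spatial_size in a
# DeepEdit-style pipeline (not the exact MONAILabel source). The whole
# volume is resized to this shape, so smaller values shrink the
# activation footprint of every layer downstream.
spatial_size = (128, 128, 128)  # lower this if you hit CUDA OOM
resize = Resized(keys="image", spatial_size=spatial_size, mode="area")
```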

Hope that helps

@diazandr3s


Hi @dalv.silvermann,

Thanks for reporting this.

I've checked the logs and see that the image size is (550, 550, 450), which is quite a big image for the GPU you have.

The DeepEdit model uses the whole image for both training and inference, which is why it needs more GPU memory than patch-based models.
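To put a rough number on it: one float32 copy of a (550, 550, 450) volume is already about half a gigabyte, and a network keeps many intermediate activations of comparable or larger size, so 10 GiB fills up quickly:

```python
# Rough memory estimate for a single float32 tensor of the reported size.
# Real peak usage is many times this, since every layer keeps activations
# (plus gradients, during training) of similar or larger size.
voxels = 550 * 550 * 450
gib = voxels * 4 / 1024**3  # 4 bytes per float32 voxel
print(f"{gib:.2f} GiB per tensor")  # ~0.51 GiB
```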

As @rbumm recommended, you could reduce the image size here. But in that case you won't be able to use the pretrained model, as the input size changes - you would have to retrain the model.

Otherwise, I'd recommend using the segmentation model instead, which works on patches (see the sliding-window sketch below): MONAILabel/sample-apps/radiology at main · Project-MONAI/MONAILabel · GitHub
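Patch-based inference in MONAI is typically done with `sliding_window_inference`, which only holds a batch of patches on the GPU at a time. A minimal sketch (the network and sizes are placeholders, not MONAI Label's actual configuration):

```python
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet

# Placeholder network and sizes -- illustrative, not MONAI Label's config.
device = torch.device("cuda")
net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
           channels=(16, 32, 64, 128), strides=(2, 2, 2)).to(device).eval()

image = torch.rand(1, 1, 550, 550, 450)  # (batch, channel, H, W, D), kept on CPU
with torch.no_grad():
    # Only sw_batch_size patches of roi_size live on the GPU at once,
    # so peak memory is governed by the patch size, not the full volume.
    pred = sliding_window_inference(
        inputs=image, roi_size=(96, 96, 96), sw_batch_size=1,
        predictor=net, sw_device=device, device="cpu",
    )
```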

Hope that helps,


Hi all,
Interesting thread, huge images indeed! However, from the logs it seems that the image already gets downsampled to (1, 128, 128, 128), but that might still be too large for backprop. I would try (96, 96, 96) first; if that's still too large, maybe (80, 80, 80), or worst case (64, 64, 64) should work (but at that point you'd probably notice considerable staircase artifacts in the prediction).
To work at a higher resolution, it is probably best to use the segmentation model, as @diazandr3s recommended. In that case you can set your patch size to e.g. (96, 96, 96) (whatever fits into GPU VRAM) and play around with the target_spacing parameter (here; a resampling sketch follows below) to make sure you get a good compromise between resolution and the field of view of the patches.
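For reference, target spacing in MONAI is usually applied with a `Spacingd` transform; coarser spacing shrinks the resampled volume, so each fixed-size patch covers more anatomy. A sketch (the pixdim values are examples, not recommendations):

```python
from monai.transforms import Spacingd

# Resample the image to a fixed voxel spacing before patch extraction.
# Coarser spacing (larger pixdim values) -> smaller resampled volume ->
# each (96, 96, 96) patch covers a larger field of view.
spacing = Spacingd(keys=["image"], pixdim=(1.5, 1.5, 2.0), mode="bilinear")
```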
Good luck!


As I recall, the number of labels also factors into memory usage. So if you have a multi-label model, your memory usage will be higher than with a single-label model (see the illustration below).
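That's because the output logits (and the one-hot targets during training) carry one channel per label plus background, so those tensors grow roughly linearly with the label count:

```python
# Output/one-hot tensors have (num_labels + 1) channels, so their memory
# grows roughly linearly with the number of labels.
voxels = 128 * 128 * 128  # the downsampled size from the logs
for num_labels in (1, 5, 10):
    channels = num_labels + 1  # labels + background
    gib = voxels * channels * 4 / 1024**3  # float32
    print(f"{num_labels:2d} labels -> {gib:.3f} GiB per output tensor")
```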


Sorry for taking so long to reply.

  1. We now use (64, 64, 64) and it still goes "out of memory".
  2. Now I'm thinking about a second GPU, for example an MSI GeForce RTX 3090 Ventus 3X 24G OC.
    Then we would have GPU0 and GPU1 with 34 GB in total. What do you think about this approach?
  3. Does anybody work with MONAI on more than one GPU with computation distribution? How can we configure such a scheme?
    Do you have links to examples?

Thanks to all of you for the support!

There are tweaks you can do; one of them, I think, is to reduce floating-point precision so you effectively double your memory (sketched below). But the reality is that NVIDIA knowingly keeps the GeForce line of GPUs with insufficient memory so that they don't compete in this domain (ML). You will do yourself a favor if you can move to something like an RTX A6000, which provides double the memory at 48 GB.
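For the precision tweak: in PyTorch this is usually done with automatic mixed precision, which runs most ops in float16 and roughly halves activation memory. A minimal inference sketch (the model and input here are placeholders):

```python
import torch

# Placeholder model and input -- illustrative only. autocast runs most
# CUDA ops in float16, roughly halving activation memory at inference.
model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1).cuda().eval()
x = torch.rand(1, 1, 128, 128, 128, device="cuda")

with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    y = model(x)
```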

I do not know about distributed workloads, but I suspect that if your model doesn't fit in the memory of one GPU, it won't work. I don't think they "pool" the memory.
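That matches how the common PyTorch multi-GPU wrappers behave: DataParallel (and DistributedDataParallel) replicate the whole model on every GPU and split the batch across them, so each card still has to hold a full copy; they parallelize throughput, not memory. Roughly:

```python
import torch

# DataParallel replicates the full model on each listed GPU and scatters
# the batch across them -- it raises throughput but does NOT pool memory:
# every GPU must still fit the entire model plus its share of activations.
model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1).cuda()
parallel_model = torch.nn.DataParallel(model, device_ids=[0, 1])
```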
