My lab has a server with multiple graphics cards. I am running machine-learning operations, but Slicer seems to use graphics card 0 by default. Recently many people have been running programs on the server, and my Slicer keeps reporting out-of-memory errors. How can I change which graphics card Slicer works on?
I have tried setting CUDA_VISIBLE_DEVICES=1, but it does not seem to help…
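For reference, this is roughly what I tried in Slicer's Python console (a minimal sketch; I'm assuming the variable has to be set before PyTorch initializes CUDA, which may be where I'm going wrong):

```python
import os

# Select physical GPU 1 before anything initializes CUDA; this only
# takes effect if it runs before torch (or any CUDA library) is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch  # imported after setting the variable

# With CUDA_VISIBLE_DEVICES=1, the one remaining visible GPU is index 0.
print(torch.cuda.device_count())      # expected: 1
print(torch.cuda.get_device_name(0))  # should report the second physical card
```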
Thanks for any help!