Some question about multiple graphics cards

My lab has a server with multiple graphics cards. I am doing machine-learning-related operations, but Slicer seems to use graphics card 0 by default. Recently many people have been running programs on the server, and my Slicer keeps reporting out-of-memory errors. How can I change which graphics card Slicer works on?
I have tried setting CUDA_VISIBLE_DEVICES=1, but it does not seem to help…

Thanks for any help!

For graphics (rendering) operations, Slicer uses the card that is configured for the X server (e.g. in xorg.conf). If you are running CUDA or other GPU compute code in Slicer, you can select any of the devices detected by the drivers.
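For CUDA compute code, the usual pitfall with CUDA_VISIBLE_DEVICES is that it must be set before the CUDA runtime is initialized, i.e. before the first `import torch` (or equivalent) in Slicer's Python console, or exported in the shell before launching Slicer. A minimal sketch of the in-process approach, assuming a PyTorch-style workload (the `select_gpu` helper name is just for illustration):

```python
import os

def select_gpu(index):
    """Restrict this process to a single physical GPU.

    Must run before the CUDA runtime is initialized (i.e. before
    the first `import torch` in Slicer's Python console); setting
    the variable after CUDA has started has no effect.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)
    # After this, the chosen card is renumbered to cuda:0, so
    # frameworks that default to device 0 will use it.
    return os.environ["CUDA_VISIBLE_DEVICES"]

select_gpu(1)  # expose only physical GPU 1 to this process
```

Alternatively, export the variable in the shell before starting Slicer (e.g. `CUDA_VISIBLE_DEVICES=1 ./Slicer`), which guarantees it is in place before any CUDA initialization.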