Setting visibility to off in Volume Rendering does not unload the volume from texture memory

As it is, once the volume is loaded onto the GPU texture memory, turning off its visibility does not unload it. Any subsequent volumes selected to render in the same session add to this memory usage (I am tracking through nvidia-smi). As far as I can tell, the only time they are unloaded is when the node is deleted, or scene is reset.
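Roughly what this looks like from the Python console, as a minimal sketch (MRHead is just a stand-in for the larger volumes I actually render):

import SampleData

# Load a test volume and set up GPU volume rendering for it.
volumeNode = SampleData.downloadSample("MRHead")
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.SetVisibility(True)    # volume is uploaded to GPU texture memory here

# Turning visibility off does NOT release the texture; nvidia-smi still reports it.
displayNode.SetVisibility(False)

# Only deleting the node (or resetting the scene) frees the GPU memory.
slicer.mrmlScene.RemoveNode(volumeNode)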

This behavior makes perfect sense in terms of performance and I don’t think it should be changed. But I also think it might be good to have a way to modify this behavior on demand. It is a corner case, but an important one for people rendering large volumes on shared systems (Linux). A typical use case would be running multiple Docker instances using the same GPU.


Good point - as a workaround for now you can toggle between CPU and GPU rendering and that will flush the volumes from GPU memory.

It does, but adds up quite a bit of time, because you need to enable CPU rendering, not just switch.

If you make the volume invisible first then switching is fast.
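From the Python console the hide-then-switch trick would look roughly like this (a sketch only; the "RenderingMethodComboBox" object name and the combo box labels are assumptions and may vary between Slicer versions):

# Sketch: flush GPU textures by driving the rendering-method selector in the
# Volume Rendering module GUI. Hiding the rendering first keeps the switch fast.
volumeNode = slicer.util.getNode("MRHead")
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.GetFirstVolumeRenderingDisplayNode(volumeNode)
displayNode.SetVisibility(False)

vrWidget = slicer.modules.volumerendering.widgetRepresentation()
methodCombo = slicer.util.findChild(vrWidget, "RenderingMethodComboBox")  # assumed widget name
methodCombo.setCurrentText("VTK CPU Ray Casting")   # releases the GPU resources
methodCombo.setCurrentText("VTK GPU Ray Casting")   # switch back for later use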

I’ve added an “Auto-release resources” checkbox to the Volume Rendering module. If you check it, hiding a volume rendering also releases all of its graphics resources. Pull request is submitted: https://github.com/Slicer/Slicer/pull/5352
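From a script, turning it on for the current 3D view should look something like this (a sketch; I believe the flag ends up on the 3D view node, and SetAutoReleaseGraphicsResources is my guess at the method name added by the pull request, so check the API in your build):

# Sketch: enable auto-release of volume rendering resources for the first 3D view.
viewNode = slicer.app.layoutManager().threeDWidget(0).threeDView().mrmlViewNode()
viewNode.SetAutoReleaseGraphicsResources(True)   # assumed method name from the PR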

Nope, doesn’t work with the latest stable. It only flushes after its visibility is set back to on.

For me it works by toggling when the volume is not visible - this video shows nvidia-smi while doing the operations.

Hmm, that’s not the behavior I see on Linux stable. Perhaps some driver or version difference. In any event, @lassoan’s solution seems a good one, particularly since it can be set as the default through .slicerrc.py or some sort of startup script.

The default state of the auto-release flag can be set in the application settings, the same way as other volume rendering settings (surface smoothing, etc.).
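If you prefer a startup script, the usual default-node pattern in .slicerrc.py should work too (a sketch, again assuming the flag lives on the view node and the method name matches the pull request):

# Sketch for .slicerrc.py: make auto-release the default for all new 3D views.
defaultViewNode = slicer.vtkMRMLViewNode()
defaultViewNode.SetAutoReleaseGraphicsResources(True)   # assumed method name
slicer.mrmlScene.AddDefaultNode(defaultViewNode)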


Haha, while we were still investigating, Andras went ahead and fixed it! Thanks @lassoan :+1:

I tried this on Linux r29631, and I didn’t see a difference in behavior when “Auto-release resources” is checked. I enabled volume rendering for MRHead, checked with nvidia-smi that Slicer was using 104 MB of texture memory, then set its visibility off and re-ran nvidia-smi, which still reported 104 MB being used by Slicer. If I delete the MRHead node, this memory is released, but toggling visibility on/off doesn’t seem to unload the volume from GPU memory.
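For what it’s worth, I check the memory from the Slicer Python console between steps with a small helper of my own (this reports total memory for the whole GPU, not just Slicer, so nothing else should be using the card):

import subprocess

# Helper: total GPU memory currently in use, in MiB, as reported by nvidia-smi.
def gpuMemoryUsedMiB():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
        text=True)
    return int(out.splitlines()[0])

print(gpuMemoryUsedMiB())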

I see GPU memory usage decrease on Windows with renderdoc if I hide volume rendering while “Auto-release resources” is enabled (using Slicer 4.13.0-2021-01-16 (revision 29612 / d264109) win-amd64 - installed release).

Memory usage changes in Task Manager (Shared GPU memory), too, but those reports are less consistent. After a volume is hidden or removed, not all memory is released, but when loading a volume again, the memory is reused. So it seems that the currently allocated memory is reported rather than the amount actually used. Maybe nvidia-smi reports something similar.

I would recommend trying volume rendering of very large volumes in multiple processes to confirm that memory is actually released when needed, or using a tool like renderdoc to get more reliable information about the actual state of the rendering pipeline.


If you want to try renderdoc:

  • launch renderdoc using Slicer --launch qrenderdoc
  • launch Slicer from it using File / Launch Application → SlicerApp-real
  • capture frames before/after rendering and show/hide, etc.
  • load each capture and check “Statistics” tab

For example:

After loading CTChest:

12 Textures - 27.05 MB (27.05 MB over 32x32), 6 RTs - 20.54 MB.
Avg. tex dimension: 963.333x758.333 (1155.6x909.6 over 32x32)
2 Buffers - 0.00 MB total 0.00 MB IBs 0.00 MB VBs.
47.59 MB - Grand total GPU buffer + texture load.

After showing volume rendering (140MB total GPU memory usage):

17 Textures - 166.07 MB (166.05 MB over 32x32), 7 RTs - 23.34 MB.
Avg. tex dimension: 936.4x506.5 (1048.33x843.333 over 32x32)
15 Buffers - 0.00 MB total 0.00 MB IBs 0.00 MB VBs.
189.41 MB - Grand total GPU buffer + texture load.

After hiding volume rendering (back to where it was before volume rendering):

12 Textures - 27.05 MB (27.05 MB over 32x32), 6 RTs - 20.54 MB.
Avg. tex dimension: 963.333x758.333 (1155.6x909.6 over 32x32)
9 Buffers - 0.00 MB total 0.00 MB IBs 0.00 MB VBs.
47.59 MB - Grand total GPU buffer + texture load.

After deleting the volume from the scene (no change in GPU memory usage):

12 Textures - 27.05 MB (27.05 MB over 32x32), 6 RTs - 20.54 MB.
Avg. tex dimension: 963.333x758.333 (1155.6x909.6 over 32x32)
15 Buffers - 0.00 MB total 0.00 MB IBs 0.00 MB VBs.
47.59 MB - Grand total GPU buffer + texture load.

I think the VirtualGL setup I am using for remote connection is interfering with debugging. After launching SlicerApp-real, there is nothing to capture, and qrenderdoc shows lots of these errors:

Core 21855 15:14:50 gl_driver.cpp(3141) Log Got a Debug message from GL_DEBUG_SOURCE_API, type GL_DEBUG_TYPE_ERROR, ID 1282, severity GL_DEBUG_SEVERITY_HIGH:
Core 21855 15:14:50 gl_driver.cpp(3141) Log 'GL_INVALID_OPERATION error generated. Object is owned by another context and may not be bound here.'

I will try with a regular installation later.

Confirmed. When working with really large volumes (>10 GB), I can see memory being released with nvidia-smi when I uncheck the visibility. As you guessed, MRHead was too small to see the difference.

Awesome, thank you!
