Can multi-volume rendering with two overlapping volumes improve performance?

Hi everybody,
Recently I have been researching multi-volume rendering.
My situation: I have to render a lung nodule together with the vessels and pleura around the nodule.
The lung nodule is the first volume; the vessels and pleura are the second volume.
The HU values and locations in the two volumes overlap, but I want to render the two volumes with different opacity, color, and gradient transfer functions.
When I load the two volume nodes, switch to the Volume Rendering module, set appropriate opacity, gradient, and color transfer functions separately, and choose “VTK GPU Ray Casting”, both volumes are rendered well, as expected. But the defect of this rendering method is the lack of correct spatial ordering, so the result does not look real.
The lung nodule is inside the pleura, but when I rotate the volumes in the 3D view, at some angles the lung nodule is hidden by the pleura instead of appearing inside it.

When I choose the “VTK Multi-Volume (experimental)” method, the two volumes are rendered at the same time, and the spatial ordering stays correct even when I rotate the volumes many times.
But the defect of this multi-volume rendering method is that the detail of the volumes disappears: the vessels and pleura become discontinuous and blurred.

When I tested vtkMultiVolume outside Slicer, using the vtkMultiVolume and vtkGPUVolumeRayCastMapper classes and setting the same parameters for both volumes, the rendering result was as expected.

So I want to know: what are the differences between the multi-volume rendering in Slicer and using the VTK classes directly? Why is the result in the Slicer 3D view worse than in a plain VTK render window? And how can I improve the vtkMultiVolume rendering performance in Slicer?
Any suggestions would be appreciated.
Thanks in advance to @lassoan @muratmaga @pieper @cpinter; I look forward to your professional answers and advice.

Thanks for testing and reporting :+1: I haven’t looked at the details myself but I’m sure there are issues with the multivolume rendering that need to be investigated and fixed. The feature was added to Slicer a few years ago but VTK has been upgraded since then and I don’t know if we are taking advantage of any recent fixes. Based on the images you showed it could be some of the logic around updating sample distances or similar.


I forgot to mention the Slicer and VTK versions.
I tested VTK Multi-Volume rendering in the Slicer-4.13.0-2022-01-16-linux-amd64 nightly build; from the Python console, vtk.vtkVersion().GetVTKSourceVersion() returns ‘vtk version 9.0.0’.
I also downloaded the latest Slicer, version Slicer-5.1.0-2022-06-24-linux-amd64, where the VTK version is also 9.0.0, but the multi-volume rendering result is not improved.

I’ve checked this and I was able to reproduce the issue. It seems to be a known issue (see MultiVolume Rendering quality is very low if volume spacing is small · Issue #5238 · Slicer/Slicer · GitHub). I’ll fix this, but it is not a simple problem (it involves fixing multi-volume rendering in VTK or complex workarounds in Slicer - see details below).

Fortunately, for your use case, multi-volume rendering is not needed. The general anatomical environment can be visualized nicely using volume rendering, while lesions, devices, and other structures of interest are typically displayed as segmentations. Segmentation has many advantages, including clear visualization in both slice and 3D views, good-quality visualization even if the structure has low contrast or a wide intensity range, and the segmentation can be used for quantification (volume, intensity statistics, radiomics features, etc.).

Details about the multi-volume rendering issue: Slicer always instructed VTK to use adaptive rendering quality, which ended up choosing somewhat lower quality rendering (larger sampling distance) than the default single-volume renderer. When I fixed this logic, it turned out that the automatic sampling distance computation in the multi-volume renderer computes the sampling distance based on the very first volume. In Slicer, this volume is an empty volume, because VTK has a bug that the first volume’s transform is not taken into account. I’ll have to see if it is easier to fix the issues in VTK or develop more complicated workarounds in Slicer. This will probably take a few weeks to sort out. You can track the status of this issue here:


Thank you for your professional advice. @lassoan @pieper
But in my situation, I can’t replace multi-volume rendering with segmentation show 3D, because the density and type of each lesion/nodule differ, and I render them with different opacity and color transfer functions.
If I use segmentation show 3D, the segmentation in the 3D view has only one color, instead of different colors for different densities.

To my surprise, by using segmentation show 3D, the lack of spatial ordering may be solved.
The steps to reproduce are as follows:

  1. Turn on segmentation show 3D;
  2. Set the segmentation opacity to 0.01;
  3. Switch to the View Controller and set UseDepthPeeling to True;
  4. Switch to Volume Rendering and choose “VTK GPU Ray Casting”;
  5. Open the nodule and vessel volumes and render both;
  6. Done.

I don’t know why, but when VTK GPU Ray Casting renders the two different volumes this way, the lack of spatial ordering seems to be solved.
I want to know why.

This is exactly why segmentation is much easier and more appropriate for visualizing the lesions than volume rendering. Volume rendering is essentially simple global thresholding (with a smooth threshold and some coloring and transparency effects). In contrast, you can segment arbitrarily complex shapes with subtle, local, spatially varying contrast, even incorporating prior information about the expected shape, location, and appearance of the structure.

The 3D appearance of a single volume-rendered image and any number of segments will be correct (each object will only occlude other objects that are behind it). Rendering of segmentations is also much faster than multi-volume rendering.


Thank you for your explanation, but I think you may not have understood me completely.

Although I use segmentation rendering in the 3D view, the opacity of the segmentation is set to only 0.01, so it is almost totally transparent.
Finally, there are three objects rendered in the 3D view:
(1) the nodule volume,
(2) the vessel and pleura volume,
(3) the nodule segmentation.
As the nodule segmentation is nearly transparent, only the two rendered volumes are actually visible.

What confuses me is this: if I don’t render the nodule segmentation first, the two volumes rendered by “VTK GPU Ray Casting” lack correct spatial ordering. But with the nodule segmentation rendered and UseDepthPeeling set to True beforehand (although it is nearly transparent), the lost relative spatial ordering is recovered.

So actually, I don’t use VTK Multi-Volume rendering to render the two volumes; I use VTK GPU Ray Casting to render the two volumes separately and display them in the 3D view at the same time. This seems to avoid the VTK Multi-Volume rendering problems, namely the loss of detail and the blurred result.

Next, I will explain why I can’t replace the two-volume rendering with segmentation rendering.
(1) Automatic, semi-automatic, or deep-learning segmentation of the nodule or vessel volume loses some information, but the doctors want to see the original volume.
(2) The density within a single nodule is not uniform; we use different opacity and color transfer functions to render a single nodule volume.
(3) We cut the volume to see the inside of the nodule. If I use segmentation rendering, the inside is empty, and I can’t see the inside of the nodule.

lost relative spatial location will be found

It will not; what you see is the segmentation being rendered properly instead of the volume rendering. It may create an optical illusion for you, but in general you should not use more than one volume in non-multi-volume mode, because, as you also noticed, you lose the depth ordering between the displayed volumes.

I very much agree that we should use multi-volume rendering (actually, in some cases I’d also prefer to show a segmentation labelmap with volume rendering), but that mapper is broken, unmaintained, and behind in features compared to the single-volume mapper. Fixing it requires a huge amount of work, moreover a type of work that very few people are capable of doing well.

There has been talk recently of restructuring the way the shader code is generated; currently it is an extremely complex string operation in C++ with too many control branches, which is why it is becoming unmaintainable. I haven’t heard about a concrete roadmap or scope of work, though.
