VTK multivolume/cinematic volume rendering

If I want to use models in my merged sequence…
I believe that the group of exported models is not a data node, but rather a folder of models.
Is it possible to add the folder of models to a sequence?
I use SetDataNodeAtValue(segmentationNode,n) for segmentations, but it does not work for the folder of models.

You need to add each model to its own sequence. You can then add all sequences to the same sequence browser node.
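For example, a minimal sketch for the Slicer Python console (the node names are my own, and it assumes the exported model nodes are already in the scene; only runnable inside Slicer):

```python
# Sketch for the Slicer Python console (node names are illustrative).
import slicer

browserNode = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLSequenceBrowserNode", "Model sequence browser")

# One sequence per model; all sequences share the same browser,
# so they are replayed together.
for modelNode in slicer.util.getNodesByClass("vtkMRMLModelNode"):
    sequenceNode = slicer.mrmlScene.AddNewNodeByClass(
        "vtkMRMLSequenceNode", modelNode.GetName() + " sequence")
    # Store the model as this sequence's data node at index value "0";
    # repeat with other index values for additional time points.
    sequenceNode.SetDataNodeAtValue(modelNode, "0")
    browserNode.AddSynchronizedSequenceNode(sequenceNode)
```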

As you also found, the regular GPU raycasting handles one volume at a time, and the order of rendering is always the same, so the depth information between the different rendered volumes is lost (one always appears “in front of” the other). Multi-volume rendering considers all the volumes at the same time when rendering, thus showing realistic depth even if the volumes overlap. The problem with multi-volume rendering is that its development in VTK stopped halfway, so besides some bugs (some of which have been worked around in the Slicer code to make it function), there are some important features missing, such as cropping. We re-added shading, but I believe it’s not perfect either yet.
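Roughly, setting up multi-volume rendering in VTK looks like this (a sketch, not Slicer's actual implementation; it needs an OpenGL context, and the placeholder source stands in for real image data):

```python
# Sketch of vtkMultiVolume usage: several volumes share one GPU ray-cast
# mapper and are rendered in a single pass, so depth between them is correct.
import vtk

mapper = vtk.vtkGPUVolumeRayCastMapper()
multiVolume = vtk.vtkMultiVolume()
multiVolume.SetMapper(mapper)

for port in range(2):
    source = vtk.vtkRTAnalyticSource()  # placeholder image data (illustrative)
    mapper.SetInputConnection(port, source.GetOutputPort())

    volumeProperty = vtk.vtkVolumeProperty()
    # ... configure color/opacity transfer functions per volume here ...
    volume = vtk.vtkVolume()
    volume.SetProperty(volumeProperty)
    multiVolume.SetVolume(volume, port)

renderer = vtk.vtkRenderer()
renderer.AddVolume(multiVolume)  # all inputs are ray-cast together
```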

A very interesting conversation! Do you plan to publish the results of your work later somewhere? I think many Slicer users are interested in approximating cinematic rendering using the platform.

@cpinter brings up a good point regarding multivolume support in VTK. It is definitely missing many features that have been implemented for single-volume rendering, such as the scattering shading model. We recently added support for mixing RGBA volumes with multivolume, so development should not be considered stopped. However, it highlights the fact that even small features like RGBA volume support are missing from vtkMultiVolume, probably because the two paths (single vs. multivolume) are too decoupled. I am happy to discuss this technical point further if needed, or to assist with/review potential contributions.

Thank you for your reply.

After searching the forum… I always end up agreeing with Andras Lasso.
With one exception. (there should be support for *.seg.nii.gz).

CR is nice for Twitter, patient education, and exploration. I will explore model rendering further, but converting all my segmentations to models and controlling the rendering individually will take some coding.

I am by no means an expert in CR.
If people are interested in CR specifically, I would check out ParaView if technically inclined and VolView if not. There is a strong relationship between Slicer, Kitware's VTK, ParaView, etc. ParaView is excellent for data exploration and is the most advanced of these.

Other options are Blender and MeVisLab.

MeVisLab has many awesome renderings on their webpage, but I am unable to find the example networks. If anyone from MeVisLab is reading this, please provide example networks for all renderings.

Yesterday I created a new module for generating a full-color RGBA volume by combining a segmentation and a scalar volume using some smart filtering and masking. Combined with a fix in the VTK RGBA volume renderer, I find that it creates beautiful renderings that haven't been possible before, and it surpasses many "cinematic renderings" in the coloring aspect.
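The general idea can be illustrated like this (a toy pure-Python sketch, not the module's actual code; the function, color table, and blending rule are my own simplification): each voxel takes the color of its segment, modulated by the normalized image intensity, and alpha is zeroed outside the segmentation for the "masked" variant.

```python
# Toy illustration of colorizing a scalar volume with a segmentation.
# Label -> RGB color table (values are made up for the example).
SEGMENT_COLORS = {1: (224, 128, 96), 2: (128, 64, 160)}

def colorize(intensities, labels, masked=True):
    """Combine voxel intensities and segment labels into RGBA tuples."""
    rgba = []
    lo, hi = min(intensities), max(intensities)
    for value, label in zip(intensities, labels):
        # Normalize intensity to [0, 1] to modulate brightness
        t = (value - lo) / (hi - lo) if hi > lo else 0.0
        color = SEGMENT_COLORS.get(label, (128, 128, 128))
        rgb = tuple(int(c * t) for c in color)
        # "Masked" variant: fully transparent outside the segmentation
        alpha = int(255 * t) if (label or not masked) else 0
        rgba.append(rgb + (alpha,))
    return rgba
```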

For example, TotalSegmentator segmentation results on the CTLiver sample data set are rendered like this:

[Images] Colorized, masked · Colorized, masked · Colorized, not masked

Just for reference, surface rendering provides much less texture detail - see below. However, what works well for surface rendering (and is not available for volume rendering) is screen-space ambient occlusion (making things that are behind other objects appear darker).

[Images] Surface · Surface with screen-space ambient occlusion

I think this is the only thing that would be necessary to make stunning volume renderings in VTK. Screen-space ambient occlusion is also very fast, so it could be usable not only for rendering videos, but for real-time rendering for surgical guidance, virtual reality, etc.
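For reference, enabling SSAO for surface rendering in VTK is just a render-pass setup (a sketch; the parameter values are illustrative, and it needs an OpenGL-capable renderer):

```python
# Enable screen-space ambient occlusion for a VTK renderer.
import vtk

renderer = vtk.vtkRenderer()

basicPasses = vtk.vtkRenderStepsPass()  # the standard rendering steps
ssao = vtk.vtkSSAOPass()
ssao.SetDelegatePass(basicPasses)
ssao.SetRadius(0.5)      # occlusion sampling radius, in world units
ssao.SetKernelSize(128)  # number of samples; higher = smoother but slower
ssao.SetBias(0.01)       # depth bias to avoid self-occlusion artifacts
ssao.BlurOn()            # soften the occlusion term

renderer.SetPass(ssao)
# Volume mappers would need to write depth/normals for SSAO to affect them,
# which is exactly the missing piece discussed here.
```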

I've tried VTK's scattering shading models, but they are slow and the results look very artificial and low-quality (almost as if there were no shading). It is not just my settings: the showcased renderings in the Kitware blog all look pretty bad to me. It looks like some interesting effects applied on top of volume rendering that make the result appear less realistic than basic volume rendering. Scattering also hangs on my laptop, and hangs or crashes on my desktop (with an RTX 3060) after a short while when I try to adjust parameters. Also, it does not seem to work at all for RGBA volumes.

@LucasGandel Are there any better examples of VTK's scattering shader where the resulting rendering looks more realistic? Is there a chance that screen-space ambient occlusion will be made available for volume rendering? (The Z buffer generated by the volume renderer could be used for the occlusion computation.)


@lassoan Awesome work! The texture details look great! Such representations should definitely be considered.

Although I agree that the computation time of the scattering model is problematic, one of the nice effects I like is the shadows/ambient occlusion it brings, which seem obvious to me in the screenshots of the article you referenced. The interesting point you bring up is that this makes the final rendering look less realistic than before. I would love to discuss this further to understand what makes you think that. If you have concrete examples, please share.

Finally, I love the idea you propose of using SSAO for volume rendering to further improve the results you already have. Technically, writing the normals and positions as is done for the polydata mapper could be enough. I'll try to investigate further.

FYI, it looks like this didn't make it into today's preview build on either Linux or Windows. I am very excited to give it a try…

@lassoan it looks like CDash is showing a configure error for the Sandbox extension for the ColorizeVolume module.


In this blog post the issue may be that the plain volume renderings of the medical images are quite messy, not showing anything clearly or realistically. When shadows are added, the images become even more complex and harder to interpret, and some areas become overexposed/underexposed. Maybe the issue is also that there was no particular purpose to the visualization, so random details of the volume got highlighted/suppressed.

The images in this blog entry look OK with basic gradient shading but appear faded and washed out when volumetric shading is used. With the hybrid mode, the image loses details.

I would like to play a bit more with these options, but Slicer and ParaView hang or crash when I play with the parameters. Also, I would be most interested in trying RGBA rendering, and that does not seem to work at all.

This would be awesome.

Thanks for testing. In addition to the module, a VTK fix that we worked on with @pieper must also be integrated. We usually fix issues in VTK first and then cherry-pick from there to Slicer, but we can make exceptions if something is urgent.

Thanks a lot for the great feedback. I will forward it to the VTK experts in charge of volumetric rendering to open the discussion.

Adding support for RGBA volumes and improving the robustness in Slicer and ParaView are additional topics that can probably be handled without too much effort.

Regarding SSAO, a very simple example adding a volume and an SSAO pass results in OpenGL errors because of incorrect FBO bindings. This would have to be investigated further, but besides that, the required information (normals and positions) is already available in the existing shader. A few days are probably needed to get a POC.


@LucasGandel is this something that you or someone from Kitware would look into in the near future?

@drouin-simon Do you think one of your students could explore this? It seems that it may not be a lot of work and could have a huge impact (fast volume rendering with much improved depth perception).

@muratmaga I’ll try to get all the necessary pieces merged today so that you can start playing with it from tomorrow.


@LucasGandel is this something that you or someone from Kitware would look into in the near future?

I don't think I'll manage to find funding/time for this in the near future, unfortunately, but I'm happy to answer technical questions. I agree it is worth a try, as the effort seems reasonable and it might just work.
The bigger part is probably solving the current FBO binding issue while adding multiple-render-target support to the GPU volume mapper. Then writing the position and normal when the opacity threshold is reached is probably a good start.

I just tried and got this error message with these settings:

I am using data from the GitHub repository muratmaga/mouse_CT_atlas (analytical pipeline for skull shape analysis in adult mice); specifically, the contents of the template folder.

Traceback (most recent call last):
  File "/Users/amaga/Desktop/SlicerPreview.app/Contents/bin/Python/slicer/util.py", line 3146, in tryWithErrorDisplay
  File "/Users/amaga/Desktop/SlicerPreview.app/Contents/Extensions-32228/Sandbox/lib/Slicer-5.5/qt-scripted-modules/ColorizeVolume.py", line 229, in onApplyButton
  File "/Users/amaga/Desktop/SlicerPreview.app/Contents/Extensions-32228/Sandbox/lib/Slicer-5.5/qt-scripted-modules/ColorizeVolume.py", line 348, in process
    dilate.SetKernelSize(dilationKernelSize, dilationKernelSize, dilationKernelSize)
NameError: name 'dilationKernelSize' is not defined

Looks like that variable didn't get set; it probably needs to be exposed in the UI. For now you can comment out this line (put # at the front) in /Users/amaga/Desktop/SlicerPreview.app/Contents/Extensions-32228/Sandbox/lib/Slicer-5.5/qt-scripted-modules/ColorizeVolume.py (line 348), or set the variable manually.

OMG! This is a game changer for us.

We just need a way to adjust the transfer function on a per-segment basis somehow… The regular shadows/lights work great; we just need the ambient shadows, as @lassoan mentioned.


gorgeous image! Keep 'em coming.

I’m making some changes/fixes. You’ll get even better results! Just need an hour or so.


It’s all ready. You can update manually from github or download the Slicer Preview Release tomorrow to get the updated version.

Example renderings:

Default rendering:

Using gradient opacity:


I am now working from home with a remote connection to the lab server, and while it works with CPU rendering, I am getting a bad-memory-allocation error with GPU rendering.

This is probably an issue with VirtualGL. However, when I invoke Slicer without vgl, I still get the error with GPU rendering (which should use the software renderer). Tomorrow, when I am back, I will try with Windows and report back.
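As a sanity check (the commands are illustrative and paths depend on the install), it can help to confirm which OpenGL renderer is actually active in the VirtualGL environment:

```shell
# Run Slicer through VirtualGL so rendering uses the server's GPU
vglrun ./Slicer

# Report the active OpenGL renderer as seen through VirtualGL
vglrun glxinfo | grep "OpenGL renderer"
```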

"Slicer has caught an application error, please save your work and restart.

The application has run out of memory. Increasing swap size in system settings or adding more RAM may fix this issue. If you have a repeatable sequence of steps that causes this message, please report the issue following instructions available at https://slicer.org

The message detail is:

Exception thrown in event: std::bad_alloc"