Short answer: cinematic rendering of cardiac CT.
As a first step, I would want to have ~20 structures volume rendered, each with its own adjustable transfer function. I’ve tried this with two structures and it works fine.
I already have them surface rendered in 3D Slicer as a volume and segmentation.
My understanding is that cinematic rendering’s “secret sauce” is segmentation followed by multi-volume rendering.
Usually we don’t visualize segmentations using volume rendering. There are many reasons for this, but probably the most important is that soft tissues are not well suited for volume rendering: their texture is not directly useful for visualization. The texture just makes the image messier; it does not make structures more recognizable.
Instead, typically structures are segmented and the homogeneous segmentation result is rendered in 3D:
If you render opaque segments: surface rendering typically provides much nicer results than volume rendering (there are many shading options and rendering is very efficient).
If you render many segments semi-transparently: it does not matter which rendering technique you use, because the result is so complex and hard to interpret that it is rarely useful.
If you render most segments opaque and one or a few segments semi-transparently: you can use surface rendering for the opaque segments and may consider volume rendering for the transparent ones. In most use cases single-volume rendering is sufficient; in rare cases multi-volume rendering might be useful.
Therefore, you probably never actually need to set up multi-volume rendering for dozens of volumes.
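For the common opaque-plus-transparent case, something like this works in the Slicer Python console. This is a minimal sketch; the node name "Segmentation" and the segment name "aorta" are assumptions, so substitute your own.

```python
import slicer

# Assumed node/segment names - replace with your own
segmentationNode = slicer.mrmlScene.GetFirstNodeByName("Segmentation")

# Enable 3D surface rendering of the segmentation
segmentationNode.CreateClosedSurfaceRepresentation()
displayNode = segmentationNode.GetDisplayNode()
displayNode.SetVisibility3D(True)

# Make one segment semi-transparent; all other segments stay opaque
segmentId = segmentationNode.GetSegmentation().GetSegmentIdBySegmentName("aorta")
displayNode.SetSegmentOpacity3D(segmentId, 0.3)
```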
What cardiac structures would you like to render? For what purpose (patient communication, anatomy education, treatment planning, simulation, marketing material, …)? Can you show a few example renderings that you have so far?
I 100% agree with your comments… Why go from a quantitative segmentation to volume rendering…
Big information loss.
Pause…
A) A more realistic render.
B) Glutton for punishment…
C) I explored commercial and non-commercial cinematic rendering. What seems crucial is the segmentation: for example, good aorta, heart, and coronary segmentations are crucial for good CR. I was hoping for a CR with calcium and valves opaque and the rest of the heart translucent.
Restricting myself to the descending aorta and calcium.
Would you quickly explain the difference between VTK GPU raycasting and VTK multivolume?
Also, does this setting affect shape rendering?
VTK GPU raycasting seems to work with multiple volumes… but the opacity does not have the desired effect: as I rotate (180 degrees), the calcium projects right through the aorta.
Volume rendering should work very well for blood pool/calcification segmentation. I don’t think multi-volume rendering is necessary for this: calcium can be distinguished by its voxel value, so you can use the opacity and color transfer functions.
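For example, something along these lines in VTK (a minimal sketch; the Hounsfield-unit control points are illustrative assumptions, and the same points can be entered in Slicer’s Volume Rendering module GUI):

```python
import vtk

# Color: contrast-filled blood pool reddish, calcium near-white
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(100.0, 0.8, 0.2, 0.2)
color.AddRGBPoint(350.0, 1.0, 1.0, 0.9)

# Opacity: soft tissue invisible, blood pool translucent, calcium nearly opaque
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0.0, 0.0)
opacity.AddPoint(100.0, 0.1)
opacity.AddPoint(350.0, 0.9)

volumeProperty = vtk.vtkVolumeProperty()
volumeProperty.SetColor(color)
volumeProperty.SetScalarOpacity(opacity)
volumeProperty.ShadeOn()
```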
You can use surface rendering for the myocardium and valves. You can make the rendering more realistic (“cinematic”) by exporting the segmentation to models, choosing PBR rendering in the Models module, and using the features in the Lights module (in the Sandbox extension).
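The export step can be scripted, too. A hedged sketch for a recent Slicer (the folder name is an arbitrary choice; if PBRInterpolation is not available in your version, the same switch is in the Models module GUI):

```python
import slicer

segmentationNode = slicer.mrmlScene.GetFirstNodeByName("Segmentation")  # assumed name

# Export all segments into a subject hierarchy folder of model nodes
shNode = slicer.vtkMRMLSubjectHierarchyNode.GetSubjectHierarchyNode(slicer.mrmlScene)
folderItemId = shNode.CreateFolderItem(shNode.GetSceneItemID(), "Heart models")
slicer.modules.segmentations.logic().ExportAllSegmentsToModels(segmentationNode, folderItemId)

# Switch every model to PBR shading; material and lighting can then be
# tuned in the Models and Lights modules
for modelNode in slicer.util.getNodesByClass("vtkMRMLModelNode"):
    displayNode = modelNode.GetDisplayNode()
    if displayNode:
        displayNode.SetInterpolation(slicer.vtkMRMLDisplayNode.PBRInterpolation)
```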
If I want to use models in my merged sequence…
I believe that the group of exported models is not a data node but rather a folder of models.
Is it possible to add the folder of models to a sequence?
I use SetDataNodeAtValue(segmentationNode, n) for segmentations, but it does not work for the folder of models.
As you also found, regular GPU raycasting handles one volume at a time, and the order of rendering is always the same, so the depth information between the rendered volumes is lost (one always appears “in front of” the other). Multi-volume rendering considers all volumes at the same time, thus showing correct depth even when the volumes overlap. The problem with multi-volume rendering is that its development in VTK stopped halfway: besides some bugs (some of which have been worked around in the Slicer code to make it function), some important features are missing, such as cropping. We re-added shading, but I believe it is not perfect yet either.
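The difference can be seen in how the pipeline is set up. A minimal VTK sketch (not Slicer-specific); imageData1/imageData2 are placeholder vtkImageData inputs and prop1/prop2 are placeholder vtkVolumeProperty objects with transfer functions:

```python
import vtk

# Multi-volume path: a single mapper ray-casts all volumes together,
# so overlapping volumes occlude each other correctly
mapper = vtk.vtkGPUVolumeRayCastMapper()
multiVolume = vtk.vtkMultiVolume()
multiVolume.SetMapper(mapper)

for port, (imageData, prop) in enumerate([(imageData1, prop1), (imageData2, prop2)]):
    mapper.SetInputDataObject(port, imageData)  # one input port per volume
    volume = vtk.vtkVolume()
    volume.SetProperty(prop)
    multiVolume.SetVolume(volume, port)

renderer.AddVolume(multiVolume)  # assumes an existing vtkRenderer
```

With the regular single-volume path, each volume gets its own mapper and is rendered in a separate pass, which is why the depth relationship between overlapping volumes is lost.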
A very interesting conversation! Do you plan to publish the results of your work later somewhere? I think many Slicer users are interested in approximating cinematic rendering using the platform.
@cpinter brings up a good point regarding multi-volume support in VTK. It is definitely missing many features that have been implemented for single-volume rendering, such as the scattering shading model. We recently added support for mixing RGBA volumes with multi-volume rendering, so development should not be considered stopped. However, it highlights the fact that even small features like RGBA volume support are missing from vtkMultiVolume, probably because the two paths (single vs. multi-volume) are too decoupled. I am happy to discuss this technical point further if needed, or to assist with/review potential contributions.
After searching the forum… I always end up agreeing with Andras Lasso.
With one exception. (there should be support for *.seg.nii.gz).
CR is nice for Twitter, patient education, and exploration. I will explore model rendering further, but converting all my segmentations to models and controlling the rendering individually will take some coding.
I am by no means an expert in CR.
If people are interested in CR specifically, I would check out ParaView if technically inclined and VolView if not. There is a strong relationship between Slicer, Kitware, VTK, ParaView, etc. ParaView is excellent for data exploration and is the most advanced.
Other options are Blender and MeVisLab.
MeVisLab has many awesome renderings on their webpage, but I am unable to find the example networks. If anyone from MeVisLab is reading this, please provide example networks for all the renderings.
Yesterday I created a new module for generating a full-color RGBA volume by combining a segmentation and a scalar volume using some smart filtering and masking. Combined with a fix in the VTK RGBA volume renderer, I find that it creates beautiful renderings that have not been possible before, and it surpasses many “cinematic renderings” in the coloring aspect.
For example, TotalSegmentator segmentation results on the CTLiver sample data set are rendered like this:
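The general idea can be sketched in a few lines of numpy. This is a rough illustration only, not the actual ColorizeVolume implementation; the window values are assumptions:

```python
import numpy as np

def colorize(ct, labelmap, segment_colors, window=(-200.0, 400.0)):
    """ct, labelmap: 3D arrays; segment_colors: label -> (r, g, b) in 0-255."""
    lo, hi = window
    # Normalize CT intensity so the image texture modulates the segment color
    intensity = np.clip((ct - lo) / (hi - lo), 0.0, 1.0)
    rgba = np.zeros(ct.shape + (4,), dtype=np.uint8)
    for label, (r, g, b) in segment_colors.items():
        mask = labelmap == label
        rgba[mask, 0:3] = (np.array([r, g, b]) * intensity[mask, None]).astype(np.uint8)
        rgba[mask, 3] = (255 * intensity[mask]).astype(np.uint8)
    return rgba
```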
Just for reference, surface rendering provides much less texture detail - see below. However, what works well for surface rendering (and is not available for volume rendering) is screen-space ambient occlusion (making things that are behind other objects appear darker).
I think this is the only thing that would be necessary to make stunning volume renderings in VTK. Screen-space ambient occlusion is also very fast, so it could be usable not only for rendering videos, but for real-time rendering for surgical guidance, virtual reality, etc.
I’ve tried VTK’s scattering shading models, but they are slow and the results look very artificial and low-quality (almost as if there was no shading). It is not just my settings - the showcased renderings in the Kitware blog all look pretty bad to me. It looks like some interesting effects applied on top of volume rendering that make the result appear less realistic than basic volume rendering. Scattering also hangs on my laptop and hangs or crashes on my desktop (with an RTX 3060) after a short while when I try to adjust parameters. Also, it does not seem to work at all for RGBA volumes.
@LucasGandel Are there any better examples of VTK’s scattering shader where the resulting renderings look more realistic? Is there a chance that screen-space ambient occlusion will be made available for volume rendering? (The Z-buffer generated by the volume renderer could be used for the occlusion computation.)
@lassoan Awesome work! The texture details look great! Such representations should definitely be considered.
Although I agree that the computation time of the scattering model is problematic, one of the effects I like is the shadows/ambient occlusion it brings, which seem obvious to me in the screenshots of the article you referenced. The interesting point you bring up is that this makes the final rendering look less realistic than before. I would love to discuss this further to understand what makes you think that. If you have concrete examples, please share.
Finally, I love the idea you propose of using SSAO for volume rendering to further improve the results you already have. Technically, writing the normals and positions as done for the polydata mapper could be enough. I’ll try to investigate further.
In this blog post the issue may be that the plain volume rendering of the medical images is quite messy, not showing anything clearly or realistically. When the shadows are added, the images become even more complex and harder to interpret, and some areas become overexposed/underexposed. Maybe the issue is also that there was no particular purpose to the visualization, so random details of the volume got highlighted/suppressed.
The images in this blog entry look OK with basic gradient shading but appear faded and washed out when the volumetric shading is used. In the hybrid mode, the image loses details.
I would like to play a bit more with these options, but Slicer and ParaView hang or crash when I play with the parameters. Also, I would be most interested in trying RGBA rendering, and scattering does not seem to work at all for RGBA volumes.
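For reference, these are the parameters being discussed, as exposed in VTK 9.2+ (a hedged sketch; the values are arbitrary and names may differ in other versions):

```python
import vtk

mapper = vtk.vtkGPUVolumeRayCastMapper()
# 0 favors the classic local gradient shading, higher values favor the
# volumetric scattering model; intermediate values give the hybrid mode
mapper.SetVolumetricScatteringBlending(1.0)
mapper.SetGlobalIlluminationReach(0.2)  # how far shadowing is computed

volumeProperty = vtk.vtkVolumeProperty()
volumeProperty.ShadeOn()
volumeProperty.SetScatteringAnisotropy(0.5)  # forward-scattering phase function
```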
This would be awesome.
Thanks for testing. In addition to the module, a VTK fix that we worked on with @pieper must also be integrated. We usually fix issues in VTK first and then cherry-pick from there to Slicer, but we can make exceptions if something is urgent.
Thanks a lot for the great feedback. I will forward it to the VTK experts in charge of volumetric rendering to open the discussion.
Adding support for RGBA volumes and improving the robustness in Slicer and ParaView are additional topics that can probably be handled without too much effort.
Regarding SSAO, a very simple example adding a volume and an SSAO pass results in OpenGL errors because of incorrect FBO bindings. This would have to be investigated further, but besides that, the required information (normals and positions) is already available in the existing shader. A few days are probably needed to get a proof of concept.
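For anyone who wants to reproduce it, the test case is roughly this (a minimal sketch; with current VTK, adding a volume to the renderer below is what reportedly triggers the FBO errors, while polydata works fine):

```python
import vtk

renderer = vtk.vtkRenderer()
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)

# Standard render steps wrapped in an SSAO post-processing pass
basicPasses = vtk.vtkRenderStepsPass()
ssao = vtk.vtkSSAOPass()
ssao.SetRadius(5.0)      # occlusion sampling radius (world units)
ssao.SetKernelSize(128)  # number of occlusion samples per pixel
ssao.BlurOn()            # soften the occlusion buffer
ssao.SetDelegatePass(basicPasses)
renderer.SetPass(ssao)

# ...adding a vtkVolume to this renderer is where the OpenGL errors appear
```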
@LucasGandel is this something that you or someone from Kitware would look into in the near future?
@drouin-simon Do you think one of your students could explore this? It seems that it may not be a lot of work and could have a huge impact (fast volume rendering with much improved depth perception).
@muratmaga I’ll try to get all the necessary pieces merged today so that you can start playing with it from tomorrow.
Regarding SSAO for volume rendering: I don’t think I’ll manage to find funding/time for this in the near future, unfortunately, but I’m happy to answer technical questions. I agree it is worth giving it a try, as the effort seems reasonable and it might just work.
The bigger part is probably solving the current FBO binding issue while adding multiple render target support to the GPU volume mapper. Then writing the position and normal when the opacity threshold is reached is probably a good start.
Traceback (most recent call last):
  File "/Users/amaga/Desktop/SlicerPreview.app/Contents/bin/Python/slicer/util.py", line 3146, in tryWithErrorDisplay
    yield
  File "/Users/amaga/Desktop/SlicerPreview.app/Contents/Extensions-32228/Sandbox/lib/Slicer-5.5/qt-scripted-modules/ColorizeVolume.py", line 229, in onApplyButton
    self.logic.process(
  File "/Users/amaga/Desktop/SlicerPreview.app/Contents/Extensions-32228/Sandbox/lib/Slicer-5.5/qt-scripted-modules/ColorizeVolume.py", line 348, in process
    dilate.SetKernelSize(dilationKernelSize, dilationKernelSize, dilationKernelSize)
NameError: name 'dilationKernelSize' is not defined