How to perform 3D-cinematic rendering?

Is it possible to perform 3D cinematic rendering from MDCT in 3D Slicer? Are there any modules to do that?

Hi -

We don’t have cinematic rendering functionality right now, but we’re going to be chatting about infrastructure improvements this afternoon so I added a note to talk about that. Mostly for us it’s always a matter of either funded priorities or community contributions.

Can you tell us a bit more about the application - patient education, understanding complex geometry, just playing around, …? Is there any particular clinical problem that you would like to solve?

Note that “cinematic rendering” is just a fancy name for adding more realistic lighting (the most visible difference is shadows) to volume rendering. For most applications, you could probably get functionally equivalent results (see the same clinically relevant details) with the current volume rendering if you spend some time tuning the transfer functions (in the Advanced section of the Volume Rendering module).

In my experience, it is much easier to apply enhanced rendering to CT modalities (e.g. MDCT) than MRI modalities. The image intensity of different tissues is calibrated for CT, while it is only relative for MRI. These advanced renderings have a huge number of adjustable parameters, and this makes a generalized approach difficult. Therefore, while I certainly think one could develop a “cinematic” rendering module for Slicer (given substantial developer resources), it would only serve part of the user community.

A very simple but robust approach that can make both CT and MRI renderings look fancy is to use MatCaps. These are very easy to implement in a volume renderer and have very few parameters to adjust. This figure is generated with MRIcroGL using the MatCaps that come with Blender. I think these could be easily adapted to Slicer. While many MatCaps are a bit gaudy for my tastes, I do think they can help people visualize depth better than traditional volume rendering in some situations.


Implementing surface mesh rendering is certainly easy, and photorealistic rendering is widely available in various software applications. If creating a surface mesh (= model node) is not a problem, then of course this approach may work well. Surface mesh files can be generated directly from segmentations using the “Export to files” feature in Segment Editor.

However, creating a surface mesh most often requires significant effort, and in many cases (e.g., CT volumes) a number of structures can be displayed directly by volume rendering.

Anyway, if we decide to expose VTK’s photorealistic rendering engine (OSPRay) in Slicer (see here), then it will be usable for both surface rendering and volume rendering.

@lassoan MRIcroGL is a volume renderer. MatCaps work for both voxel-based volume rendering (MRIcroGL) and mesh-based surface rendering (Surfice). The images above were from MRIcroGL. The image below shows MRIcroGL using MatCaps for a high-quality volume rendering (left) and a low-quality rendering (right). The low-quality ray casting does not jitter the start position and uses a coarse step size (with correspondingly opaque opacity correction). One uses the voxel gradients to select the MatCap sample. The use of gradients is described in Chapter 5 of Engel et al.’s (2006) Real-Time Volume Graphics as well as on my web page.
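The gradient-based lookup described above can be sketched in a few lines of numpy. This is a hedged illustration of the general technique, not MRIcroGL's actual shader code: the surface normal at each voxel is approximated by the normalized intensity gradient, and the normal's view-plane components index into the 2D MatCap texture.

```python
import numpy as np

def matcap_uv(volume):
    """Per-voxel (u, v) MatCap texture coordinates in [0, 1].

    The normalized intensity gradient approximates the surface normal;
    its x/y components are remapped from [-1, 1] to texture space.
    """
    gz, gy, gx = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    mag[mag == 0] = 1.0                 # avoid division by zero in flat regions
    nx, ny = gx / mag, gy / mag         # view-space normal x/y components
    return nx * 0.5 + 0.5, ny * 0.5 + 0.5

# Tiny synthetic volume: a bright sphere in a dark background
z, y, x = np.mgrid[-8:8, -8:8, -8:8]
vol = (x**2 + y**2 + z**2 < 36).astype(float)
u, v = matcap_uv(vol)
```

A real renderer would do this per-sample in the fragment shader and fetch the MatCap color at `(u, v)`; the numpy version just makes the coordinate math explicit.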


Excellent images @Chris_Rorden - would love to see this technique in Slicer/OHIF-vtk/vtkjs

@pieper and @lassoan -
Since MRIcroGL is a complicated program, I created a simple JavaScript web page to illustrate MatCap-based volume rendering. Click here for the GitHub project and click here for a live demo. You can use the file menu to load NIfTI format images. The control of the transfer texture is pretty rudimentary - programs like MRIcroGL and Slicer can provide better control. However, it does illustrate the principle.


This is amazing, thanks a lot for sharing. This seems to be a very simple and efficient way of providing a textured appearance without generating 3D textures or 2D texture coordinates. It could be particularly useful when a different material is applied to each structure (using multi-volume rendering or a labelmap-based lookup of the texture image).

I’m going to try this method for assessing peritoneal carcinomatosis by MDCT.

Thanks to all of you for your help.
I understand that I can use Slicer to convert CT data to NIfTI format and then upload it to the JavaScript web page. Am I right?

CT scans have the advantage of having calibrated image intensity, but the disadvantage that bone has extreme X-ray attenuation relative to soft tissue. Therefore, nice CT rendering will require a nicely designed transfer texture. In brief, MRI scans often look nice without a transfer texture, but since CTs are calibrated one can design a transfer texture that works with most CTs. My JavaScript demo does not allow the user to specify a transfer texture, so while it is a proof of concept, it is not a great solution.
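The point about calibration can be made concrete with a small sketch. Because CT intensities are in Hounsfield units, a transfer texture can be anchored at known tissue values and reused across most scans; the HU anchors and opacities below are typical textbook numbers chosen for illustration, not values from any particular software.

```python
import numpy as np

# Hypothetical anchor points:  air,   fat, soft tissue, bone, dense bone
HU_ANCHORS = [-1000, -100,  40,  400, 3000]
OPACITY    = [  0.0,  0.0, 0.2,  0.8,  1.0]   # hide air/fat, emphasize bone

def opacity_lookup(hu):
    """Piecewise-linear opacity transfer function over Hounsfield units."""
    return np.interp(hu, HU_ANCHORS, OPACITY)

samples = np.array([-1000, -500, 0, 300, 1000])
print(opacity_lookup(samples))
```

The same five anchor points would work on most calibrated CTs, which is exactly why a fixed transfer texture is feasible for CT but not for (uncalibrated) MRI.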

Would you be willing to share a nice modern CT dataset? One can get high-quality CT scans at MorphoSource, but these do not include soft tissue. I do not have access to a modern CT dataset, so it is hard for me to design a CT transfer texture.

I was able to save the CTA-cardio.nrrd volume from Slicer’s SampleData module in .nii format and load it into Chris’s web page. It looked really nice. If you adjust the min/max range you can highlight different structures.

Here is what the CTA-cardio image looks like with the MatCaps and a simple transfer texture. I suspect a transfer-texture-specific pre-filter could improve this.


This is really nice rendering. Would it work well with large datasets from MorphoSource?

Would it be possible to implement this technique in Slicer?

Agreed - nice rendering @Chris_Rorden :+1:

@drouin-simon and I discussed it a bit - VTK needs to have better support for sending down extra transfer function parameters (e.g. the extra 2D texture for the sphere map for the normal vector lookup). Simon has a good idea of what’s needed, but it would also help if there’s anyone on the Kitware side with similar interests.

This sounds like multi-volume rendering: multiple images used as inputs - some of them are 3D volumes as usual, some of them are 3-slice volumes that contain RGB components of the matcaps texture image. Could we use multi-volume renderer with special shaders to implement the matcaps lookup?

For information, there is currently a PR in VTK that adds Physically Based Rendering.
Michael said:

It is supported on the PolyDataMapper only.
You can add any texture you want (vtkProperty::AddTexture) and sample it in the shader.
Environment reflections (sphere map) and normal mapping will be supported natively by the shader in VTK 9.0 (shaders are only for PolyData mapper though) using multiple textures.

I guess it could be extended to the Image mapper though…

Yes, it would be excellent to share the lighting maps and other infrastructure between polydata and volume rendering.

@lassoan it’s not so much multi-volume (although it should apply there too) as managing extra 2D textures that store the normal maps etc.

@muratmaga Here are my thoughts regarding the images from MorphoSource. The attached images below show a MorphoSource skull with matcap-modulated volume rendering (left), pure-MatCap volume rendering (center), and MatCap-based surface rendering (right, Surfice).

  • These are very high-resolution images, but typically they only have a single hard surface boundary (e.g. bone vs. air). This case seems ideally suited for mesh-based surface rendering instead of volume rendering. For example, the Surfice function “Convert voxelwise volume to mesh” will convert a voxel-based image to an iso-surface mesh; alternatively, you can use Slicer’s interactive Editor (I would suggest saving to any format except STL, which is unable to share vertices). For large volumes, volume rendering will tend to be much slower than surface rendering.
  • The quality of the DICOM headers varies for the different images available on MorphoSource. Many convert with any tool without problems. On the other hand, some lack instance numbers, which will disrupt conversion. You can download and compile the development branch of dcm2niix from GitHub for a version that should salvage these images.
  • In my experience, most graphics cards are unable to handle any texture that exceeds 2 GB. So rows × columns × slices × 4 bytes (X×Y×Z×4) must be less than 2 GB. Slicer’s “Crop Volume” or “Resize Image (BRAINS)” functions are ideally suited to both crop and rescale volumes.
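The size check in the last bullet is simple arithmetic and worth making explicit. This is a hedged sketch of that rule of thumb, assuming 4 bytes per voxel (RGBA) and a 2 GB single-texture limit; actual limits vary by GPU and driver.

```python
import numpy as np

LIMIT = 2 * 1024**3  # assumed 2 GB per-texture limit

def texture_bytes(dims, bytes_per_voxel=4):
    """GPU texture size for a volume of (rows, columns, slices) RGBA voxels."""
    return int(np.prod(dims)) * bytes_per_voxel

def fits_in_texture(dims):
    """True if the volume can be uploaded as a single texture under LIMIT."""
    return texture_bytes(dims) < LIMIT

print(fits_in_texture((512, 512, 512)))      # 0.5 GB -> fits
print(fits_in_texture((1024, 1024, 1024)))   # 4 GB -> too large
```

A 1024³ MorphoSource scan would therefore need cropping or downsampling (e.g. with Slicer’s “Crop Volume”) by roughly a factor of two per axis before volume rendering.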