How to perform 3D-cinematic rendering?

I just meant that the multi-volume GPU raycast mapper can already handle multiple image inputs, so that infrastructure may be usable for passing additional texture images to the volume renderer. I agree that it would be nicer to have a standard, dedicated interface in volume properties to store texture images.
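For anyone who wants to poke at that infrastructure, here is a minimal sketch of feeding two image inputs to VTK's GPU ray-cast mapper (assuming VTK 8.2 or later, where vtkMultiVolume was introduced; the sources and transfer functions are just placeholders):

```python
# Minimal sketch, assuming VTK >= 8.2: the GPU ray-cast mapper accepts
# several image inputs on distinct ports, which is the "multiple image
# inputs" infrastructure mentioned above.
import vtk

# Placeholder image sources standing in for real volumes
source1 = vtk.vtkRTAnalyticSource()
source2 = vtk.vtkRTAnalyticSource()

mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(0, source1.GetOutputPort())
mapper.SetInputConnection(2, source2.GetOutputPort())

def makeProperty():
    # Simple grayscale transfer functions so both volumes are visible
    color = vtk.vtkColorTransferFunction()
    color.AddRGBPoint(0.0, 0.0, 0.0, 0.0)
    color.AddRGBPoint(255.0, 1.0, 1.0, 1.0)
    opacity = vtk.vtkPiecewiseFunction()
    opacity.AddPoint(0.0, 0.0)
    opacity.AddPoint(255.0, 0.5)
    prop = vtk.vtkVolumeProperty()
    prop.SetColor(color)
    prop.SetScalarOpacity(opacity)
    return prop

multiVolume = vtk.vtkMultiVolume()
multiVolume.SetMapper(mapper)
for port in (0, 2):
    vol = vtk.vtkVolume()
    vol.SetProperty(makeProperty())
    multiVolume.SetVolume(vol, port)

renderer = vtk.vtkRenderer()
renderer.AddVolume(multiVolume)
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindow.Render()
```

The port numbers themselves are arbitrary; they just have to match between SetInputConnection() and SetVolume(). Both volumes render at the same position here; vtkVolume's usual transform calls can offset them.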

Thanks for the pictures, Chris. Yes, a lot of the images in MorphoSource come from scans of specimens in osteological collections and will not have any soft tissue to begin with. I also noticed the issues with DICOM headers. In my case, Slicer's DICOM Patcher worked fine for fixing the issues, but dcm2niix is definitely an option.
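For anyone scripting the conversion, here is a hedged sketch of calling dcm2niix from Python (assuming dcm2niix is on the PATH; the folder paths are hypothetical):

```python
# Convert a problematic DICOM folder to NIfTI with dcm2niix.
# -z y gzips the output, -o sets the output directory.
import subprocess

dicom_dir = "/path/to/dicom"     # hypothetical input folder
output_dir = "/path/to/output"   # hypothetical output folder
subprocess.run(["dcm2niix", "-z", "y", "-o", output_dir, dicom_dir], check=True)
```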

I have done some tests to figure out the hardware limitations for GPU rendering in Slicer. I don't know if this applies to the MatCaps/your software, but for VTK GPU ray casting the important parameter is the card's MAX_3D_TEXTURE_SIZE OpenGL capability. Almost all ATI cards I have looked at are capped at 2K, whereas newer NVIDIA GeForce cards support 16K, and newer Quadros even up to 32K. The second requirement is that the entire volume must fit in the GPU's memory.
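If you want to check your own card, here is a rough sketch of querying that capability from Python (assuming PyOpenGL is installed, which is not the case in every VTK/Slicer environment):

```python
import vtk
from OpenGL.GL import glGetIntegerv, GL_MAX_3D_TEXTURE_SIZE

# Create an OpenGL context via VTK and make it current for the query
renderWindow = vtk.vtkRenderWindow()
renderWindow.SetOffScreenRendering(1)
renderWindow.Render()
renderWindow.MakeCurrent()

print("MAX_3D_TEXTURE_SIZE:", glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE))

# Rough estimate for the second requirement (volume must fit in GPU memory)
dims = (512, 512, 2000)   # illustrative CT volume dimensions
bytes_per_voxel = 2       # 16-bit scalars
print("Volume needs about %.1f GB of GPU memory"
      % (dims[0] * dims[1] * dims[2] * bytes_per_voxel / 1024**3))
```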

But still, these are very good images.

MeVisLab has excellent cinematic rendering (MeVisLab: Download), and it is relatively easy to use as well. I have created a project file that does sculpting and saving/loading of the cropped dataset, and I made crop-box slider bars. MeVisLab is visual network-based programming and very easy to learn. I have made up a few colour transfer functions as well. Email me at derlang29@bigpond.com if you are interested in getting my working project file. I don't work for MeVisLab.


Cool images! Agreed, MeVisLab is excellent. For those who don't know, Slicer's Python environment relies heavily on PythonQt from the MeVis team.

A few years ago when I was visiting Bremen, we did an experiment hooking Slicer up to MeVisLab (something like the MATLABBridge), so there's a lot of interoperability potential if people want to go down that path. People should of course be aware of the commercial licensing and regulatory considerations: MeVisLab is free for non-commercial use, while the paid pro version is supported for medical device use.


I got a quote recently. The non-commercial full version is 3000 euros and the commercial full version is 6000 euros. The non-commercial non-full version is free. All my images were generated using the free non-commercial non-full version.


I’m not sure how useful such artistic (plastic-looking) renderings are, but they definitely look nice.

How long does the rendering take?

Have you experimented with open-source photorealistic rendering engines, too?

The capabilities of MeVisLab are impressive, but the licensing fee is a nuisance, and its restrictive license makes it impossible to use in collaborative research (you cannot freely modify, enhance, and redistribute the library).

I did some experiments a few years ago with LuxRender, and you can do some really nice things, like the example below. But most of the fancy renderers are surface-oriented, and if they handle volumes at all, it is for effects like clouds or flames.

Also, they are very slow even when GPU-accelerated. They will basically run forever tracing light paths, and you decide when the picture is good enough.

I think if we are going to invest time, it's better to try improving the existing volume rendering pipelines like we've been doing (although it would be great if there were more open-source options).

Each image renders progressively until it becomes high quality. You can set the number of iterations in the program. On average, it takes 3-5 seconds to produce an image of good quality and 5-7 seconds for a very good image.
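For readers unfamiliar with progressive rendering, the idea is just a running average of noisy per-pixel samples; here is a toy sketch (illustrative only, not the renderer's actual code):

```python
# Each pass adds one noisy sample per pixel; the running mean converges
# (noise falls roughly as 1/sqrt(iterations)), and the user decides when
# the picture is good enough or caps the iteration count.
import numpy as np

def render_one_sample(height, width, rng):
    # Stand-in for tracing one light path per pixel: a smooth gradient plus noise
    true_image = np.tile(np.linspace(0.0, 1.0, width), (height, 1))
    return true_image + rng.normal(scale=0.3, size=(height, width))

rng = np.random.default_rng(0)
accum = np.zeros((256, 256))
for iteration in range(1, 101):   # the iteration budget the user sets
    accum += render_one_sample(256, 256, rng)
    image = accum / iteration     # current best estimate of the picture
```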

The images are better than regular volume rendering and ray casting in that they provide our hospital's surgeons with better depth perception and look realistic, just like a cadaver. It has been extremely useful at the hospital I work at. The feedback I have been given is that it is much easier to understand the 3D images produced with cinematic rendering. Interestingly, I am also told that the generated images are less plastic-looking and more lifelike.

The last rendering engine I used was VTK's. I also used Mitsubishi's VolumePro 1000 hardware-accelerated ray-casting chips (the VolumePro line was later acquired by TeraRecon): https://www.vision-systems.com/non-factory/security-surveillance-transportation/article/16744172/terarecon-launches-family-of-realtime-3d-visualization-products.

In most “cinematic rendering” images that I've seen, shadows are very strong, probably because more realistic shadows are the main feature of this rendering mode. What I find very odd is that surgeons normally don't want this kind of dramatic lighting (harsh shadows, contoured lighting) in the operating room: they use flood lights and head lights to minimize shadows. So why do they accept, or even claim to prefer, images with shadows?

I understand that shadows can certainly improve depth perception, but casting shadows (decreasing visibility) in certain parts of the image risks making potentially important details less clearly visible. There are cleaner, more direct ways of improving depth perception, such as virtual reality, whose immersive stereo rendering provides more depth cues through disparity and motion parallax.
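For anyone who wants to compare lit and unlit looks on their own data, here is a hedged sketch for the Slicer Python console (the node name is illustrative; note that VTK's standard shading is local gradient-based lighting, not the full cast shadows of cinematic rendering):

```python
# Run in Slicer's Python console, where the slicer module is preloaded.
volumeNode = slicer.util.getNode("MRHead")  # illustrative: any loaded volume
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.SetVisibility(True)

volumeProperty = displayNode.GetVolumePropertyNode().GetVolumeProperty()
volumeProperty.ShadeOn()        # ShadeOff() gives the flat, shadow-free look
volumeProperty.SetAmbient(0.2)
volumeProperty.SetDiffuse(0.7)
volumeProperty.SetSpecular(0.3)
```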

Long rendering time is an important problem, too. A several-second rendering time (about 100x slower than usual) makes “cinematic rendering” unsuitable for a wide range of applications, such as interactive exploration of the volume, surgical guidance, virtual reality, etc.

It would be nice to be able to play with photorealistic rendering options to understand if shadows are good or bad after all, compare it to virtual reality, etc. Is there a sample application that you can share with us to play with (that does not require us to build anything or sign any license agreement)?


Just download the free non-commercial version of MeVisLab; I can then email you my project file. It's only a rudimentary program with some minor bugs.

Yes, you are right that harsh shadows are not useful. The cinematic renderings I show to the surgeons (vascular and orthopaedic) don't have strong shadows, so nothing is missed.

Apparently MeVisLab has a module for VR as well, but I think augmented reality is better, so surgeons can use it in the OR (e.g. with the Microsoft HoloLens) and see a 3D image superimposed on the patient.

The current rendering hardware is an NVIDIA 2080 Ti.

I created a new 'tomography' shader that is included with the latest release of MRIcroGL. It is nice to have a selection of free tools for the job: Slicer provides a huge amount of flexibility, MRIcroGL generates reasonably interactive, reasonably pretty volume renderings on typical laptops for typically sized volumes, and MeVisLab can create stunning renderings, though its high quality is not interactive and requires high-end hardware.

I do not have access to many CT images, nor do I have experience with CT color tables. If anyone wants to make suggestions or improvements, I would like to hear about them.
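As a starting point for suggestions, here is a hedged sketch of a typical CT transfer function expressed in VTK terms (the Hounsfield control points are illustrative, not calibrated values):

```python
# Typical CT color/opacity ramp: air transparent, soft tissue faint, bone bright.
import vtk

color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(-1000, 0.0, 0.0, 0.0)   # air
color.AddRGBPoint(-100, 0.6, 0.4, 0.3)    # fat
color.AddRGBPoint(40, 0.9, 0.6, 0.5)      # soft tissue
color.AddRGBPoint(400, 1.0, 0.95, 0.9)    # bone

opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(-1000, 0.0)
opacity.AddPoint(-100, 0.0)
opacity.AddPoint(200, 0.2)
opacity.AddPoint(400, 0.8)

volumeProperty = vtk.vtkVolumeProperty()
volumeProperty.SetColor(color)
volumeProperty.SetScalarOpacity(opacity)
volumeProperty.ShadeOn()
```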
