Retain Image Color in Volume Rendering

Yes, you should be able to color a model’s surface using a color volume in the latest Slicer Preview Release. I’ll double-check that it works as expected.

Memory usage at 92% means there is a high chance of running out of memory. If you configure a few tens of GB of swap space, that should take care of it.

What are the size and dimensions of your data set? What is the scalar type? (These are shown in the Volume information section of the Volumes module.)

The first thing I tried was volume rendering, but then I just get a black box, so I’m wondering how I can cut out the black parts.

Scalar type is unsigned char, and it is 1.2 GB of 388 1080×1080 TIFF images.

This is my dataset : https://1drv.ms/u/s!AprYdPzSEdGHgpo5lgPqkwL-hQdi8g?e=xW1FPa

How did you create these images? Can you just export them as a scalar volume? Then you could apply color mapping in Slicer, easily use volume rendering, etc. The problem is that the color look-up is burned into the image.

Ah hmm. I’m not sure, I didn’t create them. I’ll ask if that’s possible.

Is there any work around for these images?

If the colors are the result of color imaging, there is not much to do (then you probably want to keep the original colors), but if they are the result of applying a color map (color look-up table), then it would be better to get the original scalar images.

Anyway, color volume rendering should work, too, as described in this post: Merge colored images and show them as 1 volume

I’ve downloaded your images and used this script to add an alpha channel and enable direct RGBA volume rendering (just copy-paste it into Slicer’s Python console after you have loaded your data set):

# Find the loaded vector (RGB) volume
colorVolume = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLVectorVolumeNode")

# Convert the RGB image to RGBA by appending the luminance as the alpha channel
luminance = vtk.vtkImageLuminance()
luminance.SetInputConnection(colorVolume.GetImageDataConnection())
append = vtk.vtkImageAppendComponents()
append.AddInputConnection(colorVolume.GetImageDataConnection())
append.AddInputConnection(luminance.GetOutputPort())
append.Update()
colorVolume.SetAndObserveImageData(append.GetOutput())

# Enable volume rendering
volRenLogic = slicer.modules.volumerendering.logic()
displayNode = volRenLogic.CreateDefaultVolumeRenderingNodes(colorVolume)
displayNode.SetVisibility(True)
# Enable direct RGBA color mapping (components are not treated independently)
displayNode.GetVolumePropertyNode().GetVolumeProperty().SetIndependentComponents(0)

After slightly adjusting the scalar opacity mapping in the Volume rendering module and changing the background to black, I got this beautiful rendering:
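If you prefer to do the opacity tweak from the Python console as well (continuing in the same session as the script above, so `displayNode` is still defined), something like this should work. The intensity breakpoints are assumptions; tune them for your data:

```python
# Sketch: make near-black voxels fully transparent and brighter voxels opaque.
# The breakpoint values (20, 40) are assumptions - adjust them to your data.
volumeProperty = displayNode.GetVolumePropertyNode().GetVolumeProperty()
scalarOpacity = volumeProperty.GetScalarOpacity()
scalarOpacity.RemoveAllPoints()
scalarOpacity.AddPoint(0, 0.0)     # black background: invisible
scalarOpacity.AddPoint(20, 0.0)
scalarOpacity.AddPoint(40, 0.8)    # tissue: mostly opaque
scalarOpacity.AddPoint(255, 1.0)
```

With direct RGBA mapping enabled, this opacity function is applied to the alpha (luminance) component, which is why mapping low luminance to zero opacity hides the black box.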

A video created by Screen Capture module:

We plan to release Slicer 5 soon and are looking for nice images that could demonstrate the capabilities of the application. Would you consider allowing this data set to be showcased as an image or video (with proper acknowledgments and a reference to the publication)?

Oh wow that looks really good, thanks!
I’ll ask at work if they are fine with that, I’ll let you know :slight_smile:


I’ve been able to recreate the volume, which is really cool. Now I am looking to export it in some form (with the end goal of creating more complex animations).

Therefore I tried to segment it, as I read in all the other threads that you can’t export a volume rendering. However, is it possible to retain the color of the model in the segmentation?

I have referenced your answers here, yet I was unsuccessful in creating a mesh with colors. Is it possible to do this with this software?


Volume rendering is a display technique, which produces a 2D color picture. There is nothing else that could be “exported”. See more detailed description here.

To display a volume like this, you need to use volume rendering. Blender can do everything, including volume rendering, but of course it is very complicated to achieve something like what is shown above. If you want to try it anyway, you can find some pointers here.


Thanks a lot for the info, it is greatly appreciated. The BVTKNodes link you provided looks very interesting. The result of the sample set you provided looks really nice.

I am looking to give it a shot. If it doesn’t take too much time: with the programs mentioned being ParaView, the BVTKNodes plugin, and Blender itself, what would be the general workflow here?

You should be able to load the volume directly into Blender and render it there.


I have the same question @lassoan: is it possible to retain the color of the model in the segmentation?

I tried to use Probe volume with model, as you explained here, but my results were very “weird”.

This is the volume rendering of the segmentation:

This was the result with “Probe volume with model” > “Direct color mapping”:

And after changing from “Direct color mapping” to “Color table”:

You would need to create a color map similar to the color transfer function that you use for volume rendering, but of course you will never get similar image quality with surface rendering as with volume rendering. This has been discussed previously in other topics, see for example here:


Thank you @lassoan for your reply. Could you help me to create this color map? I’m new with 3d slicer :frowning:

For scalar (not RGB) volumes

You can copy the color transfer function to a color node by copy-pasting this into the Python console:

volumeRenderingPropertyNode = slicer.mrmlScene.GetFirstNodeByClass('vtkMRMLVolumePropertyNode')
colorNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLProceduralColorNode')
colorNode.GetColorTransferFunction().DeepCopy(volumeRenderingPropertyNode.GetColor())
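To actually use the copied colors on the probed model, you can assign the new color node to the model’s display node. A sketch, assuming the snippet above was just run (so `colorNode` exists) and that your probed model node is named “ProbedModel” (substitute your own node name; the scalar array name is also an assumption, check the Models module to see what “Probe volume with model” actually created):

```python
# "ProbedModel" is a hypothetical node name - replace with your probed model's name
modelNode = slicer.util.getNode("ProbedModel")
modelDisplayNode = modelNode.GetDisplayNode()
# Color the surface by the probed scalars using the copied color transfer function
modelDisplayNode.SetAndObserveColorNodeID(colorNode.GetID())
modelDisplayNode.SetScalarVisibility(True)
# The active scalar array name may differ - check Models module > Scalars
modelDisplayNode.SetActiveScalarName("NRRDImage")
```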

However, as I wrote above, do not expect surface rendering to be even remotely similar to volume rendering; expect something like this instead (left: volume rendering; right: surface rendering of the probed surface):

The reason is that in volume rendering the texture/discoloration comes from the cloud of lower- or higher-intensity voxels around the isosurface value, while in surface rendering the discoloration mainly comes from image interpolation artifacts (if you segment by thresholding, then you ideally get a surface where all points have exactly the same scalar value, and any difference is due to small interpolation errors).

If you want somewhat more similar results, then you can apply some Gaussian smoothing (using the Gaussian Blur Image Filter module) to the input volume before you probe the volume with the model (that makes surrounding regions somewhat influence each point’s intensity):
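A scripted version of that smoothing step might look like the sketch below. The node names and the sigma value are assumptions; the parameter names are those of the Gaussian Blur Image Filter CLI module:

```python
# Sketch: run the "Gaussian Blur Image Filter" CLI module from the Python console.
# "InputVolume" is a hypothetical node name - use your own volume node's name.
inputVolume = slicer.util.getNode("InputVolume")
blurredVolume = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLScalarVolumeNode", "InputVolumeBlurred")
parameters = {
    "inputVolume": inputVolume.GetID(),
    "outputVolume": blurredVolume.GetID(),
    "sigma": 1.5,  # smoothing strength - tune for your data
}
# Runs synchronously; returns when the filter has finished
slicer.cli.runSync(slicer.modules.gaussianblurimagefilter, None, parameters)
```

Then probe the blurred volume (instead of the original) with the model.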

For RGB volumes

If you have RGB volumes, then you don’t need a colormap; you use direct color mapping instead. The fundamental difference between volume rendering and surface rendering still applies, though, and you’ll basically get a uniformly colored surface if you create segments by thresholding.

You must have chosen the wrong volume (not the RGB volume) when you used Probe volume with model if it came out like that in your screenshot. Try the probing again, and if you cannot figure out what’s wrong then upload your scene as a .mrb file somewhere and post the link here.
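For reference, the probing step can also be run from the Python console. A sketch, assuming hypothetical node names (the capitalized parameter names are those of the Probe volume with model CLI module):

```python
# Hypothetical node names - substitute your RGB volume and segmented model
rgbVolume = slicer.util.getNode("RGBVolume")
inputModel = slicer.util.getNode("SegmentedModel")
outputModel = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "ProbedModel")
parameters = {
    "InputVolume": rgbVolume.GetID(),
    "InputModel": inputModel.GetID(),
    "OutputModel": outputModel.GetID(),
}
# Runs the "Probe volume with model" CLI module synchronously
slicer.cli.runSync(slicer.modules.probevolumewithmodel, None, parameters)
```

Picking the volume node explicitly in a script like this avoids accidentally probing the wrong volume in the GUI.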

I get this with the default red-green-blue colormap:

And this is what I see when I switch to direct mapping:

I’m not sure what your goal is, but I don’t think you can get realistic surface textures from the color in these cross-sectional images. If you want to see nice surface texture then it is better to take a photo of dissected organs and apply that texture to surface models.


Wow, thank you so much @lassoan ! Your answer will help me a lot, thanks again!


I tried to use Probe volume with model but it didn’t work :frowning:. I used the RGBA volume as the input to Probe volume with model (with direct color mapping) and I chose my segmented model. My goal is to export my segmented models with the RGB volume as texture.

Here is the link to access the .mrb file: https://drive.google.com/drive/folders/129_MfWNBdJIzO6D4qiiVZVZXhXQmKKmw?usp=sharing

I’ve checked the scene. The problem was that the transform was not hardened on the RGB volume, and CLI modules such as “Probe volume with model” do not take into account transforms that are dynamically applied to a node. You can harden a transform in the Data module by right-clicking on the icon in the transform column.


Nice! I reach that result too! Thank you very much @lassoan :slight_smile:


@lassoan when I try to save this model by clicking “Save” and selecting “OBJ”, I get this result in MeshLab:

Also, when I duplicated the model, disabled the scalars on one copy, and kept both visible, I get this result in 3D Slicer:

For me, this model is pretty good (considering the segmentation). My question is: can I export exactly this model as OBJ?