Mixed volume rendering and surface representation of segmentation has incorrect 3D appearance if segment is not fully opaque

I am using volume rendering to view the air/non-air interface in a CT volume (only voxel values -600 to -300 HU are opaque, all others are transparent). In addition, I have segmented a tube which is in the airway, and created a closed surface representation of that segment which is visible in the 3D view. As long as the segment is fully opaque, the rendering of the 3D view appears correct, like this:

However, if the opacity of the tube segment is reduced at all, it is always rendered as if it were behind the volume rendering of the image volume data, like this:

This is clearly a rendering error. As the 3D view is rotated, physically impossible things happen: no portion of the tube ever appears in front of any portion of the rendered image volume data.

Reducing the opacity of the image volume data does not mitigate the layering problem; the tube segment eventually becomes visible through the image volume data, but it still always appears to be behind it, even when it should be in front.

The problem does not seem to be affected by changing the Rendering setting in the Volume Rendering module between the CPU and GPU choices.

I assume this comes from a bug in the rendering code that handles mixing the image volume rendering with a closed surface representation, but figuring out how to fix it is well beyond my ability.

Currently, I will work around the issue by always leaving my closed surface object fully opaque, but it would be nice to be able to use a partially transparent representation as well in the future.

To render semi-transparent models correctly, you need to enable depth peeling.


You can enable depth peeling by default in menu: Edit / Application settings / Views / 3D viewer defaults / Use depth peeling - then restart Slicer.
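If you prefer to do this from the Python console instead of the application settings, a minimal sketch would be the following (it assumes the vtkMRMLViewNode UseDepthPeeling property applies to already-open 3D views; I have not verified whether a restart is still needed for existing views):

```python
# Sketch: enable depth peeling on all existing 3D view nodes from the
# Slicer Python console. Assumes vtkMRMLViewNode.SetUseDepthPeeling
# takes effect on open views; the menu setting above only changes the
# default for newly created views after a restart.
import slicer

for viewNode in slicer.util.getNodesByClass("vtkMRMLViewNode"):
    viewNode.SetUseDepthPeeling(True)
```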

Thanks, I hadn’t noticed this option before. However, the segmented tube is now rendered semi-transparently in front of the volume-rendered image volume. It looks like this with depth peeling on:

(The tube should disappear from view as it enters the nose, hidden by the rendered image volume, as in the very first image attached above for the fully opaque case)

So this changes the behavior, but does not resolve the issue. Thanks for the response, I very much appreciate it.

GPU ray casting should work well with depth peeling, and it is usually also orders of magnitude faster than CPU ray casting.
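For example, switching the scene's volume rendering to GPU ray casting could be scripted roughly like this (a sketch only; it assumes the ChangeVolumeRenderingMethod helper in the Volume Rendering module logic and the GPU ray cast display node class name):

```python
# Sketch: switch existing volume rendering display nodes to GPU ray
# casting. Assumes vtkSlicerVolumeRenderingLogic.ChangeVolumeRenderingMethod
# and the "vtkMRMLGPURayCastVolumeRenderingDisplayNode" class name.
import slicer

vrLogic = slicer.modules.volumerendering.logic()
vrLogic.ChangeVolumeRenderingMethod("vtkMRMLGPURayCastVolumeRenderingDisplayNode")
```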

If you use a step-function-like scalar opacity transfer function in the volume renderer (as you do in the attached screenshot), then you may just as well apply the same threshold range in the Segment Editor to create an opaque surface mesh. You can then visualize everything without the need for volume rendering.
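A rough sketch of that approach from the Python console might look like the following. The volume node name "CT" is an illustrative assumption, the -600 to -300 HU range is taken from the description above, and in older Slicer versions setSourceVolumeNode may be named setMasterVolumeNode:

```python
# Sketch: create an opaque surface mesh from the same -600..-300 HU
# threshold used in the volume rendering transfer function.
# Assumes a scalar volume named "CT" is loaded; names are illustrative.
import slicer

volumeNode = slicer.util.getNode("CT")

segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(volumeNode)
segmentId = segmentationNode.GetSegmentation().AddEmptySegment("AirInterface")

# Run the Threshold effect through a segment editor widget.
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(volumeNode)
segmentEditorNode.SetSelectedSegmentID(segmentId)
segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "-600")
effect.setParameter("MaximumThreshold", "-300")
effect.self().onApply()

# Build the closed surface representation shown in the 3D view.
segmentationNode.CreateClosedSurfaceRepresentation()
```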


Thanks again. It does work with GPU ray casting and depth peeling. I will also experiment with using surface models instead of volume rendering.

The appeal of the volume rendering was the ease of working with 4D data: as the volume is cycled through the time steps, the volume-rendered representation naturally changes with the image volume. To do the same with surface models, I need to generate surface models for each frame (of which there are 39 in this case), assemble them into a sequence, and put the sequence into the same sequence browser. These are all doable steps (a rough scripted sketch follows below), and I am going to need to do them for the tube anyway (which needs to be segmented), but there was significant appeal in just keeping the volume rendering for the air interface, which works out of the box with the preset I created and automatically handles 4D display.

I also liked that it was easy to create a representation where it was trivial to tell which side of the surface was facing air and which side was facing non-air, by putting two different colors on either end of my step function. To do the same with surface models, I think I would need to generate two surfaces slightly apart from one another; depending on how they got smoothed, they might accidentally intersect, and there would be a visible gap between them if I tried to use clipping planes to provide functionality similar to the clipping ROI in the Volume Rendering module, not to mention having to synchronize the clipping planes between the two surface models. All of that is much easier if I can stick with the Volume Rendering module, which is very easy and intuitive to use.
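For reference, the per-frame workflow described above might look roughly like this when scripted. This is only a sketch: the browser node name "CT_browser" and the buildSurfaceModel() helper are illustrative assumptions standing in for whatever segmentation/model export is actually used for each frame.

```python
# Sketch: export a surface model for each time point of a 4D volume and
# gather the models into a sequence that plays in the same sequence
# browser. "CT_browser" and buildSurfaceModel() are hypothetical names.
import slicer

browserNode = slicer.util.getNode("CT_browser")
volumeSequenceNode = browserNode.GetMasterSequenceNode()
proxyVolumeNode = browserNode.GetProxyNode(volumeSequenceNode)

modelSequenceNode = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLSequenceNode", "TubeModelSequence")
modelSequenceNode.SetIndexName(volumeSequenceNode.GetIndexName())
modelSequenceNode.SetIndexUnit(volumeSequenceNode.GetIndexUnit())

for itemIndex in range(volumeSequenceNode.GetNumberOfDataNodes()):
    # Jump the browser to this time point so the proxy volume updates.
    browserNode.SetSelectedItemNumber(itemIndex)
    indexValue = volumeSequenceNode.GetNthIndexValue(itemIndex)

    # buildSurfaceModel() is a placeholder for the segmentation/model
    # export step that produces a vtkMRMLModelNode for this frame.
    modelNode = buildSurfaceModel(proxyVolumeNode)
    modelSequenceNode.SetDataNodeAtValue(modelNode, indexValue)
    slicer.mrmlScene.RemoveNode(modelNode)  # the sequence stores a copy

# Replay the model sequence together with the volume sequence.
browserNode.AddSynchronizedSequenceNodeID(modelSequenceNode.GetID())
```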

Thanks again for your help. Slicer is an amazing tool, and the community support is second to none!
