Volume rendering colorized with segmentation

I’d like to have a follow-up chat about segmentation rendering. I found the VTK shader bug we discussed before, so now shading can be made to work (no PR yet, but it’s a small set of changes).

This rendering looks very nice!

It should not be hard to add this as a new volume rendering option. This is required for photorealistic volume rendering (a.k.a. “cinematic rendering”), too: there the RGB array is filled in quickly and approximately using AI-based segmentation, making vessels appear red, bone off-white, soft tissues brown, etc.

It would be nice to use this in the segmentation displayable manager, too. For this we should have the option of generating an opacity volume from the segmentation’s binary labelmap and computing smooth gradients. Smoothing the binary labelmap would be easy, but slow, and it would double the memory usage. It would be better to improve the GPU raycaster’s gradient computation to use a small local neighborhood. Eventually, binary labelmap volume rendering as RGB should be a built-in feature of the GPU volume raycast mapper.
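
A minimal CPU-side sketch of that smoothing approach, assuming the labelmap is available as a numpy array (the function name and parameters are illustrative, not part of any existing module):

```python
import numpy as np
from scipy import ndimage

def opacity_from_binary_labelmap(labelmap, sigma_voxels=1.5, max_opacity=0.8):
    """Turn a 0/1 labelmap into a smooth float opacity volume."""
    binary = (labelmap > 0).astype(np.float32)
    # Gaussian smoothing spreads the 0 -> 1 jump over a few voxels, which is
    # what gives the raycaster's gradient estimate usable, smooth normals.
    smoothed = ndimage.gaussian_filter(binary, sigma=sigma_voxels)
    return smoothed * max_opacity
```

This also illustrates the memory cost mentioned above: a second, floating-point volume has to be kept alongside the labelmap.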

Do you use the GPU volume mapper’s labelmap masking features (you specify a lookup table for the input labelmap mask, and a mask blend factor controls how much coloring is applied from the mask), or do you generate an RGBA volume from scratch?
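
For reference, the masking path looks roughly like this in VTK Python; the per-label transfer function API is only available in recent VTK versions, and the two image-data inputs are placeholders:

```python
import vtk

def setup_masked_volume(ctImageData, labelmapImageData):
    """ctImageData: grayscale vtkImageData; labelmapImageData: unsigned char labels."""
    mapper = vtk.vtkGPUVolumeRayCastMapper()
    mapper.SetInputData(ctImageData)
    mapper.SetMaskInput(labelmapImageData)   # mask must be unsigned char
    mapper.SetMaskTypeToLabelMapMaskType()
    mapper.SetMaskBlendFactor(0.7)           # how strongly mask coloring is applied

    volumeProperty = vtk.vtkVolumeProperty()
    # Recent VTK allows attaching a color transfer function per label value:
    vesselColor = vtk.vtkColorTransferFunction()
    vesselColor.AddRGBPoint(0, 0.8, 0.1, 0.1)
    vesselColor.AddRGBPoint(500, 1.0, 0.4, 0.4)
    volumeProperty.SetLabelColor(1, vesselColor)  # segments with label 1 -> reddish

    volume = vtk.vtkVolume()
    volume.SetMapper(mapper)
    volume.SetProperty(volumeProperty)
    return volume
```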

This one was done by generating the RGBA volume and then rendering with the GPU ray cast mapper (once the shading was fixed).
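
For anyone following along, a minimal sketch of that RGBA path in VTK Python, assuming a pre-built 4-component (R, G, B, A) vtkImageData:

```python
import vtk

def setup_rgba_volume(rgbaImageData):
    """rgbaImageData: 4-component unsigned char vtkImageData."""
    mapper = vtk.vtkGPUVolumeRayCastMapper()
    mapper.SetInputData(rgbaImageData)

    volumeProperty = vtk.vtkVolumeProperty()
    # With independent components off, components 0-2 are used directly as
    # color and component 3 drives opacity through the scalar opacity TF.
    volumeProperty.IndependentComponentsOff()
    opacity = vtk.vtkPiecewiseFunction()
    opacity.AddPoint(0, 0.0)
    opacity.AddPoint(255, 1.0)
    volumeProperty.SetScalarOpacity(opacity)
    volumeProperty.ShadeOn()  # shading, which needed the shader fix discussed above

    volume = vtk.vtkVolume()
    volume.SetMapper(mapper)
    volume.SetProperty(volumeProperty)
    return volume
```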

Yes, I think the segmentation rendering should all be done in the GPU code.

It would be nice if you could check whether setting the segmentation as a labelmap mask for the GPU mapper provides the same results. The advantage would be that we would not need to generate an RGBA volume (faster updates, less memory usage).

This is great :pray:

Cc: @Sankhesh_Jhaveri

Yes, exactly.

This image uses the CT as alpha, so you get nice detail with the coloring, but we should also offer the option where the segmentation opacity controls the alpha channel. We’ll need to do the local smoothing / surface fitting in the GPU using the segmentation and the color from the lookup table. This should be effectively the same as building a surface model, but faster and with some extra options.

I agree we should think about how we should handle volume rendering from a UI perspective.

I would say this is almost a must for this to be a meaningful replacement of the existing segmentation rendering, and for creating complex visualizations from it.

I’ve tried to colorize volume rendering using TotalSegmentator’s segments (see implementation), but the result looks quite bad:

The main issue is that I had to boost the opacity inside segments, but then the boundary becomes very sharp/blocky. Applying smoothing removes the blockiness, but it also removes intricate details of the surface. Even if I don’t boost opacity (just show bones and contrasted vessels), the surface is still blocky, because the RGB values change abruptly at the segment boundary.
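
For context, a hypothetical numpy sketch of this construction (`ct`, `labels`, and `label_colors` are assumed inputs, not code from the linked implementation):

```python
import numpy as np

def build_rgba(ct, labels, label_colors, boost=80):
    """ct: CT array; labels: int labelmap; label_colors: {label: (r, g, b)}."""
    rng = float(ct.max() - ct.min()) or 1.0
    alpha = (ct - ct.min()) / rng * 255          # crude CT -> alpha ramp
    rgba = np.zeros(ct.shape + (4,), dtype=np.uint8)
    rgba[..., 3] = alpha.astype(np.uint8)
    for label, (r, g, b) in label_colors.items():
        inside = labels == label
        rgba[inside, 0], rgba[inside, 1], rgba[inside, 2] = r, g, b
        # Boosting alpha makes low-intensity segments visible, but the hard
        # step at the segment boundary is exactly what looks sharp/blocky.
        rgba[inside, 3] = np.clip(alpha[inside] + boost, 0, 255).astype(np.uint8)
    return rgba
```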

@pieper What was the VTK shader bug? Did the fix make it into VTK? In your mouse skull rendering above it seems that you managed to just change the RGB color but kept the original surface details (opacity and surface gradient) - did you have to apply any thresholding, masking, dilation?

The fix is in this post. No, I don’t think it was ever merged.

The code has been rearranged, and it looks like the bug is still there, but I didn’t get back to it.

Yes, sort of. What I did was to use the CT volume as the alpha channel directly, and that’s why the bone detail is so crisp. Then for the RGB I did dilate the segmentation with a Voronoi-like distance function so that each voxel is the color of the nearest segment. This way the ray samples around the segments get the correct color even when they make small opacity contributions and anything below the threshold is ignored.
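
That Voronoi-like dilation can be expressed compactly with scipy’s Euclidean distance transform; a sketch, assuming `labels` is an integer labelmap array:

```python
import numpy as np
from scipy import ndimage

def dilate_labels_voronoi(labels):
    """Assign every voxel the label of its nearest non-zero voxel."""
    # With return_indices=True, the EDT of the background also returns the
    # coordinates of the closest labeled voxel for each background voxel.
    _, nearest = ndimage.distance_transform_edt(labels == 0, return_indices=True)
    return labels[tuple(nearest)]
```

The RGB for each voxel can then be looked up from the dilated labelmap, while the alpha still comes from the CT intensities.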

I did some of this by hand and didn’t finish the code but there’s some work in progress here.

I didn’t do this but I thought about also changing the transfer function based on the nearest segment, so that, for example, even if a segment is in an area of low signal intensity it could still be made more opaque. That is, allowing something closer to an isosurface rendering while retaining the image details.

It would definitely be fun to finish this up and make it available.

Thank you, this is very useful. Does component 0 correspond to the alpha channel in the shader code? (In CPU memory the component order is RGBA.)

With the shader code change the results look different, a bit better, but I had to dilate the segments, and there is still a problem if I don’t boost the opacity of segments that have low intensity:

I’d have to go back and look - I remember the logic was odd.

Yes, that’s what I meant by customizing the transfer function or alpha value based on the image values of the underlying volume.

Is your gist code up to date?

I’ve updated the gist and uploaded the test scene here. I experimented with applying the Margin effect - expanding all segments by 3 mm (not quite Voronoi, but it should be OK for a quick test) - but it did not make a big difference. With or without margin growing, there are still those dark artifacts and blurriness in the rendering.

I still think this is better than both the surface rendering of the segmentation (lower left) and its labelmap (lower right). But the background and intestines are too close, and removing the background seems to remove the intestines as well.

For me the real benefit of RGBA is to replace the slow 3D-model-based rendering of segmentations. So in that specific case, it would perhaps be possible to mask out the background values (i.e., values not assigned to a segment)?
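
In an RGBA representation like the one sketched earlier, that masking step could be as simple as zeroing the alpha channel outside all segments (again assuming the hypothetical `rgba` and `labels` arrays):

```python
# Zero the alpha channel wherever no segment is assigned (label 0).
rgba[..., 3][labels == 0] = 0
```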

Here it is with the background masked out (same ordering):

I think what we are missing for good-quality volume rendering is smooth surface normal vectors. Surface normals may come from the original image, but then we only get surface normals in regions that have much higher intensity than their surroundings, which are typically only bones and contrast-material-filled regions. There can be segment boundaries that do not have any corresponding change in input image intensity, so in these cases, no matter how we try, we cannot get surface normals from the original image.

Therefore, at least at some boundaries, we would need to estimate the surface normal from the segmentation. If the direction estimation is done directly on the binary image, then we get the blocky appearance visible in the lower-right image. Masking causes a sudden jump in intensity, similar to a binary image, which causes the blockiness in the top image, too. If we apply smoothing to the binary segment or masked image (so that the intensity jump is spread over a few voxels), then the normals are very nice and smooth. Unfortunately, smoothing is computationally expensive, removes some relevant details, and it is not clear how it can be performed on a labelmap that contains many labels.

Maybe the solution would be to use some higher-order method for gradient estimation (such as this) when the input is a labelmap volume. Currently, a very simple central-differences-based gradient estimation is used in the volume rendering shader, which only gives acceptable results if the voxel intensity is continuous (it produces a blocky appearance if the voxel values are discrete labels).
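
As a CPU illustration of the difference, here is a one-voxel central-difference gradient next to a derivative-of-Gaussian gradient, one possible higher-order estimator (a shader implementation would need a GLSL equivalent):

```python
import numpy as np
from scipy import ndimage

def central_difference_normals(volume):
    """Roughly what the shader does today: one-voxel central differences."""
    return np.stack(np.gradient(volume.astype(np.float32)), axis=-1)

def derivative_of_gaussian_normals(volume, sigma=1.5):
    """Gradient averaged over a Gaussian neighborhood: on a 0/1 labelmap this
    yields smoothly varying normals instead of axis-aligned steps."""
    vol = volume.astype(np.float32)
    grads = [ndimage.gaussian_filter(vol, sigma,
                                     order=[1 if a == axis else 0 for a in range(3)])
             for axis in range(3)]
    return np.stack(grads, axis=-1)
```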

My take on this :wink:

This kind of hybrid display may help speed up the review of individual cases.

The Shift slider in Volume Rendering is too sensitive, and it would be great if it were a ctkRangeWidget to enable entering range numbers …

Hi,
you can adjust the sensitivity in the Advanced tab, in the section with the transfer function. P.S. The shift refers to shifting the transfer function.
Best
Ron

Thanks @rkikinis,

Still quite difficult to hit the sweet spot, unless I’m overlooking something …

First set the slider below the scalar opacity mapping, then use the Shift slider.
