I’m working on creating some form of “heatmap” on an existing volume. At the moment, I create 3D Gaussian kernels in some value range (that I can target easily with the scalar color mapping) at specific coordinates I have beforehand. I do this by extracting that part of the volume through numpy, setting the values to my kernel (where my kernel is not 0), and then calling slicer.util.updateVolumeFromArray(nodeVolume, volumeNumpyArray) to update my volume node.
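For context, the core of what I do looks roughly like this (a minimal sketch; the node name, coordinates, kernel radius, sigma, and peak value are all placeholder examples):

```python
import numpy as np
import slicer

volumeNode = slicer.util.getNode("MRHead")      # placeholder volume node
centersIJK = [(60, 120, 90), (80, 100, 110)]    # placeholder coordinates (IJK)

arr = slicer.util.arrayFromVolume(volumeNode)   # numpy array indexed as [k, j, i]

# Build a 3D Gaussian kernel (radius, sigma, and peak are arbitrary example values)
r, sigma, peak = 8, 3.0, 1000.0
z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
kernel = peak * np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))

# Stamp the kernel into the volume at each coordinate, only where it is non-zero
# (assumes the kernel fits fully inside the volume extents)
for i, j, k in centersIJK:
    region = arr[k - r:k + r + 1, j - r:j + r + 1, i - r:i + r + 1]
    mask = kernel > 0.01 * peak                 # treat the near-zero tails as zero
    region[mask] = kernel[mask]

slicer.util.updateVolumeFromArray(volumeNode, arr)
```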
The replacement works just fine; however, the visual results are not really what I expect.
Below you can see a heatmap of the 3D kernels that I use, which look as I expect. (I am using the “hot” color mapping from matplotlib here)
with a color mapping that looks like this (the entire color map has opacity set to 1):
The insides of the spheres are properly colored (when I zoom in I can see the proper distribution inside the spheres); however, I expect the outside here to be fully green.
(This matters as, in the end, I don’t want actual spheres; they are only there to generate a distribution for the color values. My actual goal is to get some form of heatmap overlay over an existing surface, while still trying to keep most of the physical detail from that surface.)
(When changing the interpolation to nearest neighbour I get more or less what I want; however, the quality is quite bad.)
Is there any way to make this behave normally?
Is this already normal behaviour, or am I misunderstanding what to expect from volume rendering?
Is there any way to get better interpolation (without implementing it myself)?
Should I take another approach to get my desired heatmap result?
Any insight would be appreciated. Thank you :]
As an addition: this seems to happen whenever any very distinctive color is part of the color mapping. With the full volume, even on the outer surface, I get something like this:
This behavior is expected whenever neighboring voxels have vastly different colors and opacity is close to 100%. This is because there is no longer an option in the VTK volume renderer to choose between “interpolate first” and “classify first” during ray traversal. It is always “interpolate first” (compute the interpolated voxel value and then look up the corresponding color and opacity in the transfer functions).
This means that if you have a voxel value of 0 = green next to a voxel value of 1000 = red, then you will get sample values like 0, 0, 0, 312, 544, 753, 1000, 1000, 1000 along the ray. Since the opacity at 1000 corresponds to 100%, the first few non-transparent samples determine the color, so the result will be greenish-red.
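To illustrate with a quick numpy sketch of front-to-back compositing over those sample values (assuming a linear green-to-red color ramp and a linear opacity ramp):

```python
import numpy as np

def color(v):     # 0 -> green, 1000 -> red, linearly interpolated
    t = min(max(v / 1000.0, 0.0), 1.0)
    return np.array([t, 1.0 - t, 0.0])

def opacity(v):   # 0 -> fully transparent, 1000 -> fully opaque
    return min(max(v / 1000.0, 0.0), 1.0)

samples = [0, 0, 0, 312, 544, 753, 1000, 1000, 1000]

# Standard front-to-back alpha compositing along the ray
rgb, alpha = np.zeros(3), 0.0
for v in samples:
    a = opacity(v)
    rgb += (1.0 - alpha) * a * color(v)
    alpha += (1.0 - alpha) * a

print(rgb)  # roughly (0.56, 0.44, 0.0): heavily green-contaminated red
```

The intermediate samples (312, 544, 753) are not fully transparent, so their greenish colors are composited in before the ray ever reaches a pure-red sample.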
If you want to use 100% opacity (= isosurface rendering), then one solution is to run a contour filter and render the result as a surface, as sketched below. You can also enable “Surface smoothing” in the volume rendering options to randomize the ray sampling point positions, which avoids the woodgrain pattern and gives a more even distribution. There are many other solutions. If you tell us a bit more about your use case then we can give more specific advice.
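For illustration, the contour-filter route could look roughly like this (the node name and the iso-value are placeholder assumptions):

```python
import vtk
import slicer

volumeNode = slicer.util.getNode("HeatmapVolume")   # placeholder node name

# Extract an isosurface at an assumed threshold value
contour = vtk.vtkFlyingEdges3D()
contour.SetInputData(volumeNode.GetImageData())
contour.SetValue(0, 500.0)
contour.Update()

# The contour runs in voxel (IJK) space; transform the surface to physical (RAS) space
ijkToRas = vtk.vtkMatrix4x4()
volumeNode.GetIJKToRASMatrix(ijkToRas)
transform = vtk.vtkTransform()
transform.SetMatrix(ijkToRas)
transformFilter = vtk.vtkTransformPolyDataFilter()
transformFilter.SetInputData(contour.GetOutput())
transformFilter.SetTransform(transform)
transformFilter.Update()

# Show the result as a model node
modelNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "HeatmapSurface")
modelNode.SetAndObservePolyData(transformFilter.GetOutput())
modelNode.CreateDefaultDisplayNodes()
```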
My end goal is to generate a heatmap based on specific coordinates on a brain segmentation: something that defines a gradient color from the center coordinate and dissipates toward the edges, preferably in a different color.
I already took a look at previous posts suggesting the use of segmentations; however, that doesn’t offer the preservation of detail that I want here (trying to still have the brain ridges be recognisable through the heatmap).
The best case scenario would be some functionality that allows me to change the colors going into the ray equation (to give the existing color just a tint of the calculated values from my heatmap), but I think that’s likely not possible.
To give a more specific example, I am trying to achieve a more advanced version of this:
You can render as a colored surface: export the segmentation to a model and assign colors to model points from the heatmap volume using “Probe volume with model” module.
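A rough sketch of that workflow (the node names are placeholders, and the CLI parameter names should be double-checked against the module’s documentation):

```python
import slicer

heatmapVolume = slicer.util.getNode("HeatmapVolume")   # placeholder names
brainModel = slicer.util.getNode("BrainModel")         # segmentation exported to a model
outputModel = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "ColoredBrain")

# Sample the heatmap volume at every model point and store it as point scalars
params = {
    "InputVolume": heatmapVolume.GetID(),
    "InputModel": brainModel.GetID(),
    "OutputModel": outputModel.GetID(),
}
slicer.cli.runSync(slicer.modules.probevolumewithmodel, None, params)

# Display the probed scalars on the surface
outputModel.CreateDefaultDisplayNodes()
outputModel.GetDisplayNode().SetScalarVisibility(True)
```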
If you don’t have a good-quality surface (so you would rather display it using volume rendering) then you can follow the method that is used in “Colorize volume” module: keep using your scalar volume as alpha channel and use your heatmap as color channels. Then display this RGBA volume using volume rendering.
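The module handles the details properly, but the core idea is roughly this (a sketch assuming both volumes share the same geometry; the green-to-red ramp and node names are arbitrary examples):

```python
import numpy as np
import vtk
import slicer

anatomyNode = slicer.util.getNode("MRHead")          # placeholder node names
heatNode = slicer.util.getNode("HeatmapVolume")
anatomy = slicer.util.arrayFromVolume(anatomyNode).astype(float)
heat = slicer.util.arrayFromVolume(heatNode).astype(float)

# Heatmap drives the color channels, anatomy drives the alpha channel
t = np.clip(heat / heat.max(), 0.0, 1.0)
rgba = np.zeros(anatomy.shape + (4,), dtype=np.uint8)
rgba[..., 0] = (255 * t).astype(np.uint8)            # red where the heatmap is hot
rgba[..., 1] = (255 * (1.0 - t)).astype(np.uint8)    # green elsewhere
rgba[..., 3] = (255 * np.clip(anatomy / anatomy.max(), 0.0, 1.0)).astype(np.uint8)

ijkToRas = vtk.vtkMatrix4x4()
anatomyNode.GetIJKToRASMatrix(ijkToRas)
rgbaNode = slicer.util.addVolumeFromArray(
    rgba, ijkToRAS=ijkToRas, name="RGBAHeatmap",
    nodeClassName="vtkMRMLVectorVolumeNode")

# Render the components directly as RGBA instead of as independent scalars
vrLogic = slicer.modules.volumerendering.logic()
vrDisplay = vrLogic.CreateDefaultVolumeRenderingNodes(rgbaNode)
vrDisplay.GetVolumePropertyNode().GetVolumeProperty().IndependentComponentsOff()
vrDisplay.SetVisibility(True)
```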
That is indeed part of my problem here: the surfaces that I am working with could definitely be a lot better. I don’t mind the blocky results I get for my spheres, as I expect them to look like that on their own. However, in the end I plan to crop out the excess that is not inside my main volume.
I will clean up my scripts and send them here once ready.
I didn’t know about the Sandbox extension yet, and the description of the Colorize volume module sounds great. In my head that’s what I was trying to achieve (set the volume as alpha and then use the heatmap for the color channels), but I will check how successful I am with this module and get back to you. Thank you very much for the suggestion.
This is currently what I am running in my .slicerrc.py for quick iteration.
It doesn’t have the masking part that I plan to add for the final version of what I am doing; however, it is enough to see the rendering issues.
For the main volume mentioned in the script, it’s fine to use the MRHead sample MRI as the main volume. The points in the markup file should be somewhere around the brain’s surface (around the skin is also fine to see the same issues).
At the moment I also edited the scalar color map once manually, saved it, and then loaded it through the script every other time (however, I will have it come from an outside color map in the future).
To make it easier to test and discuss this code, please make it fully self-contained, in the sense that it downloads sample data and hard-codes any markups or other data, so it can easily be copy-pasted to reproduce the issue.
Also, please put it in a GitHub repo or gist so iterations and comments can be tracked.
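For example, the start of the script could look like this (the point coordinates below are just placeholders):

```python
import SampleData
import slicer

# Download the MRHead sample volume so the script needs no local data
volumeNode = SampleData.downloadSample("MRHead")

# Hard-coded example points near the head surface (RAS coordinates, placeholder values)
pointsNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsFiducialNode", "HeatmapCenters")
for ras in [[0.0, 90.0, 0.0], [50.0, 40.0, 30.0]]:
    pointsNode.AddControlPoint(ras)
```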