2021.01.05 Hangout

Tomorrow, we will be having our next weekly hangout from 10:00 AM ET until 11:00 AM ET.

Anyone is welcome to join to ask questions at https://bit.ly/slicer-googlemeet-hosted-by-kitware


Feel free to post to this thread to request/suggest a topic!

Sam and J-Christophe

I’d like to touch base on volume rendering of segmentations too if we have time (rendering vector volumes with independent components off).

@Sankhesh_Jhaveri will join us at 10:30 AM to participate in this discussion

I would like to ask a question regarding the code style for Python modules. The samples and the source code differ from the style guide in the wiki. Is there a linter config for developing Python modules for Slicer? If not, should there be one?

That’s great! Let me give a bit of background in case anyone wants to look in advance. The basic idea is to render segmentations using a technique like the one described here but with the reference volume as the alpha channel.
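A minimal numpy sketch of that idea (the function name and color table are hypothetical, for illustration only): per-segment colors fill the RGB channels, while the reference volume supplies the alpha channel.

```python
import numpy as np

def make_rgba_volume(labelmap, ct, color_table):
    """Pack per-segment colors (RGB) and CT-derived opacity (A) into one volume.

    labelmap: int array of segment ids (0 = background)
    ct: float array, same shape, normalized to [0, 1]
    color_table: dict mapping segment id -> (r, g, b), each in 0..255
    """
    rgba = np.zeros(labelmap.shape + (4,), dtype=np.uint8)
    for segment_id, (r, g, b) in color_table.items():
        rgba[labelmap == segment_id] = (r, g, b, 0)
    # The reference volume, not the segmentation, drives opacity
    rgba[..., 3] = (np.clip(ct, 0.0, 1.0) * 255).astype(np.uint8)
    return rgba

# Toy data: two "bones" in a constant-intensity CT
labels = np.zeros((4, 4, 4), dtype=np.uint8)
labels[:2] = 1
labels[2:] = 2
ct = np.full((4, 4, 4), 0.5)
rgba = make_rgba_volume(labels, ct, {1: (255, 0, 0), 2: (0, 0, 255)})
```

This RGBA volume is then rendered with independent components off, as described above.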

It works pretty well, and the SlicerMorph team really wants this mode, so one thing I want to do is speed up the calculation of these RGBA volumes. I’m sure it can be made to work in real time for feedback while segmenting, with per-segment visibility controls, etc.

The problem I’m hitting is that the ray caster seems to use the alpha channel for opacity integration, but for some reason uses the red channel to calculate the gradient for lighting, leading to pixelation artifacts and incorrect normals (e.g. no normals in blue areas).

I’ve started working on a pure VTK script to reproduce the issue, but wanted to discuss it with this group before filing a bug report. Maybe I’m using it wrong, or it’s already fixed in VTK master or someone’s branch.

The bottom ellipsoid is a single-component volume, and the top is RGBA with the same alpha as the single-component one, plus red, green, and blue slabs.

I suspect this issue is on this line, but changing the 0 to a 3 does not work as I expected (the color becomes black everywhere).

I’ve tried to use this for segmentations, but the problem is that you need to blur the alpha channel if you want to see surfaces (otherwise you would not see the surface normal directions but the flat voxel side directions). However, if you blur the alpha channel, then at the boundary you get voxels that have non-zero opacity but background color. This would add something like a dark haze around the segments. You could remove this by expanding the labelmap values a bit, but then you would need to maintain a separate labelmap (extra memory, plus an extra expand step each time the input labelmap changes). This could be solved by using multi-volume rendering, but that only works if you have a handful of segments, and definitely not if you have hundreds of segments.
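To make the haze concrete, here is a small numpy/scipy sketch (the sigma and threshold are arbitrary choices for illustration): after Gaussian-blurring a binary alpha channel, some background voxels end up with non-zero opacity but no segment color, which is exactly the dark halo.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# One-segment labelmap: a cube of label 1 in a background of 0
labels = np.zeros((16, 16, 16), dtype=np.uint8)
labels[4:12, 4:12, 4:12] = 1

# Blur the binary alpha so surface normals are smooth instead of voxel-flat
alpha = gaussian_filter(labels.astype(float), sigma=1.5)

# "Haze" voxels: opacity leaked outside the segment, but the color there
# is still the background color -> a dark halo around the segment
haze = (alpha > 0.01) & (labels == 0)
print(haze.sum())  # number of background voxels that would render dark
```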

The ideal solution could be something like a custom volume rendering shader, which would compute the smoothed normal from a labelmap volume (for example by Gaussian smoothing; so that you would not need to update the smoothed alpha channel after each segment editing operation) and would find out the voxel color from the nearest (or median) label value.
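A CPU sketch of that shader idea in numpy/scipy (illustrative only; a real implementation would do this per-sample inside the ray-cast shader): the normal is the negated, normalized gradient of the Gaussian-smoothed segment mask, so no precomputed smoothed alpha channel needs updating after each edit.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_normals(labels, segment_id, sigma=1.5):
    """Normal field for one segment: gradient of the Gaussian-smoothed mask."""
    mask = (labels == segment_id).astype(float)
    smooth = gaussian_filter(mask, sigma=sigma)
    gz, gy, gx = np.gradient(smooth)          # array is indexed [z, y, x]
    n = np.stack([gx, gy, gz], axis=-1)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return -n / np.maximum(norm, 1e-9)        # point outward from the segment

labels = np.zeros((16, 16, 16), dtype=np.uint8)
labels[4:12, 4:12, 4:12] = 1
normals = smoothed_normals(labels, 1)
```

The voxel color would come separately from the nearest (or median) label value, as suggested above.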

I agree, a custom shader could solve the more general problem of rendering segmentations, and that’s not out of the question if there were a motivating need. I do have a prototype of such a shader in WebGL. There are many other nice features that could be added, like handling nonlinear transforms in the ray caster. But VTK’s GPU code is pretty complex, so solving the general problem is nontrivial.

But for the SlicerMorph use case it’s almost always bones in microCT, so the alpha channel can come from the CT rather than from the segments. The segments only provide color. An example use case would be independently coloring each of the bones in a fish, or regions of the skull as in the mouse example. So in the end it’s just scalar opacity / gradient opacity rendering, but colored by the segments.

And yes, in the mouse example I grew the margin around the segments so that the non-zero alpha around the segments blends with the correct color to avoid the halo. Calculating that margin operation shouldn’t be a problem. Updating such an RGBA volume should be way faster than building a surface model, and would take basically constant time even as the complexity of the segmentation grows. Multi-volume rendering is not needed for this.
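A sketch of that margin-growing step with scipy (my assumption about how it could be implemented, not the actual code used for the mouse example): `distance_transform_edt` can return, for every background voxel, the index of the nearest labeled voxel, which gives the nearest-label color directly.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def grow_labels(labels, margin_voxels=2):
    """Extend each segment outward so blurred alpha blends with the right color."""
    background = labels == 0
    # For each background voxel: distance to, and index of, the nearest labeled voxel
    dist, (iz, iy, ix) = distance_transform_edt(background, return_indices=True)
    grown = labels[iz, iy, ix]          # nearest non-background label everywhere
    grown[dist > margin_voxels] = 0     # keep the growth to a small margin
    return grown

labels = np.zeros((8, 8, 8), dtype=np.uint8)
labels[2:4, 2:4, 2:4] = 1
labels[5:7, 5:7, 5:7] = 2
grown = grow_labels(labels)
```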

The dependent-components volume rendering in VTK is already very close; the only obstacle is the bug I described above, which prevents nice rendering with shading enabled.

For the curious, @jcfr mentioned the possibility of adding a local Git repo as a remote, so you can have two working copies and push/pull between them. There is the concept of Git worktrees (https://git-scm.com/docs/git-worktree), which pivots around a similar idea (https://levelup.gitconnected.com/git-worktrees-the-best-git-feature-youve-never-heard-of-9cd21df67baf). Cheers.
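For reference, a minimal worktree session (throwaway repo and hypothetical branch names, just to show the workflow) looks like this:

```shell
set -e
# Throwaway repo just to demonstrate the commands
repo=$(mktemp -d)/demo
git init -q "$repo" && cd "$repo"
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm "initial"

# Check out a second branch in a sibling directory, sharing one object store
git worktree add ../demo-feature -b feature

git worktree list      # lists both working trees, each on its own branch
```

Unlike adding a local clone as a remote, worktrees share a single object database, so there is nothing to push or pull between the two checkouts.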