CPU ray casting too pixelated for small voxel data

We will be running our SlicerMorph workshop on cloud resources that don't have a GPU, so I have been testing CPU ray casting. Rendering quality is good for volumes with large voxel spacing, but small voxel data looks very noisy. This reproduces in both the stable and the preview releases. Is there a solution for this?
CPU: [screenshot]

Same with GPU: [screenshot]

Same volume with CPU rendering, spacing increased 10X (0.035 mm to 0.35 mm): [screenshot]

@lassoan @pieper

Unfortunately there are probably some ray casting heuristics that don't work well across scales.

For many environments you can enable GPU rendering even if there's no GPU hardware and the software fallback will still work - I take it this didn't work for you on your virtual system?

For example, GPU ray cast mode works in this docker image even though there is no GPU hardware:

```
docker run --rm -p 8080:8080 stevepieper/slicer-morph:4.11.20200930
```

then connect to localhost:8080.

Not sure if there's a short-term fix otherwise, but if you post the sample data we could determine whether this is a Slicer issue or something reproducible in native VTK.
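
For instance, a minimal native-VTK script along these lines would take Slicer out of the picture (a sketch only; the file name and transfer function are placeholders, since the real sample ships as a zip):

```python
import vtk

# Load the volume (hypothetical file name, stands in for the sample data).
reader = vtk.vtkNrrdReader()
reader.SetFileName("sample_Skyscan_mCT_reconstruction.nrrd")
reader.Update()

# Same CPU ray cast mapper that Slicer's CPU rendering mode uses.
mapper = vtk.vtkFixedPointVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())

# Placeholder scalar opacity ramp.
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(255, 0.2)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetInterpolationTypeToLinear()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```

If the same noise shows up here, it is a VTK issue rather than a Slicer one.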

It works OK up to a limit. For large datasets, GPU software rendering is way too slow; performance is much better on the CPU when you have enough cores.
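
The fixed-point CPU mapper splits the rendering work across threads, so core count matters directly; a minimal sketch of pinning it to the machine's cores:

```python
import multiprocessing
import vtk

# vtkFixedPointVolumeRayCastMapper renders image regions in parallel,
# so throughput scales with the number of threads it is allowed to use.
mapper = vtk.vtkFixedPointVolumeRayCastMapper()
mapper.SetNumberOfThreads(multiprocessing.cpu_count())
```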

It is the sample data we distribute with SlicerMorph: SampleData/sample_Skyscan_mCT_reconstruction.zip at master · SlicerMorph/SampleData · GitHub. It has 0.035 mm isotropic voxels.

The CPU volume renderer in VTK does not take the user matrix into account (which we use to position, orient, and scale the volume). It would be hard to change this, because it is a complex mechanism and the CPU renderer is no longer developed in VTK. However, we can work around this by passing just the scaling to the volume renderer and using the user matrix only for positioning and orienting. I'll send a pull request with this workaround.
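
Conceptually, the workaround amounts to something like the sketch below (illustrative only, not the actual patch; it assumes the user matrix combines rotation, translation, and per-axis scale with no shear):

```python
import math
import vtk

def split_scale_from_user_matrix(user_matrix, image_data, volume):
    # The per-axis scale factors are the column norms of the upper-left
    # 3x3 block (valid when the matrix is rotation * scale, no shear).
    scale = [
        math.sqrt(sum(user_matrix.GetElement(r, c) ** 2 for r in range(3)))
        for c in range(3)
    ]
    # Bake the scale into the voxel spacing, which the CPU mapper honors.
    sx, sy, sz = image_data.GetSpacing()
    image_data.SetSpacing(sx * scale[0], sy * scale[1], sz * scale[2])
    # Leave only rotation + translation in the user matrix.
    rigid = vtk.vtkMatrix4x4()
    rigid.DeepCopy(user_matrix)
    for c in range(3):
        for r in range(3):
            rigid.SetElement(r, c, user_matrix.GetElement(r, c) / scale[c])
    volume.SetUserMatrix(rigid)
```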

There is a difference between the GPU and CPU volume renderers: after scaling, the CPU-rendered volume becomes more transparent (as it should), while the GPU-rendered version does not. Probably the GPU volume renderer's transfer function evaluation code does not take the user transform into account when evaluating the transfer functions, which may be justified if we consider the user transform a way to position/scale the rendered object (in that case, the same workaround should be applied to it as to the CPU renderer).
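
For context, ray casting relates transfer-function opacity to physical path length through the standard opacity-correction relation, which is why a scale change should indeed alter the accumulated opacity; a sketch (values are illustrative):

```python
def corrected_opacity(alpha, sample_distance, reference_distance=1.0):
    # Transfer-function opacity is defined per reference step length; a
    # sample covering a longer physical distance must absorb more:
    # alpha' = 1 - (1 - alpha) ** (d / d_ref)
    return 1.0 - (1.0 - alpha) ** (sample_distance / reference_distance)

# A 10x scale-up stretches each sample step 10x through the material:
print(corrected_opacity(0.1, 10.0))  # ~0.65 instead of 0.1
```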

Note that the GPU volume renderer already has many more features than the CPU volume renderer (for example, custom shaders), and the gap will only widen in the future.

Thanks Andras. I will try it once you have fixed the issue.

Unfortunately, the same cloud server with a GPU is 3-4 times as expensive (and the only thing the GPU would be used for is rendering, which is a small but important part of the course). Yes, normally we always use GPU rendering, hence I was never aware of this issue until now.

I've pushed the fix. It will be in the Slicer Preview Release tomorrow. It would be great if you could test it (I wanted to give it some more testing and then send a pull request, but accidentally pushed it directly to master).
