We can use any imaging modality, as the background opacity is automatically reduced (you can choose by how much). It would be nice if someone could post renderings of some segmented MRIs.
I agree, this could be a nice improvement on the state-of-the-art surface rendering that uses fake textures (designers download some textures and normal/bump maps and apply them to the models to make them look more “realistic”).
Volume rendering is still better in that you can see through thick semi-transparent objects, but I admit that it is usually not necessary to see very deep inside or through objects.
Somewhat related work is that we can now use the depth buffer to improve lighting in volume rendering - see Screen-space ambient occlusion for volume rendering - #26 by lassoan. This depth information could be captured in a bump map.
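To illustrate the idea of capturing depth information in a bump map: once you have the depth buffer as a 2D array, screen-space depth gradients can be turned into a tangent-space normal map. The sketch below is a hypothetical, minimal NumPy implementation (the `depth_to_normal_map` helper is my own naming, not part of Slicer or VTK, and the actual depth capture from the render window is not shown):

```python
import numpy as np

def depth_to_normal_map(depth, strength=1.0):
    """Convert a depth buffer (H x W, values in [0, 1]) into a
    tangent-space normal map suitable for bump/normal mapping.
    Hypothetical helper for illustration only."""
    # Screen-space depth gradients via central differences.
    dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
    # Surface normal from gradients: (-dz/dx, -dz/dy, 1), then normalize.
    nx = -dz_dx * strength
    ny = -dz_dy * strength
    nz = np.ones_like(depth, dtype=np.float64)
    norm = np.sqrt(nx * nx + ny * ny + nz * nz)
    normals = np.stack([nx / norm, ny / norm, nz / norm], axis=-1)
    # Pack [-1, 1] components into the usual 8-bit RGB normal-map encoding,
    # where a flat surface maps to roughly (128, 128, 255).
    return ((normals * 0.5 + 0.5) * 255).astype(np.uint8)

# A constant depth buffer should produce a uniform "straight up" normal map.
flat = depth_to_normal_map(np.full((4, 4), 0.5))
```

In Slicer/VTK the depth buffer itself could be grabbed from the 3D view's render window (for example via `vtkWindowToImageFilter` with its Z-buffer input mode) and then fed to a function like this.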
Interesting. Are point clouds or Gaussian splatting faster or simpler to do on current GPU hardware than raycasting?
Click Default for lighting, disable ambient shadows, and click None for image-based lighting. That said, there might be some things that we don't reset exactly - that's why the Lights module is still in the Sandbox. We'll move features into the Slicer core as we understand which features are useful enough and what the best way to use them is.
If you mean choosing a different volume for output than your input volume, then that is not a workaround; that is how the module is intended to be used. If you use different input and output volumes and still see some drift, then take a screen-capture video (or take a screenshot and describe every click you make).
Your coronary rendering looks very nice!