In medical imaging, photorealistic volume rendering is advertised as “cinematic rendering”. Results of using it for diagnostics are mixed (complex lighting may improve depth perception and even the visibility of some structures in certain circumstances, but it may also make some details harder to see). However, more realistic rendering would be very welcome in medical education and in communication between patients and physicians. We (and I guess many other users) would be very interested in trying these techniques.
The same interface as for regular volume rendering (presets, transfer function editor, quality/speed controls, etc.), a shadows enable/disable toggle, and light editing (probably lightkit presets, maybe with the key light position and fill light intensity tunable on the GUI). We would also like to use multi-volume rendering.
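To make the light editing part more concrete: the controls we have in mind map fairly directly onto VTK's vtkLightKit parameters. Below is a minimal sketch of what could be exposed on the GUI; it uses a bare vtkRenderer rather than any particular Slicer or ParaView view, just to show the relevant knobs:

```python
import vtk

renderer = vtk.vtkRenderer()

# Three-point lighting: a key light plus fill/head/back lights derived from it.
lightKit = vtk.vtkLightKit()

# Key light direction (elevation, azimuth in degrees) and intensity.
lightKit.SetKeyLightAngle(50.0, 10.0)
lightKit.SetKeyLightIntensity(0.75)

# Fill light strength is expressed relative to the key light:
# a larger key-to-fill ratio means dimmer fill lights (stronger contrast).
lightKit.SetKeyToFillRatio(3.0)

lightKit.AddLightsToRenderer(renderer)
```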
For medical image visualization (human and animal), Slicer is probably the best platform for many reasons. The infrastructure is already in place, and only a little work is needed to expose new volume rendering features on the GUI. The main reason we have not considered adding OSPray support to Slicer yet is the long rendering time needed for good-quality images (see details below), but if there is a chance for fast options then we should work together to make them available for users.
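To illustrate how little glue code might be needed on the Slicer side: assuming a Slicer build whose VTK includes the OSPray ray tracing module, switching a 3D view over to OSPray from the Python console could look roughly like this untested sketch:

```python
import vtk
import slicer

# Renderer of the first 3D view.
renderWindow = slicer.app.layoutManager().threeDWidget(0).threeDView().renderWindow()
renderer = renderWindow.GetRenderers().GetFirstRenderer()

# Route rendering through the OSPray render pass
# (requires a VTK build with the OSPray/ray tracing module enabled).
osprayPass = vtk.vtkOSPRayPass()
renderer.SetPass(osprayPass)

# Quality/speed knobs that would be natural to expose on the GUI.
vtk.vtkOSPRayRendererNode.SetSamplesPerPixel(4, renderer)
vtk.vtkOSPRayRendererNode.SetAmbientSamples(2, renderer)

renderWindow.Render()
```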
Current experience with OSPray:
With default settings in ParaView, speed is borderline tolerable but there are significant rendering artifacts compared to baseline GPU raycasting.
GPU raycasting (baseline, non-photorealistic):
OSPray (default settings: samples per pixel = 1):
Increasing the samples-per-pixel value removes many artifacts, but surfaces still look quite noisy in high-gradient areas (maybe the sampling distance is not small enough? I did not find how to set that in ParaView), and rendering is no longer interactive (a single render may take 5-20 seconds).
OSPray (samples per pixel increased to 15):
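In case it is useful for reproducing these tests: the samples-per-pixel setting can also be changed from pvpython, roughly as in the sketch below. The property names are from recent ParaView versions and have changed over time, so treat this as an approximate sketch rather than a verified recipe:

```python
from paraview.simple import *

view = GetActiveViewOrCreate('RenderView')

# Switch the render view to OSPray-based ray tracing
# (property names may differ in older ParaView versions).
view.EnableRayTracing = 1
view.BackEnd = 'OSPRay raycaster'

# More samples per pixel reduces noise/artifacts but makes rendering much slower.
view.SamplesPerPixel = 15
view.AmbientSamples = 2

Render()
```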