I finally got my hands on an Oculus Rift system and tried it with Slicer. I have a couple of questions.
I used MRHead with volume rendering, and interaction with it was very fluid. But I can't seem to do anything beyond moving/rotating/scaling or clipping through slices. Is that normal? For example, can I do landmarking, or adjust volume rendering settings, from within the headset?
I enabled 3D rendering of a thresholded segment of MRHead. Interaction was very slow, but more importantly the rendering stuttered/flickered (I'm not sure how to describe it).
My knowledge of VR systems ranges from non-existent to minimal, so guidance would be helpful. I am using the Windows preview from 1/20, and the GPU is an RTX Titan.
Yes, it is not obvious just how much you can achieve in virtual reality. We cannot yet display regular GUI widgets (buttons, sliders, etc.) in the headset, so you need to activate features in the desktop GUI; we will be able to show widgets after we upgrade to VTK9. Until then, you can use the position of any object to set any parameter anywhere with a short Python script: add an observer to the object's parent transform, and in the callback function update the chosen parameter (for example, the volume rendering transfer function). It takes just a few lines of Python code.
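To illustrate the observer idea, here is a minimal plain-Python sketch (no Slicer dependency, so the class and function names are hypothetical stand-ins): a "handle" object notifies a callback whenever it moves, and the callback maps the handle's height to a rendering parameter. In Slicer itself you would instead call `AddObserver` on the handle's parent transform node and update, e.g., the volume property in the callback.

```python
# Sketch of driving a rendering parameter from an object's transform.
# HandleTransform stands in for a MRML transform node; the callback maps
# the handle's z position (0..100 mm) to an opacity threshold (0..255).

class HandleTransform:
    """Stand-in for a transform node: stores a position, notifies observers."""
    def __init__(self):
        self._observers = []
        self.position = (0.0, 0.0, 0.0)

    def add_observer(self, callback):
        self._observers.append(callback)

    def set_position(self, pos):
        self.position = pos
        for cb in self._observers:  # mimics TransformModified notification
            cb(self)

rendering_threshold = [0.0]  # parameter updated by the callback

def on_transform_modified(node):
    # Clamp z into 0..100 mm, then rescale to a 0..255 threshold.
    z = node.position[2]
    rendering_threshold[0] = max(0.0, min(255.0, z / 100.0 * 255.0))

handle = HandleTransform()
handle.add_observer(on_transform_modified)
handle.set_position((10.0, 5.0, 50.0))  # as if grabbing the handle in VR
print(rendering_threshold[0])  # 127.5
```

The same few lines adapt to any parameter: replace the threshold update with a call that modifies a transfer function, a clipping plane, or any other node property.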
When we implemented markups placement in virtual reality, we realized that we needed to add support for multiple active markup points (a desktop user can hover over one markup control point while a virtual reality user hovers over/grabs two other control points). We implemented this feature, but ran out of time before finishing the landmarking. I think a new student will start working with @cpinter tomorrow who will finish this (and maybe the immersive widget display, too).
For some reason, segmentations are very slow to move in virtual reality (we have not debugged this, as we did not need it). To work around it, right-click the segmentation to export it to a model, then hide the segmentation node.