Questions about Virtual Reality Implementation on Slicer

Hi, I have a few questions about Virtual Reality Implementation on Slicer

  1. I have a volume rendering that does not appear in Virtual Reality even though it appears in my 3D views. Why is this, and how do I fix it?
    Ex: Perspective from Slicer Desktop:
    [screenshot]

Perspective from Slicer Virtual Reality:
[screenshot]

  2. Is there any way to implement segmentation via Virtual Reality (e.g., using the remote to segment)?

  3. I cannot see my Oculus Touch controllers in Slicer, making it difficult to move segmentations around. Is there a setting I can change to make them visible?

  1. I had issues like this in past versions (4.9) when the display node of the volume rendering did not have the VR view included (the VR view node ID was not in ViewNodeIDs). Try creating the scene from scratch in 4.10.1.

  2. We want this a lot too. The main task that needs to be done first is to allow using Qt GUIs in VR in an easy way (see this ticket). We're planning to work on this over the summer. Contributions are welcome! Let us know if you're considering participating in this effort.

  3. The ControllerModelsVisible flag in vtkMRMLVirtualRealityViewNode controls the visibility. You can turn it on/off from the GUI using the "Controllers visible" checkbox in the Virtual Reality module. Again, please use 4.10.1.
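For reference, a minimal sketch of toggling this flag from Slicer's Python console (where the `slicer` module is preloaded); the node lookup assumes a VR view node already exists in the scene:

```python
# Sketch: toggle controller model visibility programmatically.
# Assumes the SlicerVirtualReality extension is installed and the
# virtual reality view has been started at least once.
vrViewNode = slicer.mrmlScene.GetFirstNodeByClass('vtkMRMLVirtualRealityViewNode')
if vrViewNode:
    # Equivalent to the "Controllers visible" checkbox in the module GUI
    vrViewNode.SetControllerModelsVisible(True)
```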

I hope this helps!

Thanks for the prompt reply.
2) I will continue to follow the development of the VR segmentation tool. Sounds very cool and would be very helpful!

In regards to 1) and 3), I am currently working in Slicer 4.10.1, but I still do not see any virtual reality view of the controller models (after checking the "Controllers visible" checkbox) nor the volume rendering, even after restarting the app and recreating the scene. I have also downloaded the latest nightly version of Slicer 4.11.0 and still do not see the volume rendering or controller models. Am I missing a step?

Make sure you have updated to the latest version of the SlicerVirtualReality extension (you can check for updates and then install them in the Extension Manager).

Controller display works fine for me with Windows Mixed Reality headsets and the HTC Vive. I don't have access to an Oculus Rift, but if for any reason you don't see the controllers, you can always show where a controller is by enabling controller transforms and then applying the transforms to any models.

Volume rendering works fine for me with Windows Mixed Reality headsets and the HTC Vive. Can you share an example scene that has a volume rendering that does not show up in virtual reality?

I have attached an example scene in gdocs:
https://drive.google.com/file/d/16FwaOCQseLVeWhtq9ytA9ofcI-g331ej/view?usp=sharing

How do I enable the controller transforms? They are visible in Slicer as seen below:
[screenshot]
However, turning the view on and off does nothing. If I apply the controller transform to the models, I am still unable to see the controllers.

Thanks for the help.

In your scene, the volume rendering is hidden in the virtual reality view. You can enable it in the Volume Rendering module / Inputs section / View:

[screenshot]

As I wrote above, enable controller transforms (they appear in your scene, so you've done this correctly). Then, instead of clicking the eye icon of the transform (that would only visualize the transform within a region of interest you specify), load any model (STL, OBJ, … file) that you would like to use as the controller model and apply the controller transform to it.
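The steps above can be sketched in Python as follows (run in Slicer's console, where `slicer` is preloaded). The transform node name "VirtualReality.RightController" and the model path are assumptions; check the Transforms module for the actual names created in your scene:

```python
# Sketch: attach a loaded model to a VR controller transform so the
# model follows the controller. Node name and file path are examples.
modelNode = slicer.util.loadModel('/path/to/controller.stl')
controllerTransform = slicer.util.getNode('VirtualReality.RightController')
# Parent the model under the controller's transform
modelNode.SetAndObserveTransformNodeID(controllerTransform.GetID())
```

Note that in older Slicer versions `slicer.util.loadModel` returned a (success, node) tuple rather than the node itself, so adjust accordingly.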

By the way, is there a particular reason you chose an Oculus Rift headset? If you need high quality (absolute tracking, additional controllers, high resolution, etc.), then HTC Vive headsets are better; if you need something inexpensive or easily portable (no need for external trackers), then Windows Mixed Reality headsets are better.


Thank you so much for all the help! The reason I'm using the Oculus is that I'm borrowing it temporarily from someone else to explore the virtual reality feature in Slicer.

Hi, sorry, I had another question about the views. Is there a way through Python to automatically set views to include the "virtual reality view"? I've attempted to do it by getting the volume rendering node:

volumeRendering = slicer.mrmlScene.GetNodesByName('GPURayCastVolumeRendering').GetItemAsObject(0)

and setting the view of the volume rendering node like so:

volumeRendering.SetViewNodeIDs(vrViewNodeID)

However, doing so does nothing. I've also noticed the vrViewNodeID is different from a regular view node ID (vtkMRMLVirtualRealityViewNode vs. vtkMRMLViewNode). Is there any way to do this?

This is not expected to work, and you probably got an error as well. Instead, call AddViewNodeID.
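Putting the pieces together, a minimal sketch of the corrected call (run in Slicer's Python console, with the SlicerVirtualReality extension installed; the node name is taken from the post above):

```python
# Sketch: make a volume rendering visible in the virtual reality view.
volumeRendering = slicer.mrmlScene.GetNodesByName('GPURayCastVolumeRendering').GetItemAsObject(0)
vrViewNode = slicer.mrmlScene.GetFirstNodeByClass('vtkMRMLVirtualRealityViewNode')
# AddViewNodeID appends the view to the list of views this display node
# appears in (an empty list means "show in all views").
volumeRendering.AddViewNodeID(vrViewNode.GetID())
```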


Thanks for the help,

Just wanted to write an update: I wrote a small workaround for the "eraser effect" in Virtual Reality (in the meantime, before you integrate the Qt GUIs in VR). Through this code, a sphere segment is created in VR and transformed to the right-hand controller. Then, when I move the sphere around and it intersects with a part of a segmentation that needs to be erased (i.e., the spine), I press a button on the keyboard that erases it (by using "subtract" in the "Logical operators" effect). What I am wondering is why, every time I press this button on the keyboard, the Virtual Reality screen goes black for 1-3 seconds before the altered scene is shown. Is there any way to fix this, or is it inevitable because Slicer is running background processes?

Thanks!

The problem should go away if the segmentation is updated more quickly. For example, you can disable smoothing (checkbox in the "Show 3D" button's drop-down menu) to make 3D view updates orders of magnitude faster.
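The same setting can be changed from Python, which may fit your keyboard-driven workaround better. A sketch, assuming your segmentation node is named "Segmentation" and that "Smoothing factor" is the conversion parameter behind the smoothing checkbox:

```python
# Sketch: disable closed-surface smoothing so 3D updates are faster.
segmentationNode = slicer.util.getNode('Segmentation')  # name is an assumption
segmentation = segmentationNode.GetSegmentation()
segmentation.SetConversionParameter('Smoothing factor', '0.0')
# Recreate the surface representation so the new parameter takes effect
segmentationNode.RemoveClosedSurfaceRepresentation()
segmentationNode.CreateClosedSurfaceRepresentation()
```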


A post was split to a new topic: Segmentation operation takes too long for virtual reality

We are looking to be able to control the ROI in VR with controllers. Has anyone been able to figure out visualizing and controlling the ROI in VR space?

You can move the ROI without any programming by adding a model that the user can grab and setting the same parent transform for the model and the ROI.

If you also need to change the ROI size with the controllers, then one option is to add a few models to the scene (each serves as a handle that can be grabbed and moved using the controllers) and implement a small Python script that adjusts the ROI position/size based on the parent transforms of these handles.
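A sketch of such a script, assuming two grabbable corner handles, each parented to its own transform (all node names here are hypothetical placeholders):

```python
# Sketch: keep an annotation ROI in sync with two grabbable handle models.
import numpy as np
import vtk

roiNode = slicer.util.getNode('R')  # vtkMRMLAnnotationROINode; name is an assumption
corner1Transform = slicer.util.getNode('HandleTransform1')
corner2Transform = slicer.util.getNode('HandleTransform2')

def updateROI(caller=None, event=None):
    # Read the world-space position (translation column) of each handle
    m = vtk.vtkMatrix4x4()
    corner1Transform.GetMatrixTransformToWorld(m)
    p1 = np.array([m.GetElement(i, 3) for i in range(3)])
    corner2Transform.GetMatrixTransformToWorld(m)
    p2 = np.array([m.GetElement(i, 3) for i in range(3)])
    # ROI center is the midpoint; radius is the half-extent per axis
    roiNode.SetXYZ(*((p1 + p2) / 2.0))
    roiNode.SetRadiusXYZ(*(np.abs(p1 - p2) / 2.0))

# Update the ROI whenever either handle is moved by a controller
for t in (corner1Transform, corner2Transform):
    t.AddObserver(slicer.vtkMRMLTransformNode.TransformModifiedEvent, updateROI)
```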

Hi, this is for @lassoan and @cpinter:
Are you familiar with Tom Goddardā€™s ChimeraX software? Itā€™s an Open Source software designed for chemical purposes, but it can handle DICOM images (including some limited Volume Rendering and Surface features) and has the best virtual reality module Iā€™ve seen so far. ChimeraX can somehow display floating standard menus in the VR environment, so you can access any command you want, and interact with them in real time inside VR (including configuration of Volume Render histograms, etc.). Is it possible to reach this kind of staff in 3D Slicer in the near future?
ChimeraX has very limited Volume Render and Surface Segmentation capabilities, I imagine a sort of mutual collaboration to improve both developmentsā€¦ what you think?

These features are already available in VTK's virtual reality interface, and we are ready to take advantage of them. We just need to update to the latest VTK version, which is taking longer than anticipated (it is a work in progress; see the list of open issues here). I would expect the update to be completed in a couple of weeks. After that, it should not take more than a few weeks of work to make the menus available in virtual reality, but I'm not sure when the work will start. @cpinter and I always have several projects to work on, but we may still find time. Things can go much faster if you can join the effort, or if anyone else shows up who is interested in contributing.

Slicer's virtual reality support is actually very powerful and flexible (all Slicer features are available, and there are additional features that are only available in the immersive view); it is just not possible to activate things from the immersive view yet. So, the missing piece is really just a convenient immersive GUI.


Andras, of course I want to contribute; unfortunately, I don't know anything about programming, but I will be glad to help in any other way within my reach. I'm using a standard Oculus Rift with two sensors. Eventually I can get access to an HTC Vive, but the Oculus is much more practical.

Just to confirm what @lassoan has already said: after some technical hurdles, the very next thing in SlicerVR will be the floating menus (see here). We have wanted to do this for almost two years, and now that VTK9 is integrated with Slicer, we need to hammer out a few details and then we can work on the in-VR UI. Please note that there is no dedicated funding for this; it will be done in our free time, so you may need to be patient, even though this is a project I'd really love to work on. As an alternative, you can help get funding, in which case please contact me.