Hi, I have a few questions about virtual reality implementation in Slicer.
- I have a volume rendering that will not appear on Virtual Reality even though it appears on my 3D views. Why is this so and how do I fix it?
Ex: Perspective from Slicer Desktop:
Perspective from Slicer Virtual Reality:
Is there any way to implement segmentation via Virtual Reality (ex: using the remote to segment)?
I cannot see my Oculus touch controllers in Slicer, making it difficult to move around segmentations. Is there a setting I can change to make them visible?
Thanks for the prompt reply.
2) I will continue to follow the development of the VR segmentation tool. Sounds very cool and would be very helpful!
Regarding 1) and 3): I am currently working in Slicer 4.10.1 but I still do not see the controller models in the virtual reality view (after checking the “Controllers visible” checkbox), nor the volume rendering, even after restarting the app and recreating the scene. I have also downloaded the latest nightly version of Slicer 4.11.0 and still do not see the volume rendering or controller models. Am I missing a step?
Make sure you have updated to the latest version of the SlicerVirtualReality extension (you can check for updates and then install them in the Extension Manager).
Controller display works fine for me with Windows Mixed Reality headsets and HTC Vive. I don’t have access to an Oculus Rift, but if for any reason you don’t see the controllers, you can always show where a controller is by enabling controller transforms and then applying those transforms to any model.
Volume rendering works fine for me with Windows Mixed Reality headsets and HTC Vive. Can you share an example scene in which volume rendering does not show up in virtual reality?
I have attached an example scene in gdocs:
How do I enable the controller transforms? They are visible in Slicer as seen below; however, toggling the view on and off does nothing. If I apply the controller transform to the models, I am still unable to see the controllers.
Thanks for the help.
In your scene, Volume Rendering is hidden in the virtual reality view. You can enable it in Volume rendering module / Inputs section / View:
As I wrote above, enable the controller transforms (they appear in the scene, so you’ve done this correctly), but don’t click the eye icon of the transform (that would only visualize the transform within a region of interest you specify). Instead, load any model (STL, OBJ, … file) that you would like to use as the controller model and apply the controller transform to that model.
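If you prefer to script this step, here is a minimal sketch. The helper name is my own; `SetAndObserveTransformNodeID` and `GetID` are standard MRML node methods. The function takes the nodes as arguments so it works regardless of how you look them up:

```python
# Hypothetical helper: parent a loaded model under a controller transform so
# the model follows the controller in the virtual reality view.
def attachModelToController(modelNode, controllerTransformNode):
    # Parenting is done by observing the transform node's ID; the model will
    # then move together with the controller transform.
    modelNode.SetAndObserveTransformNodeID(controllerTransformNode.GetID())
```

In Slicer’s Python console this could be called with, for example, `attachModelToController(slicer.util.getNode('MyControllerModel'), controllerTransformNode)`, where the controller transform node name depends on your SlicerVirtualReality version.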
By the way, is there a particular reason you chose an Oculus Rift headset? If you need high quality (absolute tracking, additional controllers, high resolution, etc.), HTC Vive headsets are better; if you need something inexpensive or easily portable (no need for external trackers), Windows Mixed Reality headsets are better.
Thank you so much for all the help! The reason I’m using Oculus is because I’m borrowing it temporarily from someone else to explore the virtual reality feature on slicer.
Hi, sorry, I had another question about the views. Is there a way, through Python, to automatically set the views to include the virtual reality view? I’ve attempted to do it by getting the volume rendering node:
volumeRendering = slicer.mrmlScene.GetNodesByName('GPURayCastVolumeRendering').GetItemAsObject(0)
and setting the view of the volume rendering node like so:
however, doing so does nothing. I’ve also noticed the vrViewNodeID is different from a regular view node ID (vtkMRMLVirtualRealityViewNode vs. vtkMRMLViewNode). Is there any way to do this?
This is not expected to work (you probably got an error as well). Instead, call AddViewNodeID on the display node.
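A sketch of the corrected call (the helper name is mine; `AddViewNodeID` is the standard `vtkMRMLDisplayNode` method that appends a view to the display node’s list of views rather than replacing it):

```python
# Hypothetical helper: make a display node (e.g. the volume rendering display
# node) visible in an additional view, such as the virtual reality view.
def showDisplayNodeInView(displayNode, viewNode):
    # An empty view node ID list means "visible in all views"; adding IDs
    # restricts visibility to exactly the listed views.
    displayNode.AddViewNodeID(viewNode.GetID())
```

In Slicer’s Python console this could be used as, for example, `showDisplayNodeInView(volumeRendering, slicer.mrmlScene.GetFirstNodeByClass('vtkMRMLVirtualRealityViewNode'))`.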
Thanks for the help,
Just wanted to write an update: I wrote a small workaround for the “eraser effect” in virtual reality (in the meantime, before you integrate the Qt GUIs in VR). Through this code, a sphere segment is created in VR and transformed to follow the right-hand controller. When I move the sphere around and it intersects a part of a segmentation that needs to be erased (i.e., the spine), I press a button on the keyboard that erases it (using “Subtract” in the “Logical operators” effect). What I am wondering is why, every time I press this button, the virtual reality screen goes black for 1–3 seconds before the altered scene is shown. Is there any way to fix this, or is this inevitable because Slicer is running background processes?
The problem should go away if the segmentation is updated more quickly. For example, you can disable smoothing (checkbox in the “Show 3D” button’s drop-down menu) to make 3D view updates orders of magnitude faster.
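The same setting can also be changed from Python; a minimal sketch (the helper name is mine; `SetConversionParameter` is a real `vtkSegmentation` method, but verify the `"Smoothing factor"` parameter name in your Slicer version, as I am assuming it here):

```python
# Hypothetical helper: disable closed-surface smoothing for a segmentation so
# that 3D view updates after each edit become much faster.
def disableSurfaceSmoothing(segmentationNode):
    segmentation = segmentationNode.GetSegmentation()
    # A smoothing factor of 0 disables smoothing of the closed surface
    # representation (assumed parameter name: "Smoothing factor").
    segmentation.SetConversionParameter("Smoothing factor", "0.0")
```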
We are looking to be able to control the ROI in VR with controllers. Has anyone been able to figure out visualizing and controlling the ROI in VR space?
You can move the ROI without any programming by adding a model that the user can grab and setting the same parent transform for both the model and the ROI.
If you also need to change the ROI size with the controllers, then one option is to add a few models to the scene (each serving as a handle that can be grabbed and moved using the controllers) and implement a small Python script that adjusts the ROI position/size based on the parent transforms of these handles.
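The geometric core of such a script could look like this (a plain-Python sketch, independent of the Slicer API; it derives the ROI center and per-axis radius from the world positions of two handles placed at opposite corners):

```python
# Hypothetical sketch: compute an ROI's center and per-axis radius (half-size)
# from the world positions of two handle models at opposite corners.
def roiFromCornerHandles(corner1, corner2):
    # Center is the midpoint of the two corners.
    center = [(a + b) / 2.0 for a, b in zip(corner1, corner2)]
    # Radius along each axis is half the absolute extent between the corners.
    radius = [abs(a - b) / 2.0 for a, b in zip(corner1, corner2)]
    return center, radius
```

The result could then be pushed to the ROI node with `roiNode.SetXYZ(*center)` and `roiNode.SetRadiusXYZ(*radius)` (standard annotation ROI node methods), re-running the computation whenever a handle’s parent transform is modified.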
Hi, this is for @lassoan and @cpinter:
Are you familiar with Tom Goddard’s ChimeraX software? It’s open-source software designed for molecular visualization, but it can handle DICOM images (including some limited volume rendering and surface features) and has the best virtual reality module I’ve seen so far. ChimeraX can somehow display floating standard menus in the VR environment, so you can access any command you want and interact with it in real time inside VR (including configuring volume rendering histograms, etc.). Is it possible to achieve this kind of thing in 3D Slicer in the near future?
ChimeraX has very limited volume rendering and surface segmentation capabilities, so I imagine a sort of mutual collaboration could improve both projects… what do you think?
These features are already available in VTK’s virtual reality interface, and we are ready to take advantage of them. We just need to update to the latest VTK version, which is taking longer than anticipated (it is a work in progress; see the list of open issues here). I would expect the update to be completed in a couple of weeks. After that, it should not take more than a few weeks of work to make the menus available in virtual reality, but I’m not sure when the work will start. @cpinter and I always have several projects to work on, but we may still find time. Things can go much faster if you can join the effort, or if anyone else shows up who is interested in contributing.
Slicer’s virtual reality support is actually very powerful and flexible (all Slicer features are available, and there are additional features that are only available in the immersive view); it is just not possible to activate things from the immersive view yet. So the missing piece is really just a convenient immersive GUI.
Andras, of course I want to contribute. Unfortunately I don’t know anything about programming, but I will be glad to help with any other matter within my reach. I’m using a standard Oculus Rift with two sensors. Eventually I may have access to an HTC Vive, but the Oculus is much more practical.
Just to confirm what @lassoan has already said: after some technical hurdles, the very next thing in SlicerVR will be the floating menus (see here). We have wanted to do this for almost two years, and now that VTK9 is integrated with Slicer, we need to hammer out a few details and then we can work on the in-VR UI. Please note that there is no dedicated funding for this, so it will be done in our free time, and you may need to be patient even though this is a project I’d really love to work on. As an alternative, you can help get funding, in which case please contact me.