Collaborative planning in Virtual Reality

Hi
I was wondering if there has been any further follow-up to the older topic Collaborative surgery planning in virtual reality?

I have managed to get a good internal view of a heart using volume rendering by removing the lumen with masking, turning the volume hollow. Our interventional cardiologist was quite impressed with the ability to basically lean into the model and see the relationship between the RV outflow tract and the coronary artery. Originally he had asked me to make a print; however, viewing it this way was quicker and also made the decision on where to cut the model redundant.

Ideally being able to have two people looking at the same view would be of interest.
(if there has been no further development I’ll have a try at getting it going myself)

David
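For readers curious about the hollowing technique David describes, the core idea can be sketched in plain numpy (this is an illustrative sketch, not the actual Slicer workflow; in Slicer you would typically use the Segment Editor with a mask, and the variable names and HU values below are made up): voxels inside the blood-pool (lumen) mask are set to a value below the volume-rendering opacity threshold, so the chamber interior renders as empty.

```python
import numpy as np

def hollow_volume(volume, lumen_mask, fill_value=-1024):
    """Return a copy of `volume` with voxels inside `lumen_mask`
    replaced by `fill_value` (e.g. air, in HU), so volume rendering
    with an opacity threshold above `fill_value` shows a hollow interior."""
    hollowed = volume.copy()
    hollowed[lumen_mask] = fill_value
    return hollowed

# Toy example: a "myocardium" block with a contrast-filled blood pool inside.
volume = np.full((20, 20, 20), 300)    # contrast-enhanced tissue, ~300 HU
lumen = np.zeros_like(volume, dtype=bool)
lumen[5:15, 5:15, 5:15] = True         # blood pool region to carve out
hollowed = hollow_volume(volume, lumen)
```

With an opacity transfer function that maps values below, say, 100 HU to fully transparent, the carved-out region disappears and you can "lean into" the model.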

Hi David, this is Mohan. I am interested in working with you. I have been visualizing some radiology DICOM models in AR. I am interested in collaboration and single-session meetups, sharing medical data digitally using AR and VR. Please reply and we can take it further.

@mohan.kosireddy what software do you use / plan to develop?

@justdcinaus We use virtual reality for RVOT and other cardiac device implant evaluations extensively. Since single-user setup is trivial and other users can see the Slicer desktop screen and the live virtual reality stream, making collaborative views easier to set up has not been a high priority. For now, we can help with setting up shared scenes between multiple workstations and giving advice on how to automate this scene setup using Python scripting.

I use Blender and Autodesk Maya for 3D model enrichment, apart from Slicer.
For augmented reality, I use Unity3D. Hardware: Magic Leap, HoloLens.

Unity3D has great hardware support (only Unreal Engine comes close), but it is very limited when it comes to displaying and interacting with medical images.

There are use cases where Unity3D’s hardware support is essential (HoloLens, Magic Leap) and basic visualization capabilities can be enough - for example patient communication, medical training, and some simple interventional uses.

However, for surgical planning you need more powerful tools than what gaming engines can provide. For example, in 3D Slicer’s virtual reality implementation you can display a 4D CT or echo image of the beating heart (moving in 3D) using volume rendering while you are placing your device. You can complete the full workflow in one software environment, from DICOM import to final virtual reality visualization and interaction: no need for segmentation (as sophisticated volume rendering is available), no need for data exporting, uploading to the device, etc.

If the goal is surgical planning then I would not recommend redeveloping all the planning tools in a gaming engine (Unity3D or Unreal) but instead using/customizing/extending 3D Slicer’s Virtual Reality extension.

Hi Andras,

I wonder if there is any possibility of having AR. My understanding is that the virtual reality extension does not support AR, so I was thinking of Unity3D for AR.
I do more orthopedics and would love to register ‘3D surgical planning (implant positioning, osteotomy location/angle, etc)’ to the real patient based on anatomical landmarks, shape matching, or external coordinate systems. I understand hardware support is necessary.
Basically, I am interested in using AR for navigating surgery.

Thanks in advance.

Sun
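The landmark-based registration Sun mentions boils down to a rigid point-to-point registration. A minimal numpy sketch of the standard SVD (Kabsch) solution, using made-up landmark coordinates (the function name and example values are illustrative, not from any particular navigation system):

```python
import numpy as np

def rigid_register(source, target):
    """Find rotation R and translation t minimizing ||R @ p + t - q||
    over paired landmark sets (N x 3 arrays), via the SVD/Kabsch method."""
    src_mean, tgt_mean = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_mean).T @ (target - tgt_mean)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_mean - R @ src_mean
    return R, t

# Made-up example: planning-space landmarks rotated 90 degrees about z
# and shifted, as if re-measured on the patient.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
tgt = src @ Rz.T + np.array([10.0, 5.0, 2.0])
R, t = rigid_register(src, tgt)
```

In practice the hard part is not this computation but acquiring accurate, reproducible landmark positions on the patient, which is where the hardware limitations discussed below come in.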

Unity3D is a reasonable choice today for displaying and interacting with surface models in AR.

You can certainly create a simple AR application that displays the surgical plan somewhere in the field of view. This is somewhat useful because the surgeon does not need to look up to an overhead monitor, but also somewhat less convenient because you need to wear a bulky device.

The big promise of AR is in-situ visualization, i.e., showing a virtual needle guide or the intended location for an implant, so that you can align physical objects with virtual objects/guides. Unfortunately, based on my own experience and what I have learned from discussions with other research groups and from papers, the AR headsets available today are not suitable for this. Today’s headsets are not optimized for interaction within reaching distance (30-80cm); their minimum focus distance is much farther (200cm for HoloLens, 100-200cm for Magic Leap), therefore you cannot see a virtual object and a real object at the same position (your eye cannot focus on both objects at the same time). Tracking stability is not sufficient yet, either: we would need tracking accuracy of about 0.1mm or so to achieve a total system accuracy of 1mm, but virtual objects on a HoloLens device may easily drift 5-10 millimeters (you can of course add external tracking, but that adds a whole lot of other issues).
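The accuracy argument above can be made concrete with a toy error budget (every component value here is an illustrative assumption, not a measurement): if independent error sources combine roughly as a root-sum-of-squares, a single large term such as headset drift dominates the total.

```python
import math

def total_error_rss(components_mm):
    """Root-sum-of-squares combination of independent error components (mm)."""
    return math.sqrt(sum(e * e for e in components_mm))

# Hypothetical budget terms: registration, calibration, tracking, display (mm).
good_tracking = total_error_rss([0.5, 0.5, 0.1, 0.3])   # sub-mm tracking
drifting_hmd  = total_error_rss([0.5, 0.5, 5.0, 0.3])   # 5 mm hologram drift
```

With sub-millimeter tracking the total stays under 1mm, while a 5mm drift term alone pushes the total to about 5mm, regardless of how good the other components are.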


Hi David,

Would you please let me know how you managed to make the heart hollow? I have tried volume rendering and then thresholding to make the heart hollow, without success. Can you please explain how you masked the lumen and then hollowed the heart? I would like to apply this approach.

Thanks
Sarv

Hello All,
I am trying to integrate Magic Leap (VR initially) with Slicer for detecting a burr hole using MRI images. Has anyone been able to integrate Slicer with Magic Leap?

Thanks

We (and several other groups) used the HoloLens for burr hole placement (see paper, video). Since minimal user interaction was needed (patient registration, and after that just show/hide the skin surface, brain, and planned drill hole) and Unity already supported the HoloLens, we decided to use Slicer for creating the models and Unity for displaying them in AR. After initial feasibility was demonstrated on dozens of phantom studies and 15 patient cases, we put the project on hold: while the system worked for this simple, non-demanding clinical application, we were not confident that currently available technology can be effectively used for more difficult procedures (where higher accuracy and more complex user actions are needed and an AR system could have significant clinical utility).

Since we did not proceed further than initial feasibility, we did not complete our live Slicer/Unity bridge for sending models and transforms from Slicer to Unity. Still, you might find bits and pieces of the software that we developed useful: HololensQuickNav, OpenIGTLinkUnity

Thomas Muender and his team from Uni Bremen worked on a Slicer/Unity bridge, too. See
Project week page and repository.

@Amine_Ziane recently asked about using Unity for zSpace device - see transfer scene files from 3DSlicer to Unity3D - #17 by Amine_Ziane. Maybe you can try to work together.
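For those exploring the OpenIGTLinkUnity route mentioned above, the wire protocol itself is simple. A sketch packing the fixed 58-byte OpenIGTLink v1 message header with Python's struct module (the CRC is left at zero for brevity, so a strict receiver would reject this message; the function name is made up for illustration):

```python
import struct

# Big-endian: version (uint16), type (12 bytes), device name (20 bytes),
# timestamp (uint64), body size (uint64), CRC-64 (uint64) = 58 bytes total.
IGTL_HEADER_FORMAT = ">H12s20sQQQ"

def pack_igtl_header(msg_type, device_name, body_size, timestamp=0, crc=0):
    """Pack the fixed 58-byte OpenIGTLink v1 header.
    `msg_type` (e.g. "TRANSFORM") and `device_name` are NUL-padded ASCII."""
    return struct.pack(
        IGTL_HEADER_FORMAT,
        1,                              # protocol version 1
        msg_type.encode("ascii"),
        device_name.encode("ascii"),
        timestamp,
        body_size,
        crc,
    )

# A TRANSFORM message body is a 3x4 float32 matrix: 12 * 4 = 48 bytes.
header = pack_igtl_header("TRANSFORM", "SlicerToUnity", body_size=48)
```

In practice you would let Slicer's OpenIGTLinkIF module (or the pyigtl/OpenIGTLink libraries) handle framing and CRCs; this only shows what travels over the socket.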

@lassoan , Thanks a lot for the information.
My actual goal is to use MRI images and create path planning for neuro robots. I am exploring options to import 3D Slicer images into Magic Leap.

After browsing through various discussions in this forum, a few options pop up.

  1. As you mentioned, create a bridge between 3D Slicer and Unity3D.
  2. Create a bridge between 3D Slicer and Unreal Engine.

@Amine_Ziane , I would be glad to collaborate. Please let me know.

Surgical planning typically occurs before the procedure in a desktop 3D environment (but immersive virtual reality looks promising, too). Augmented reality in the operating room is extremely limited due to constraints on available time, space, sterility requirements, etc. What is your planned workflow?

Would you please help me do volume rendering of the MRI images I have in Unity?

That's true. Augmented reality in the operating room would be great, but I doubt its effectiveness for life-critical applications with the current limited AR capabilities and resolutions.

In my opinion, focusing on VR could be a good start for now.


You can try to display exported surface models instead of using volume rendering. Structures that are easy to visualize using volume rendering are most often easy to segment, too.

Native volume rendering packages in Unity are extremely limited. They cannot reproduce the same results as VTK, but if your specific needs are fulfilled by any of the volume rendering packages available for Unity then you can certainly use them. I cannot help with more specifics, such as which Unity asset to use and how - I just remember that what we used was an inexpensive package (a few tens of dollars) that could do simple opacity mapping and plane clipping. If you deploy to desktop then you may be able to use VTK in Unity via Activiz (Kitware’s paid C# wrapper for VTK).
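The surface-model route suggested above essentially means shipping mesh files (STL/OBJ) that Unity can load directly. As a sketch of what such an export contains, here is a hypothetical helper that writes triangles as an ASCII STL file (Slicer's segmentation export produces proper STL/OBJ files for you; this only illustrates the format):

```python
def write_ascii_stl(path, name, triangles):
    """Write triangles (each a list of three (x, y, z) vertex tuples)
    as an ASCII STL file. Facet normals are written as zero vectors;
    most viewers recompute them from the vertices."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# One triangle in the z = 0 plane.
write_ascii_stl("tri.stl", "demo", [[(0, 0, 0), (1, 0, 0), (0, 1, 0)]])
```

For real anatomy you would export the segmentation from Slicer (which handles binary STL, coordinate systems, and units) and import the file as a Unity asset, rather than writing meshes by hand.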

Yes, I can explore the options.

Okay, thanks. So how can I display the exported surface models instead of using volume rendering in Unity3D?

See response here: transfer scene files from 3DSlicer to Unity3D - #22 by lassoan