Short answer:
- If possible, use virtual reality instead of augmented reality: it is already well supported within Slicer and much more mature in general.
- Augmented reality is not ready for real-world use yet, but you can implement quick prototypes using Unity for early feasibility tests. Slicer can be used to create surface models that these prototypes can use. We plan to have OpenIGTLink-based real-time transfer of meshes from Slicer to Unity-based applications (not ready yet, contributions are welcome).
Long answer:
The first question is why you would want to connect a HoloLens (or a Magic Leap, which can do essentially the same) to Slicer. What application of augmented reality do you have in mind?
We’ve been evaluating HoloLens for various clinical applications for 2 years (burr hole location planning, currently being tested in a study in the OR on patient cases; surgical skill training; anatomical training for needle insertion; etc.) and we find that while the technology is very promising, current headsets still have very significant limitations. The most promising use of augmented reality would be in situ visualization within arm’s-length distance, but unfortunately none of the current headsets (HoloLens, Magic Leap, Meta, etc.) can do that, because the focal plane is placed at around 100-300cm instead of 40-70cm: you cannot see virtual objects and real objects in focus at the same time (you lose the sense of depth, so you cannot align shapes in 6 DOF). There are secondary issues, too, such as instability of virtual objects (you often get 3-5mm errors when you move around an object), size and weight of the headset, and lack of computational power on untethered devices.
HTC Vive Pro has video pass-through capability, which allows using all the virtual reality infrastructure for augmented reality (SlicerVirtualReality could be used for this). However, image quality, lag, fixed focal distance, and limited dynamic range under strong, focused OR lighting might be problematic with video pass-through augmented reality.
Can you use virtual reality instead?
If you don’t need in-situ visualization at arm’s-length distance then you may just as well use virtual reality. Virtual reality headsets do not have any of the limitations listed above: they are ready to use for several end-user applications, they are inexpensive, and a single software interface (OpenVR) can be used for all major headsets (HTC Vive, Windows MR, Oculus Rift).
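For reference, once the SlicerVirtualReality extension is installed, showing the current 3D view in a connected headset takes only a couple of lines in Slicer’s Python console (a minimal sketch, assuming the extension’s logic exposes the activation method as documented):

```python
# Show the current 3D scene in the connected OpenVR headset
# (requires the SlicerVirtualReality extension to be installed).
vrLogic = slicer.modules.virtualreality.logic()
vrLogic.SetVirtualRealityActive(True)
```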
What visualization would you need?
If you only need rendering of surface meshes then you can use simple Unity applications to render them and implement simple interactions. Since headsets are not yet ready for real-world use anyway, these quick throw-away prototypes are appropriate. We’ve been working on implementing an OpenIGTLink interface for sending segmented models from Slicer to Unity-based applications (so that you don’t need to build and deploy a new application for each patient case), but it’s not ready yet; a sketch of the Slicer side is shown below.
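On the Slicer side, setting up the connection is already possible with the SlicerOpenIGTLink extension; it is the Unity-side receiver that is still missing. A minimal sketch of what the Slicer side could look like (assuming the extension is installed; `MyModel` is a hypothetical model node name):

```python
# Create an OpenIGTLink server in Slicer and push a surface model through it
# (requires the SlicerOpenIGTLink extension; the Unity-side receiver is the
# part that is not ready yet).
connector = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLIGTLConnectorNode')
connector.SetTypeServer(18944)  # default OpenIGTLink port
connector.Start()

modelNode = slicer.util.getNode('MyModel')  # hypothetical node name
connector.RegisterOutgoingMRMLNode(modelNode)
connector.PushNode(modelNode)
```

Until this works end-to-end, a practical workaround is to export the segmented model to a file (for example with `slicer.util.saveNode` to OBJ or STL) and import it into the Unity project as an asset.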
Volume rendering is feasible, too: you can buy a volume renderer from the Unity Asset Store for a few tens of dollars. Computational capabilities of untethered headsets are limited and these volume renderers are not as sophisticated as VTK’s volume renderer, but they might be OK for some applications.
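For comparison, the VTK-based volume rendering mentioned above can be enabled in Slicer with a few lines of Python (a sketch; `MyVolume` is a placeholder for a loaded volume node name):

```python
# Enable VTK volume rendering for a loaded volume in Slicer
# ('MyVolume' is a hypothetical node name).
volumeNode = slicer.util.getNode('MyVolume')
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.SetVisibility(True)
```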