SpatialLabs stereoscopic display

I recently purchased the SpatialLabs monitor (the Pro version) and was wondering if anyone has experience with this or something similar. The display uses two eye-tracking cameras and a lenticular screen to show a three-dimensional image without requiring glasses. The software is relatively barebones, but so far I have been able to view STL and OBJ files successfully. There are also developer notes on integrating with OpenXR and Unreal Engine, but I have limited experience with those.

I would like to find a way to view volume renderings on the display somehow, since that would allow viewing in the OR without 3D glasses or other specialized hardware. I found some prior forum posts about OpenXR integration into 3D Slicer, but I’m not sure what came of it. Basically, it seems like I have to make the program think that its output is going either to a VR headset or to a stereoscopic display. Any suggestions?

Recent efforts in stereo viewing have focused on AR/VR headsets, because these devices made most 3D displays obsolete. Headsets are more portable, cheaper (a Meta Quest headset is $300-500), provide a larger field of view (full immersion), and offer full 6-degree-of-freedom viewing and interaction. When you need to see the surrounding real world, you can use an AR headset, such as the HoloLens. Both AR and VR headsets are usable in Slicer via the SlicerVirtualReality extension.

For headset-free viewing you can use holographic displays, such as the LookingGlass, which is already supported in Slicer (via the SlicerLookingGlass extension). Holographic displays have the advantage that they can be viewed by many viewers at once (no head tracking is needed).

Single-user 3D displays don’t really have much of a place anymore, other than perhaps competing on price or image resolution. I think such monitors (e.g., the zSpace) are already usable with Slicer (maybe via the SlicerVirtualReality extension?). If SpatialLabs provides an OpenXR interface, then you can probably use the SlicerVirtualReality extension, as sketched below. Otherwise, you can ask the manufacturer to contact Kitware about how they could add VTK support for their monitor.
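For example, if the SpatialLabs runtime can register itself as the active OpenXR runtime on the system, then something along these lines in Slicer’s Python console may be all that is needed to start stereo rendering. This is only a minimal sketch based on the SlicerVirtualReality module logic; the exact method names may differ between extension versions:

    # Run in Slicer's Python console with the SlicerVirtualReality extension
    # installed and the target OpenXR runtime set as the active runtime.
    vrLogic = slicer.modules.virtualreality.logic()
    vrLogic.SetVirtualRealityConnected(True)  # create the VR view and connect to the runtime
    vrLogic.SetVirtualRealityActive(True)     # start rendering the scene to the device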

I appreciate the response… I’ve used the Looking Glass, but the hardware requirements are steep and the price rises dramatically once you move beyond the Portrait device. The SpatialLabs displays seem like a better intraoperative option, since resolution and performance are better on most hardware and there is no need to wear specialized glasses or a VR headset. If you haven’t tried one of the eye-tracking monitors, I highly recommend it. The main drawback is that it’s one viewer at a time, but that’s less of a concern when it’s used in the OR as a reference.

Will definitely try your suggestions. The volume renderings give a lot more detail than the segmentations (at least when I do them), and it would be great to see how they look on there.

Slicer can now display volume rendering directly in the HoloLens 2. The huge advantage of augmented reality headsets over 3D displays is that you can place the volume inside the patient at the correct physical location and use it for guidance. Simply aligning the visible skin surface manually can be sufficiently accurate for larger, superficial targets. There are software libraries for automatic alignment and more accurate tracking, but those are not yet integrated into Slicer.

Unfortunately, it seems like the resolution on the AR goggles isn’t quite there yet. The Ultraleap is pretty good, but the viewing window is small. The HoloLens 2 has similar issues, but we have investigators who use it for telestration during simple procedures. We use VR and AR (Quest 2, and Quest 3 passthrough) with the MedicalHolodeck program for planning the approach, and 3D Slicer for fast volume rendering or segmentation/printing.

I had been struggling to find a good intraoperative solution for a roadmap-type reference, since most of my partners wouldn’t be willing to wear hardware/goggles in the OR. These displays seem to fit that need: they have two cameras that track eye and head movements and adjust the view to your position. The illusion is very convincing, and I have trialed some segmentations with it with good results. The problem is that segmentations take time, and sometimes volume renderings get the job done. The version I’m using is the Acer SpatialLabs View Pro, but there are nicer versions made by Sony and others.

The Looking Glass product is excellent (I have an LKG Go preordered), but cost/size is an issue, and it requires significant graphics hardware to render from so many angles. It’s the only non-goggle solution I’m aware of with group viewing, though.

Thanks again for your response. I don’t have any programming experience, so your replies here and in the forums have been very helpful. If you know of any AR overlay capabilities for laparoscopy, I would be very interested to hear about them, since most of my cases are done that way (pediatric surgery).

The HoloLens 2 is excellent for in situ visualization. Both the field of view and the resolution are sufficient.

If you just want to display 3D images somewhere above the patient, then a 3D monitor is usable for that. However, stereopsis is only one of many depth cues (you also perceive depth from lighting, motion, occlusion, size, texture, etc.), so while a stereo display improves depth perception, it is not a game changer. The proof of this is the 3D TV boom of the early 2010s: stereo displays were available at very low price points, in many sizes, and using various technologies (active glasses, passive glasses, and glasses-free), yet they still did not gain traction in clinical use. It is still possible that 3D monitors will make a comeback (maybe because you find some really good applications), but right now it seems that augmented reality headsets have more potential.

I might just have to give the HoloLens 2 another try then. It is still fairly cumbersome in the OR, but there are not too many options. And I’ve learned that the only way to really assess these devices is to wear them yourself. The listed display resolution is 1440x936, whereas the one I’m using is a 4K display with each eye field receiving 1920x1080. How detailed are these images when viewed through the device? The best use cases in pediatrics are things like conjoined twins and tumors with complex, irregular vasculature. Those can be really painful to segment, and some of the vessels are 1-2 mm in diameter (the IVC in some of these kids is around 1-1.5 cm).

The view on the spatial display reminds me a little of the old active-shutter-glasses 3D, especially in programs that aren’t optimized for stereoscopic display (it looks like a bunch of cardboard cutouts moving in parallel). Also, if you don’t optimize the focal length, it’s easy to end up cross-eyed. Ten years ago I owned both a 3D Vision monitor and a 3D television, but the technology seems to have improved significantly since then. Again, thanks for your insight - there isn’t much experience with this kind of thing in the pediatric surgery world, so it’s nice to discuss with someone who knows this stuff.

How useful do you find the additional depth cue of the 3D monitor? What is it that you can see on the 3D monitor that you cannot already see on a regular 2D monitor? You can perceive depth more directly and slightly move your head to see a bit behind structures, but I’m wondering whether this is really significant.

Regardless of resolution, price, ease of use, etc., 3D monitors cannot compete with the HoloLens unless they make the image appear to float inside the patient. The main challenge in using the HoloLens is not image quality, but:

  • how conveniently (and quickly and accurately) you can align the virtual model with the patient
  • how to keep the model position and shape up-to-date during the procedure as things move around
  • how to interact with the device: there are many options - hand gestures, voice commands, a controller in a sterile bag - but each has its own limitations
  • how to wear it: it is quite comfortable, but you may still want to flip it up or remove it from your head, because it still darkens the view a little bit or may add some extra glare, and it may be in the way if you want to use a microscope

Still, if you think seeing the 3D model inside the patient in the correct physical location could be useful, then it is worth a try. You would need a technician to help with it in the OR (preparing the visualization, putting the device on and taking it off your head, helping with the controls, etc.).

From what I’ve seen, volume renderings or segmentations simply shown on the OR monitors can be difficult to interpret unless you’re able to move the model. Our current solution is to connect our laptop workstation (RTX 4070 with 64 GB RAM) to the DVI or HDMI port on the boom in the OR. The image can then be shown on as many displays as we would like in the OR, usually between two and four. Movement of the model is done using a Leap Motion 2 controller, which lets you manipulate the model while preserving sterility. What the 3D display does is minimize the amount of manipulation required to gain an understanding of the image.
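For anyone without a tracking controller, Slicer can also keep the model moving by itself, which helps keep the rendering interpretable on the OR monitors. A minimal sketch using standard Slicer scripting calls (the timer interval is an arbitrary choice; adjust to taste):

    # Run in Slicer's Python console: continuously rotate the first 3D view
    # so the rendering keeps moving on the OR displays without a controller.
    import qt

    threeDView = slicer.app.layoutManager().threeDWidget(0).threeDView()

    def rotateView():
        threeDView.yaw()  # rotate the camera by one small increment

    rotationTimer = qt.QTimer()
    rotationTimer.timeout.connect(rotateView)
    rotationTimer.start(50)  # one step every 50 ms; call rotationTimer.stop() to stop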

I think we might be using these for different things. For us, AR overlay during open surgery may not add much, because the structures are generally small and you can usually identify the limits of solid-organ tumors on palpation. The challenging open situations for us are the ones with complex networks of aberrant vessels within a tumor. The visual fidelity of the HoloLens may not be enough to overlay multiple 2-3 mm vessels and follow them as the tumor is manipulated about its axis (but I would be happy to be proven wrong).

For us the focus is on the preoperative planning phase, or on providing a roadmap for reference in the OR. The VR headsets provide good pictures for the first but are not great for the second. The other consideration for us is that a lot of our complex cases are done under magnification with loupes on. Transitioning to the HoloLens and back throughout the case would be cumbersome, and it might be difficult to maintain sterility. I’m curious how you and others have dealt with this in the past.

Do you have particular cases where you find this especially helpful? There is some overlap between adult and pediatric surgery, and I’m always interested in finding new ways to improve how we do things. A lot of our complex cases are done laparoscopically or robotically (choledochal cysts, anorectal malformations, etc.). AR solutions that provide overlay data during laparoscopy, or during open surgery under 2.5-3.5x magnification, would be ideal, but I don’t think those technologies exist yet. Thanks again for answering my questions. I’m still learning as I go here.

This is indeed useful. I would just add that there are many other ways to improve depth cues and make the images easier to interpret. For example, we recently added colored volume rendering and ambient shadows (see some example images here and here), which can be used in addition to, or instead of, stereo volume rendering to greatly improve understanding of the 3D renderings.
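If anyone wants to experiment from the Python console, shaded volume rendering can be enabled with a few standard scripting calls. A minimal sketch (the newer colorized rendering and ambient-shadow options are configured in the Volume Rendering module GUI, and their scripting interface may vary by Slicer version):

    # Run in Slicer's Python console: enable shaded volume rendering for the
    # first scalar volume in the scene. Shading adds the lighting depth cue.
    volumeNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLScalarVolumeNode")
    vrLogic = slicer.modules.volumerendering.logic()
    displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
    displayNode.GetVolumePropertyNode().GetVolumeProperty().ShadeOn()
    displayNode.SetVisibility(True)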

I agree, these are two quite distinct use cases. The HoloLens has already proven useful for large, superficial targets and for low-accuracy applications (e.g., giving surgeons confidence when determining the skin incision location), but it may not be ideal for microsurgery.

Many solutions for displaying image overlays in laparoscopes or microscopes have been developed over the past 20 years, but they have not become widely used clinically - probably because they did not work that well in practice. Given the recent progress in imaging AI, it is quite likely that real-time AI image annotations will become available in the products of all the large laparoscopy vendors within a few years.

To add AR to surgical loupes, maybe the easiest solution would be to use digital loupes (like nuloupes or mantis) with external tracking.