Slicer to Magic Leap Integration

Hello,
I am exploring ways to integrate 3D Slicer with the Magic Leap.

I am looking to use SlicerVirtualReality. Can someone help me understand what file format goes in and out of SlicerVirtualReality? This will be helpful for exploring its compatibility with the Magic Leap.

Thanks

As the Magic Leap is not an OpenVR-supported device, it cannot currently be used with SlicerVirtualReality. We are exploring the possibility of moving to OpenXR, but are looking for help in this area.

Thanks @adamrankin. I am looking to use this integration for my research, and I am open to collaboration. Please let me know if we can work together on exploring this.

@adamrankin, can you let me know what file format goes in and out of SlicerVirtualReality? Is it a video stream?

I’m not quite sure I follow. VR just shows you your current 3D scene in a VR environment. Anything you have in your 3D view on your desktop, you’ll see in VR.

Yes, but there is communication between Slicer and the VR environment through the SlicerVirtualReality plugin; that is why the 3D scene in Slicer (desktop) shows up in VR. I am trying to understand what kind of communication that is.

@cshreyas VTKRenderingOpenVR has a special VTK render window that renders the scene once per eye; the rendered images are transferred by OpenVR to the headset.
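
For reference, a minimal sketch for turning the VR view on from Slicer's Python console (assuming the SlicerVirtualReality extension is installed):

```python
import slicer

# Activate the virtual reality view; the current 3D scene is then
# rendered to the headset through OpenVR.
vrLogic = slicer.modules.virtualreality.logic()
vrLogic.SetVirtualRealityActive(True)
```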

@adamrankin We are waiting for the result of a grant in which we included OpenXR integration. We should get word within a few weeks (they promised by the end of March but who knows).

VTK developers had plans to work on OpenXR integration, too. It may be worth asking on the VTK forum for the latest updates.

Thanks @lassoan, @cpinter. From what I heard from Magic Leap support, they have no plans to support OpenXR in the near future.

I am trying to use OpenIGTLink as a bridge between Slicer and the Magic Leap. This would most likely have to be done using OpenCV, publishing the point cloud data to a streaming server.

Do you have any pointers on where to get started in the Slicer code base? Where are the images published in Slicer?

You may consider moving to a device that has a more certain future and supports OpenXR, such as the Microsoft HoloLens.

What data do you plan to send between Slicer and Magic Leap?

Could you clarify? What is the role of OpenCV? What data would you like to get from/send to the Magic Leap?

All incoming data (images, transforms, surface meshes, etc.) appear as MRML nodes. If you set a MRML node as an outgoing node, then any changes to that node are automatically sent via OpenIGTLink.
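
For example, here is a minimal sketch from the Python console (assuming the OpenIGTLinkIF extension is installed; the port and the "MyImage" node name are placeholders):

```python
import slicer

# Create an OpenIGTLink server connector listening on the default port.
connector = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connector.SetTypeServer(18944)
connector.Start()

# Register an existing node as outgoing; any change to it is then sent automatically.
imageNode = slicer.util.getNode("MyImage")  # hypothetical node name
connector.RegisterOutgoingMRMLNode(imageNode)
connector.PushNode(imageNode)  # force an immediate send
```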

Thank you for the clarification.

What data do you plan to send between Slicer and Magic Leap?
I am exploring two options.

  1. Send the image voxels from Slicer through OpenIGTLink to the client. The client uses a surface-extraction algorithm like marching cubes and renders the result in OpenGL on the device (see the sketch after this list).
  2. Or simply send a live compressed video stream from Slicer to the device, and the device just decompresses the video stream and displays it. (I already have a working version of the marching cubes algorithm.)
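
For reference, a minimal, self-contained sketch of the surface-extraction step in option 1, using VTK's marching cubes filter (the ellipsoid source stands in for voxels received over OpenIGTLink; the iso-value is a placeholder):

```python
import vtk

# Synthetic voxel data standing in for an image received over OpenIGTLink.
source = vtk.vtkImageEllipsoidSource()
source.SetWholeExtent(0, 63, 0, 63, 0, 63)
source.SetInValue(255)
source.SetOutValue(0)
source.Update()

# Extract an iso-surface; the resulting vtkPolyData would be rendered
# with OpenGL on the device.
mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(source.GetOutputPort())
mc.SetValue(0, 127.5)  # iso-surface threshold (placeholder)
mc.Update()
surface = mc.GetOutput()
```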

Option 1 has three stages and may not be very performant.
Option 2 looks OK, except that the client would then just be displaying the video.

Can you please suggest the best option?

I am also open to other approaches.
Thanks

If the rendering capabilities of the Magic Leap are limited (slow CPU/GPU, no sophisticated visualization toolkit, such as VTK) then I would render remotely (in Slicer) and just stream the rendered 2D images. This is also beneficial because Magic Leap has been on the brink of bankruptcy for a while now, so it is better to minimize the time you spend getting to know the device and developing code that runs on it.

If you render using Slicer then there is no need to use marching cubes, because you can get much higher-quality images, with more detail, colors, and depth perception, at orders of magnitude faster rendering speed using the Volume Rendering module (raycasting).
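
For example, a minimal sketch that enables default (GPU raycast) volume rendering from the Python console; the node lookup is a placeholder:

```python
import slicer

# Enable default volume rendering for the first scalar volume in the scene.
volumeNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLScalarVolumeNode")
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.SetVisibility(True)
```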

Thanks Andras,
That makes sense. I agree with you; we can just render in Slicer and pass the image stream to the Magic Leap.

I would like to stream all three planes. Should I send 2D images of the three planes separately, or should this be a 3D volume (UseStreamingVolume)?

Should I stream it as image data or as video data?

If you send rendered images then you don't send individual slices, just the fully rendered left and right eye images, probably as a single image in a side-by-side or over-under configuration.
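
For reference, a minimal sketch that captures one rendered frame of the first 3D view into a vector volume node, which could then be registered as an outgoing OpenIGTLink node (the "RenderedFrame" node name is a placeholder):

```python
import vtk
import slicer

# Grab the currently rendered frame of the first 3D view.
view = slicer.app.layoutManager().threeDWidget(0).threeDView()
w2i = vtk.vtkWindowToImageFilter()
w2i.SetInput(view.renderWindow())
w2i.Update()

# Copy the frame into a vector volume node; if this node is set as outgoing
# on an IGTL connector, pushing it sends the frame to the client.
frameNode = slicer.mrmlScene.GetFirstNodeByName("RenderedFrame")
if not frameNode:
    frameNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLVectorVolumeNode", "RenderedFrame")
frameNode.SetAndObserveImageData(w2i.GetOutput())
```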

Thank you, I will try it out and get back if I get stuck.

Now that the Magic Leap 2 has been released, packing a proper Ryzen APU, maybe the perspective has changed?