Hello,
I am exploring ways to integrate 3D Slicer.
I am looking to use SlicerVirtualReality. Can someone help me understand what data formats go in and out of SlicerVirtualReality? This will be helpful for exploring its compatibility with the Magic Leap.
As the Magic Leap is not an OpenVR-supported device, it cannot currently be used with SlicerVirtualReality. We are exploring the possibility of moving to OpenXR, but are looking for help in this area.
Thanks @adamrankin. I am looking to use this integration for my research and I am open to collaboration. Please let me know if we can work together on exploring this.
I’m not quite sure I follow. VR just shows you your current 3D scene in a VR environment. Anything you have in your 3D view on your desktop, you’ll see in VR.
Yes, but there is communication between Slicer and the VR environment through the SlicerVirtualReality plugin; that is why the 3D scene in Slicer (desktop) shows up in VR. I am trying to understand what kind of communication that is.
@cshreyas VTK's RenderingOpenVR module has a special VTK render window that renders two views of the scene (one per eye), which are transferred by OpenVR to the headset.
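For anyone curious what that looks like outside Slicer, here is a minimal sketch in plain VTK, assuming a VTK build with the RenderingOpenVR module enabled and an OpenVR runtime (e.g. SteamVR) available:

```python
import vtk

# The OpenVR variants of the usual renderer/window/interactor trio.
renderer = vtk.vtkOpenVRRenderer()
renderWindow = vtk.vtkOpenVRRenderWindow()
renderWindow.AddRenderer(renderer)
interactor = vtk.vtkOpenVRRenderWindowInteractor()
interactor.SetRenderWindow(renderWindow)

# Any ordinary VTK actor added to the renderer shows up in the headset.
cone = vtk.vtkConeSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(cone.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
renderer.AddActor(actor)

renderWindow.Render()  # initializes the OpenVR session
interactor.Start()     # hands control to the OpenVR event loop
```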
@adamrankin We are waiting for the result of a grant in which we included OpenXR integration. We should get word within a few weeks (they promised by the end of March but who knows).
Thanks @lassoan, @cpinter. From what I heard from Magic Leap support, they have no plans to support OpenXR in the near future.
I am trying to use OpenIGTLink as a bridge between Slicer and the Magic Leap. This would most likely have to be done using OpenCV, publishing the point cloud data to a streaming server.
Do you have any pointers on where to get started in the Slicer code base? Where do the images get published in Slicer?
You may consider moving to a device that has a more certain future and supports OpenXR, such as the Microsoft HoloLens.
What data do you plan to send between Slicer and Magic Leap?
Could you clarify? What is the role of OpenCV? What data would you like to get from/send to the Magic Leap?
All incoming data (images, transforms, surface meshes, etc.) appear as MRML nodes. If you set a MRML node as an outgoing node, then any changes in that node are automatically sent via OpenIGTLink.
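For example, from Slicer's Python console, something like this sets up a server and marks a volume as outgoing (a minimal sketch, assuming the SlicerOpenIGTLink extension is installed; the volume lookup is just for illustration):

```python
import slicer

# Create an OpenIGTLink connector node acting as a server on the default port.
connector = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connector.SetTypeServer(18944)
connector.Start()

# Pick any loaded volume and register it as an outgoing node; every
# Modified() event on it is then pushed to connected clients.
volumeNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLScalarVolumeNode")
connector.RegisterOutgoingMRMLNode(volumeNode)
```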
I am exploring two options.
Option 1: Send the image voxels from Slicer through OpenIGTLink to the client. The client runs a surface extraction algorithm such as marching cubes and renders the result in OpenGL on the device (see the sketch after these options).
Option 2: Simply send a live stream of compressed video from Slicer to the device; the device just decompresses the video stream and displays it. (I already have a working implementation of the marching cubes algorithm.)
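For context, the extraction stage of Option 1 would look roughly like this in plain VTK (a sketch only; the file name and isovalue are placeholders, and on the device the voxels would come from the OpenIGTLink stream rather than a file):

```python
import vtk

# Placeholder input: in practice the voxels would arrive via OpenIGTLink.
reader = vtk.vtkNrrdReader()
reader.SetFileName("volume.nrrd")  # hypothetical path

# Extract an isosurface with marching cubes.
mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(reader.GetOutputPort())
mc.SetValue(0, 100)  # arbitrary isovalue for illustration
mc.Update()

mesh = mc.GetOutput()  # vtkPolyData, ready to hand to an OpenGL renderer
print(mesh.GetNumberOfPoints(), "points,", mesh.GetNumberOfCells(), "cells")
```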
Option 1 has three stages and may not be very performant.
Option 2 looks OK, except that the client would just be displaying the video.
If the rendering capabilities of the Magic Leap are limited (slow CPU/GPU, no sophisticated visualization toolkit such as VTK), then I would render remotely (in Slicer) and just stream the rendered 2D images. This is also beneficial because Magic Leap has been on the brink of bankruptcy for a while now, so it is better to minimize the time you spend getting to know it and developing code that runs on it.
If you render using Slicer then there is no need to use marching cubes, because you can get much higher-quality images, with more detail, color, and depth perception, at orders of magnitude faster rendering speed using the Volume Rendering module (raycasting).
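For example, GPU raycasting can be turned on from the Python console with the Volume Rendering module logic (a short sketch; assumes a scalar volume is already loaded):

```python
import slicer

# Take any loaded scalar volume and create default volume rendering nodes for it.
volumeNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLScalarVolumeNode")
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.SetVisibility(True)  # volume appears raycast-rendered in the 3D view
```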
If you send rendered images then you don't send individual slices, just the fully rendered left and right images, probably combined into a single image in a side-by-side or over-under configuration.
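A rough sketch of capturing the rendered 3D view and publishing it over OpenIGTLink (assumptions: the SlicerOpenIGTLink extension is installed, the node name "RenderedFrame" is made up, and compositing the left/right pair into one side-by-side image is left out):

```python
import vtk
import slicer

# Capture the current frame of the first 3D view.
view = slicer.app.layoutManager().threeDWidget(0).threeDView()
w2i = vtk.vtkWindowToImageFilter()
w2i.SetInput(view.renderWindow())
w2i.Update()

# Copy the frame into a vector volume node so it can be sent as an IMAGE message.
frame = vtk.vtkImageData()
frame.DeepCopy(w2i.GetOutput())
frameNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLVectorVolumeNode", "RenderedFrame")
frameNode.SetAndObserveImageData(frame)

# Serve it to connected clients; every Modified() pushes the latest frame.
connector = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connector.SetTypeServer(18944)
connector.Start()
connector.RegisterOutgoingMRMLNode(frameNode)
frameNode.Modified()
```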