Hi, I am trying to make a drill hole through a segmented CT scan. I was wondering if there is a way to do this in Slicer and get the coordinates of the centre of the entry and exit points?
You can draw a trajectory line using the Markups module. If you want to simulate drilling (drill a hole in the image) then you can use the Segment Editor's Draw Tube effect followed by the Mask Volume effect.
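For example, once the line is drawn you can read the world (RAS) coordinates of its two endpoints (the entry and exit points) from the Python console. A minimal sketch, assuming the line node has the default name "L":
lineNode = slicer.util.getNode("L")  # assumption: default name of the first line markup
points = slicer.util.arrayFromMarkupsControlPoints(lineNode, world=True)  # 2x3 array, one row per control point
entryPoint, exitPoint = points[0], points[1]
print("Entry (RAS):", entryPoint, "Exit (RAS):", exitPoint)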
Thanks Andras, where do I find the Draw Tube effect?
Draw Tube effect is provided by SegmentEditorExtraEffects extension. If you install this extension then the effect will show up in Segment Editor.
Thanks Andras, I used the effect but it just makes a solid tube. Do I need to create a new segmentation to subtract it from?
Is there also a way to export my line as a coordinate system?
sorry for all the questions
What is your end goal? Do you want to display the trajectory as a surface mesh or using volume rendering; show it in slice views, 3D views, virtual reality, or augmented reality; 3D print it; export it to another viewer or to a commercial surgical navigation system; or do surgical navigation in Slicer and explore what is around the trajectory? All of this is doable in Slicer.
Please describe the driving clinical application, too (pedicle screw placement, spinal injections, craniostomy for subdural hematoma, breast/liver/prostate tumour, valve replacement, etc.), because we have lots of experience to share about planning and guiding specific procedures in Slicer.
My end goal is to use the trajectory for navigation in Slicer and also to produce surgical guides with the same trajectory in Blender, although I would like to be able to do this in Slicer if possible. This is for placing a screw in the elbow.
I need to see the trajectory within the slices so I can check that it does not exit the bone when drilling for the screw.
thanks for offering to help
I would recommend this workflow:
If you don’t want to use a 3D-printed guide then you can set up optical or electromagnetic tracking as described in SlicerIGT tutorials.
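For reference, once a PlusServer (or any other OpenIGTLink server) is streaming the tracker data, connecting Slicer to it takes only a few lines. A minimal sketch in the Slicer Python console, with the host and port as assumptions:
# Connect to an OpenIGTLink server (e.g. a PlusServer relaying an optical/EM tracker)
connectorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connectorNode.SetTypeClient("localhost", 18944)  # assumption: PlusServer defaults
connectorNode.Start()
# Incoming tool poses then show up as transform nodes in the scene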
I would also recommend considering ultrasound for real-time imaging and guidance. See for example this demo:
I was looking for assistance on this same topic, though I don't think the video you shared helps my case. We are looking for a trajectory for a pedicle screw. We are able to have the start and end points show up on the model after going through Blender and then to the AR headset, but we can't figure out how to get the trajectory line into the model. Thank you for your help.
There has been a lot of work in Slicer on pedicle screw insertion planning and guidance (augmented reality guidance using HoloLens, ultrasound guidance, etc.).
There is a complete open-source implementation for HoloLens/Slicer; see the GitHub repository here (and the related paper):
No need to mess with Blender: the scene is generated in Slicer and sent to the HoloLens directly from there in real time. Rendering in this repository is done via Unity, but recent Slicer versions can render directly to any AR/VR headset (via the OpenXR API). So there is no need to use multiple applications and programming languages; everything can be implemented in a single Slicer application, in Python or C++.
What AR headset are you planning to use now that the HoloLens is dead? Video passthrough in Meta Quest 3?
Is it possible to set up landmarks so that the created image can be superimposed onto the real patient?
Yes, sure. You can segment the skin surface, mark any anatomical landmarks, and draw a trajectory line (for example, using markups or segmentation). You can then put all these under the same transform, grab them with your hand, and manually align them with the patient.
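A minimal sketch of the grouping step (the three node variables are placeholders for your own nodes):
# Put the skin model, landmarks, and trajectory line under one transform
# so they can be grabbed and aligned with the patient as a single unit
alignmentTransform = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "PatientAlignment")
for node in [skinModelNode, landmarksNode, trajectoryLineNode]:  # placeholders
    node.SetAndObserveTransformNodeID(alignmentTransform.GetID())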
Is there any tutorial on how to draw a trajectory line automatically?
Here's an example of creating a line markup:
import numpy as np
# Create a line markup node and set its two control points (RAS coordinates)
lineNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsLineNode")
linePoints = np.array([[-50, -50, -50], [0, 0, 0]])  # one row per control point
slicer.util.updateMarkupsControlPointsFromArray(lineNode, linePoints)
Is there any tutorial that explains how to successfully connect Slicer with Unity to interact with the HoloLens after inserting this code? Specifically, are there any additional steps or requirements to ensure smooth communication between Slicer, Unity, and the HoloLens?
You can use OpenIGTLink for real-time data transfer between Slicer and Unity (see the example above). It can work well for game-like experiences, such as surgical simulations.
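For example, a common pattern is to run Slicer as the OpenIGTLink server and register the nodes that should be streamed to Unity. A minimal sketch, with the port number and node name as assumptions:
# Slicer acts as the OpenIGTLink server; the Unity client connects to this port
connectorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connectorNode.SetTypeServer(18944)  # assumption: must match the port configured on the Unity side
connectorNode.Start()
# A registered node is re-sent automatically whenever it is modified in Slicer
trajectoryTransform = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "TrajectoryToReference")
connectorNode.RegisterOutgoingMRMLNode(trajectoryTransform)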
However, for actual clinical applications and image-guided surgical interventions, building on a medical image computing platform (Slicer) should work much better than game engines (Unity, Unreal, etc.). You can create the surgical plan, review the images in slice and 3D views, do all the registration you need (including correcting non-linear distortions), do real-time segmentation, and do everything else you need in Slicer; then, with a single click, you can display that 3D scene in the HoloLens. The scene remains interactive both in the HoloLens and in Slicer, so you can keep assisting the user who is wearing the HoloLens. There is no need to synchronize information between multiple applications; everything works in a single environment, a single process, and a single programming language (Python).
I am having a lot of difficulty using another model besides the pre-programmed spine in the repository. Is there any help available regarding these configurations?
I am having trouble setting up the connection with the clipping plane and pre-programming it for another desired model instead of the default one. Could you provide guidance on how to configure this?
Which repository, which pre-programmed model, what kind of difficulty are you experiencing, and where (Slicer or Unity)? Please describe in detail what steps you did, what you expected to happen, and what happened instead.
I am using the BSEL-UC3M/HoloLens2and3DSlicer-PedicleScrewPlacementPlanning repository on GitHub (for the IJCARS paper "Real-Time open-source integration between Microsoft HoloLens 2 and 3D Slicer"), specifically the pre-programmed spine model. My difficulty lies in trying to apply the connection and clipping plane to a different anatomical model while using the HoloLens. I managed to add a new model, specifically Segmentation 1, connect it to Slicer, and apply it to the HoloLens, but I am unable to use the functions, such as the clipping plane, with the new model.
As shown in the attached images, I tried deleting the Spine model, but every time I press Play, it reappears automatically in the Models section, interfering with my attempt to work with the new Segmentation 1 model. Even though I apply the Clipping_mat and set up everything as expected, the Spine model keeps returning, and I can’t use the clipping plane or other intended functions with the new model.
Could you please provide guidance on how to stop the Spine model from automatically reappearing and how to properly configure the system so that I can work with other models and use the functionalities like the clipping plane?
Others (maybe @AliciaPose) might be able to comment on how to use Unity.
My advice would be to render to the HoloLens directly from Slicer, using Slicer’s VirtualReality extension. It just simplifies everything and you can do so much more.
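For reference, the headset rendering can be switched on from the Python console as well. A minimal sketch, assuming the SlicerVirtualReality extension is installed and an OpenXR-capable headset is connected:
# Start mirroring the content of the current 3D view in the headset
vrLogic = slicer.modules.virtualreality.logic()
vrLogic.SetVirtualRealityActive(True)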