Exporting models for surgical navigation

Operating system: Windows
Slicer version: 4.6.2
Expected behavior:
First and foremost, thank you for the outstanding work with 3DSlicer, it has been very useful in my research.

I am currently taking DICOM data from CT scans and converting it into STL to work with implants, which (as you know) usually come in STL format. However, I am having trouble figuring out the most efficient way to convert the "planned" surgery model back into DICOM. I have read some of the posts on the wiki/blogs that you have published, but I am still a bit confused about how to define those optimal volumes. The goal of converting back to DICOM is to display the plan on a surgical planning/navigation device. Is there any easy way to do this?

In a 2013 post (http://slicer-users-archive.65878.n3.nabble.com/STL-model-to-dicom-td4025720.html) you referred to: https://www.slicer.org/wiki/Documentation/4.2/Modules/ModelToLabelMap

Any other leads I could use?

Many thanks,
Best wishes,
Actual behavior: STL to DICOM conversion is not always reliable with the method I am currently using.

What navigation system do you use? How would you like your volumes to be exported: as RT structure sets, segmentation objects, or fake CT volumes?

Use the latest nightly version of Slicer and try to follow these instructions: https://www.slicer.org/wiki/Documentation/Nightly/Modules/DICOM#DICOM_export
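If you end up scripting the export instead of using the DICOM module's export dialog, something along these lines may help. This is only a minimal sketch for a recent nightly, mirroring what the export dialog does; the node name, patient/study names, and output folder are placeholders, and the exact API may differ between versions.

```python
import slicer
import DICOMScalarVolumePlugin

outputFolder = "C:/tmp/dicom-export"  # placeholder output folder

# Put the volume under a patient/study in the subject hierarchy so the exporter
# has the required DICOM context
shNode = slicer.vtkMRMLSubjectHierarchyNode.GetSubjectHierarchyNode(slicer.mrmlScene)
volumeNode = slicer.util.getNode("FakeCT")  # placeholder node name
patientItemId = shNode.CreateSubjectItem(shNode.GetSceneItemID(), "Test Patient")
studyItemId = shNode.CreateStudyItem(patientItemId, "Test Study")
volumeItemId = shNode.GetItemByDataNode(volumeNode)
shNode.SetItemParent(volumeItemId, studyItemId)

# Ask the scalar volume DICOM plugin what it can export, then write the series
exporter = DICOMScalarVolumePlugin.DICOMScalarVolumePluginClass()
exportables = exporter.examineForExport(volumeItemId)
for exportable in exportables:
    exportable.directory = outputFolder
exporter.export(exportables)
```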

Also see this tutorial for information about how to bring existing STL files in as segmentations (which can be converted to/from surface and labelmap representations):

https://www.slicer.org/wiki/Documentation/Nightly/Training#Segmentation_for_3D_printing
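As a rough illustration of that workflow in the Python console, the sketch below loads an implant STL, imports it into a segmentation, and exports it to a labelmap that uses the CT's geometry so it lines up voxel-for-voxel with the planning image. File paths and node names are placeholders, and in older Slicer versions the loader functions return a success flag rather than the node, so the calls may need small adjustments.

```python
import slicer

ctVolumeNode = slicer.util.loadVolume("C:/data/PlanningCT.nrrd")   # placeholder path
implantModelNode = slicer.util.loadModel("C:/data/Implant.stl")    # placeholder path

# Convert the surface model into a segmentation node
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
slicer.modules.segmentations.logic().ImportModelToSegmentationNode(
    implantModelNode, segmentationNode)

# Export the segmentation to a labelmap resampled onto the CT geometry
labelmapNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode")
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
    segmentationNode, labelmapNode, ctVolumeNode)
```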

Hi Andras,

I am using Medtronic's StealthStation 7. The goal here is to take the patient's CT scan (DICOM), convert it into STL (straightforward), and then use implants or work with the reconstructed CT to prepare for surgery. Once that is done, I want to re-upload the STL file as DICOM so I can navigate the surgery. Hence, maybe fake CT volumes are the way to go?
As of now, aside from the approaches I mentioned, I also tried creating a "black volume" of approximately 150-200 slices, loading the STL, running the model-to-labelmap module, and then creating a DICOM series from the result. This doesn't work great: when I load it into the StealthStation, the result appears as white borders on a black screen… Given the overarching goal of planning surgeries using both the original CT image and the STL files from implant/prosthetic manufacturers, and then exporting the result as DICOM, what do you reckon would be the best way to go?

Many thanks,
Best wishes,

Hi Steve,

I checked out the tutorial. It is a very powerful tool; however, the result is exported as an STL file for 3D printing (which is the conventional way of doing things and makes sense for 3D printing): https://www.youtube.com/watch?v=Uht6Fwtr9hE

The issue I have comes from when I want to re-import the results into a navigation system (in my case Medtronic’s StealthStation) which doesn’t support STL files for navigation. It requires the use of DICOM to be able to navigate the surgery.
Many thanks,
Cheers,

You could generate fake CTs and relatively easily create models from them on the StealthStation (using direct volume rendering or segmentation with simple thresholding). However, it would be just a workaround: you would still need to redo the trajectory planning and would be limited to what the StealthStation can do.
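To make the fake-CT idea concrete, here is a minimal sketch (assuming a recent Slicer version and that the implant labelmap has already been exported onto the CT geometry, as in the earlier snippet). Voxels inside the implant are overwritten with a high, easily thresholded intensity; the node names and the value 3000 are arbitrary placeholders.

```python
import slicer

ctVolumeNode = slicer.util.getNode("PlanningCT")            # placeholder node name
implantLabelmapNode = slicer.util.getNode("Implant-label")  # placeholder node name

ctArray = slicer.util.arrayFromVolume(ctVolumeNode)
labelArray = slicer.util.arrayFromVolume(implantLabelmapNode)

# Burn the implant into the CT as a bright structure, then notify Slicer of the change
ctArray[labelArray > 0] = 3000
slicer.util.arrayFromVolumeModified(ctVolumeNode)

# The modified volume can then be exported as a DICOM series (see the export
# snippet above) and loaded on the navigation system.
```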

For exploring new surgical navigation techniques, you may use Slicer in the operating room. Slicer can receive the registration and real-time tool positions from the StealthStation and display them in real time. See this demo for an example: https://youtu.be/UHmv5u-sB5g (left monitor is Slicer, right monitor is the StealthStation). There is no visible time delay, no need for additional hardware, and no need to repeat the registration (Slicer retrieves the patient registration automatically). Slicer can also retrieve the current planning image from the StealthStation, so you don't need to set up DICOM networking or run around with USB sticks if you acquire real-time images with an O-arm and want to use them in Slicer.
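For reference, on the Slicer side this kind of live connection is typically set up through OpenIGTLink, with a PlusServer (which has StealthLink support, see www.plustoolkit.org) streaming data from the StealthStation. The sketch below only shows the Slicer end and assumes the OpenIGTLinkIF module is available; the host name is a placeholder and 18944 is the default OpenIGTLink port.

```python
import slicer

# Connect to an already-running PlusServer that relays StealthStation data
connectorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connectorNode.SetTypeClient("plus-server-host", 18944)  # placeholder host, default IGTL port
connectorNode.Start()

# Incoming transforms (tracked tool poses) and images appear as MRML nodes
# in the scene once the connection is established.
```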

Overall, if you use the StealthStation and Slicer together, you can rely on a commercial navigation system (tracker, tools, patient registration, …) as usual, but in addition benefit from the many advanced features and the flexibility of Slicer - advanced registration and segmentation methods, image fusion with pre-operative or real-time intraoperative images (ultrasound, surface scans, optical spectroscopy, etc.), display of custom tools and implants, interfacing with robotic devices, and so on. See www.slicerigt.org and www.plustoolkit.org for more details, or check out our lab's YouTube channel at https://www.youtube.com/user/perklabresearch. These tools are all freely available and are used under IRB approval for several procedures.

Hi Andras,

Thank you for your detailed response. It was really helpful. I’ll try both approaches with a demo programmed surgery and see what works better given our pipeline of navigated surgical products and based on surgical needs. Very interesting work on augmented reality for musculoskeletal injections! Many thanks,