Just wanted to post my workflow for a recent case I prepared for one of my partners. It’s a set of conjoined twins. The AI applications obviously weren’t much help, but I was able to segment the skeleton, the two hearts, the liver, and some of the vasculature. The segmentation was mostly created with the Flood Fill and Local Threshold tools; I didn’t use any of the modeling extensions since I’m less familiar with those. I exported to STL and printed on an SLA printer. Other than the “repair model” option in Chitubox, I didn’t have to do much to clean up the model once exported. Any comments or recommendations about the model creation are welcome, since I’m always looking to learn more efficient ways to do this.
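For anyone curious what tools like Local Threshold are doing under the hood, the core idea is intensity thresholding: keep voxels whose CT values (in Hounsfield units) fall inside a range. Here is a minimal numpy sketch of that idea on a toy volume; the function name and HU cutoffs are illustrative, not Slicer’s actual implementation (Slicer’s tools add interactive seeding and connectivity on top of this).

```python
import numpy as np

def threshold_segment(volume, lower, upper):
    """Return a binary mask of voxels whose intensity lies in [lower, upper]."""
    return (volume >= lower) & (volume <= upper)

# Toy 2x2x2 CT-like volume in Hounsfield units.
# Cortical bone is roughly > 300 HU; air is about -1000 HU.
ct = np.array([[[-1000, 40], [300, 1200]],
               [[60, 700], [-500, 900]]])

bone_mask = threshold_segment(ct, 300, 3000)
print(bone_mask.sum())  # prints 4 (the four voxels in the bone range)
```

The mask is what then gets converted to a closed surface and exported as STL.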
I would recommend trying the Colorize Volume and Lights modules in the Sandbox extension. Blender, Unreal, Unity, etc. are great for artistic visualization, but these modules allow rich and scientifically accurate visualization of segmented images.
Will definitely give those a try. The lighting options in Unreal are fairly straightforward, and its VR interface seems a bit better. But if there are lighting options and the ability to apply textures in Slicer, that might help with visualization. In this case I just did it for fun.
Is there any benefit to creating models vs just exporting to STL if I’m only planning to print them? I have been exporting straight from the Segment Editor thus far. Right now I’m not too focused on accuracy, since I mainly just print models for the pectus excavatum kids after their surgery. However, I’m looking to create some custom neonatal models to use in laparoscopy/robotic simulation, so accuracy may matter more for that. The current plan is to export the model, then import it into Fusion 360 to add things like hinges, moving parts, etc.
The fundamental limitation of exporting a mesh to Blender/Unreal/Unity is that you don’t use all the density information that is in the CT. You can do much better! You can use the Colorize Volume module to create an image that preserves all details of the CT but modulates the color and overall opacity of structures based on the segmentation. This is just a much richer and more faithful representation of the original image data than any purely surface-mesh-based visualization approach could ever achieve.
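To make the contrast with a surface mesh concrete, the essence of this approach is a per-voxel RGBA volume: the segment label picks the hue, while the original CT intensity drives brightness and opacity, so fine density detail survives. Below is a small numpy sketch of that idea; the function, the window values, and the color table are all illustrative assumptions, not the Colorize Volume module’s actual code.

```python
import numpy as np

def colorize(ct, labels, colors, window=(-200, 500)):
    """Build an RGBA volume: hue from the segment label, brightness and
    alpha from the normalized CT intensity (so density detail is kept)."""
    lo, hi = window
    intensity = np.clip((ct - lo) / (hi - lo), 0.0, 1.0)  # 0..1 brightness
    rgba = np.zeros(ct.shape + (4,))
    for label, color in colors.items():
        mask = labels == label
        rgba[mask, :3] = np.asarray(color) * intensity[mask, None]
        rgba[mask, 3] = intensity[mask]  # denser tissue renders more opaque
    return rgba  # unlabeled voxels stay fully transparent

# Toy 2x2 slice: two tissue labels plus unlabeled background (label 0).
ct = np.array([[100.0, 400.0], [-200.0, 250.0]])
labels = np.array([[1, 1], [0, 2]])
rgba = colorize(ct, labels, {1: (1, 0, 0), 2: (0, 0, 1)})  # 1=red, 2=blue
```

A binary STL mesh throws the `intensity` term away entirely, which is exactly the information this representation keeps.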
Thanks for the response. Will definitely give this a shot! Anything that increases the fidelity of the model is going to help me out big time. Appreciate it.