Save volume rendering as STL file

Hello, I am a new user of 3D Slicer.

I was using the display preset feature in the Volume Rendering module, and I was wondering if there is a way to save what I was viewing as an .stl or other 3D-printable file.

For example, I was viewing a sample MRI using the CT-cardiac3 display preset.
When I tried to save that rendered volume as an .stl file, the option was unavailable.
I was only offered the .vp (volume property) and .txt formats.

Is there a way to accomplish what I desire in 3D Slicer?

This is an image of the 3D preset rendering that I would like to try converting into an .stl file.


Volume rendering is just a display technique - to get an STL file you need to segment the volume using the Segment Editor module.
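In case a concrete starting point helps, here is a minimal sketch of that workflow from Slicer's Python console. The volume node name, threshold values, and output folder are placeholders, and the Segment Editor scripting calls follow the Slicer script repository, so exact method names may differ between Slicer versions:

```python
import slicer

volumeNode = slicer.util.getNode("MyVolume")  # placeholder volume node name

# Create a segmentation bound to the volume's geometry
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(volumeNode)
segmentationNode.GetSegmentation().AddEmptySegment("bone")

# Set up a Segment Editor widget for scripted use
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setMasterVolumeNode(volumeNode)  # setSourceVolumeNode in newer versions

# Simple threshold-based segmentation (placeholder values)
segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "300")
effect.setParameter("MaximumThreshold", "3000")
effect.self().onApply()

# Export all segments as STL files
slicer.vtkSlicerSegmentationsModuleLogic.ExportSegmentsClosedSurfaceRepresentationToFiles(
    "/tmp/stl_export", segmentationNode)  # placeholder output folder
```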


Hi Andras,

Thank you for the response.

One thing I notice after segmentation is the lack of soft-tissue detail in the 3D model.
Is there a way to retain the tissue details in the segmented 3D model and save that as an .stl file?


Volume rendering displays a semi-transparent cloud, but for 3D printing you need hard boundaries that exactly define what is inside/outside. Finding exact boundaries is often a difficult task - that is why all these segmentation tools have been developed.

Typically the issue is in low-contrast regions. You may need to apply smoothing, make manual corrections, or use semi-automatic tools to obtain nice, smooth, and correct boundaries in these regions.
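Continuing the sketch above (with the same caveat that names may vary between Slicer versions), the Smoothing effect can be applied the same way before exporting; the method and kernel size here are placeholder choices:

```python
# Apply median smoothing to the current segment (placeholder parameters)
segmentEditorWidget.setActiveEffectByName("Smoothing")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("SmoothingMethod", "MEDIAN")
effect.setParameter("KernelSizeMm", "3")
effect.self().onApply()
```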


If I would like to keep the semi-transparent cloud for other purposes besides 3D printing, could that semi-transparent 3D display be saved as an .stl file?

The semi-transparent volumetric cloud is your image data, so you already have it. No processing is performed on the data; it is visualized directly using ray casting. Volume rendering visualization parameters are mostly defined by transfer functions (opacity, color, gradient), which are saved in the Slicer scene.
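To illustrate that the rendering really is just the raw volume plus transfer functions, here is a hedged sketch of applying a preset from the Python console; the volume node name is a placeholder and the logic calls follow the Slicer script repository:

```python
import slicer

vrLogic = slicer.modules.volumerendering.logic()
volumeNode = slicer.util.getNode("MyVolume")  # placeholder volume node name

# Create volume rendering display nodes and copy a preset's transfer functions
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.GetVolumePropertyNode().Copy(vrLogic.GetPresetByName("CT-Cardiac3"))
displayNode.SetVisibility(True)
# Saving the scene stores these transfer functions (the .vp file mentioned above)
```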

An STL file cannot store a volumetric cloud; it only stores a surface mesh, i.e. the hard boundary of your printed object.

What confuses most people is that volume rendering may give the illusion that there is a distinct surface computed from the data, but what you actually see is a volumetric cloud.
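To make that concrete, here is a tiny self-contained sketch (plain NumPy, all values hypothetical) of the front-to-back compositing that ray casting performs along a single ray. The apparent "surface" is just where the accumulated opacity becomes high; no mesh is ever computed:

```python
import numpy as np

samples = np.array([10.0, 40.0, 200.0, 180.0, 30.0])  # scalar values along one ray

def opacity_tf(s):
    # hypothetical opacity transfer function: denser tissue -> more opaque
    return float(np.clip((s - 50.0) / 300.0, 0.0, 1.0))

accumulated_color = 0.0
accumulated_alpha = 0.0
for s in samples:
    alpha = opacity_tf(s)
    color = s / 255.0  # grayscale color transfer function, for simplicity
    # front-to-back "over" blending
    accumulated_color += (1.0 - accumulated_alpha) * alpha * color
    accumulated_alpha += (1.0 - accumulated_alpha) * alpha

print(accumulated_color, accumulated_alpha)  # one pixel's color and opacity
```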


Thank you for your very insightful response.

I have one final question.
Is there a way to export the volume-rendered 3D model so that it can be viewed outside of 3D Slicer?
In other words, is there a way to save the 3D model (with the volumetric cloud) so that it can be viewed in other 3D programs (e.g. Unity3D)?

No 3D model is generated for volume rendering. The input for volume rendering is the raw volume data, so the only thing you can export is the visualization settings (transfer functions). You may find a volume renderer for Unity and other software, but in general, if you want to use your volumetric images in modeling software, then you need to segment them.
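For Unity specifically, once you have a segmentation you can export it in a mesh format that Unity imports directly, such as OBJ; a hedged one-liner following the earlier sketch (the output folder is a placeholder):

```python
# Export segments as OBJ instead of STL (segmentationNode from the earlier sketch)
slicer.vtkSlicerSegmentationsModuleLogic.ExportSegmentsClosedSurfaceRepresentationToFiles(
    "/tmp/obj_export", segmentationNode, None, "OBJ")
```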


Okay, thank you so much Andras!
I really appreciate your help.

@lassoan Thanks for this thread and your explanations. I have looked into the results of segmentation procedures, but I would like to return to this topic. I am currently developing mostly in Unity3D for VR and AR platforms, inter alia for surgeons using detailed soft-tissue models for decision-making. If I could bridge the output of the volumetric rendering results in 3D Slicer to Unity3D, this could prove to be a great feature, but as the OP mentioned, I have the same export problem, since I am really interested in getting an .FBX, for example. Segmenting every part of an organ, for example, is tedious work, and 3D Slicer does not provide many export paths I could work around. Do you know of any other pointers I could look into? The volumetric rendering does such a perfect job, and it would be a shame not to leverage those results into a widely usable format. Many thanks!

FYI, we have a colleague visiting BWH from Basel who has a very nice volume renderer that runs at frame rates fast enough for virtual-reality displays. We are able to share some data from Slicer to his system, but the connection is only partial (we can share volume data, but the transfer functions are independent). Philippe's system is not open source, but he has been interested in academic collaboration and may be able to share executables. We've definitely discussed making his code into a Unity plugin.

Of course, if there were a way to leverage VTK's or another open-source volume renderer directly in a VR headset, that would be easier to integrate with Slicer, but that is still a work in progress.

https://www.unibas.ch/dam/jcr:1322adee-338c-4974-91e0-0ef95c061657/SpectoVive_1000x500.jpg

Volume rendering is equivalent to sculpting from colored semi-transparent material. Surface rendering is equivalent to painting on a semi-transparent surface. There is no conversion between them. Volume rendering does not even use a surface mesh, while an FBX file only stores a colored surface mesh. No matter how you process your data, surface rendering will not be able to reproduce the same look as volume rendering (except in the special case when you only have 100% opaque objects with hard edges).

If you want to see volume rendering in Unity, then you have to implement a volume renderer or find a volume renderer implementation for Unity (there are some; I don't know how usable they are). If you don't want to deal with volume rendering in Unity, then you have to create a surface mesh by segmenting your image.

Thanks @lassoan, @pieper! I had to do some reading, but I get that it's a display technique with no actual data transformation or vertex/shape operations on Slicer's part. I was confused because the module does such a good job at segmenting (if only visually) with one click, whereas going through the Segment Editor is tedious and only semi-automatic. So obviously my brain wanted to take the easy road :)

I guess the alternative is then, like Andras wrote, to work on a pipeline which exports the segments as .stl and imports them into Unity3D. Not a great solution, however. Not to be passive-aggressive, but it's ironic that Slicer implements stereoscopic viewing but no bridge to AR/VR platforms :)

@pieper many thanks for the link to SpectoVive. I don't know if you have worked with SteamVR or an HTC Vive, but this application, with identical functionality, is a mini-game in a game called "The Lab", produced by Valve and shipped with the HTC Vive to showcase interactions. So I'm now wondering if the colleague you mentioned maybe had something to do with it. But please see for yourself; here's a link at the correct time in the gameplay video (link).

Finally, I will leave a few links here in case other people stumble on this thread in the future.

Please keep in mind that Slicer is an open-source platform with an enthusiastic community, and virtually no explicit funding for development any more. So contributions are welcome!

I agree. I say this mainly because the software is at quite a sophisticated level, while what I'm interested in is fairly trivial, not for me but in the greater picture. If I were able to build this myself, I would love to help!

Folks are working on OpenVR support in Slicer at the Project Week in London, Ontario this week:

http://wiki.imaging.robarts.ca/index.php/2017_Slicer_Western_Week/Virtual_Reality_and_Slicer

I made an account just to thank you all for this conversation. I've been messing around with Unity and Apple ARKit recently, and I was nearly tearing my hair out trying to figure out how to do this and whether it was even possible. I've been bouncing back and forth between InVesalius and Slicer with zero luck until I found this thread! So I'll end my wild goose chase and settle for my boring STL for now. I completely agree with the OP that the ability to somehow "export" that rendering (even though it's not possible) would streamline game development and applications within VR and AR. Cheers and thanks again!

Note that augmented or virtual reality does not require STL. Slicer can already render beautiful volume-rendered scenes without segmentation in virtual-reality headsets, in color, in real time, even in 4D, on any OpenVR-compatible headset (HTC Vive, any Windows Mixed Reality headset, Oculus Rift, etc.), with a single click. Virtual reality provides solutions for many use cases that previously could only be addressed by 3D printing. Of course, for some cases 3D printing is still needed, and volumetric printing is a really nice option.
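For reference, with the SlicerVirtualReality extension installed, the headset view can also be activated from Python; a minimal sketch based on the extension's documentation (method names may differ between extension versions):

```python
# Assumes the SlicerVirtualReality extension is installed
vrLogic = slicer.modules.virtualreality.logic()
vrLogic.SetVirtualRealityActive(True)
```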

Forgive me if I missed something here, but you're basically saying that volume rendering works by generating a point cloud based on the intensities of the pixels from the input image stack (CT, MRI, US, etc.). Is there a way to export this point cloud to another program (like MeshLab) to generate a mesh and subsequently an STL?

Also, the Segment Editor is a fantastic tool in that it allows the user to define thresholds to filter out undesired noise. But in my experience, there are certain instances when both the thresholding and the edge-detection/fill-between-slices methods don't capture all the necessary data, resulting in a segmented model that has to be further processed to achieve a manifold model. Personally, I think the Volume Rendering module converts the input image stack into a rendered cloud perfectly, and if there were a way to generate an STL from it, in theory it would have almost perfect contours matching the sample's anatomy.