Save volume rendering as STL file

Okay, thank you so much Andras!
I really appreciate your help.

@lassoan Thanks for this thread and your explanations. I have looked into the results of segmentation procedures but I would like to follow up on this topic. I am currently developing mostly in Unity3d for VR & AR platforms, inter alia for surgeons using detailed soft tissue models for decision-making. If I could bridge the output of the volume rendering results in 3D Slicer to Unity3d, this could prove to be a great feature, but as OP mentioned, I have the same export problem since I’m really interested in getting an .FBX, for example. Segmenting every part of an organ, for example, is tedious work and 3D Slicer does not provide a lot of export paths I could work around. Do you know of any other pointers I could look into? The volume rendering does such a perfect job and it would be a shame not to leverage those results into a widely usable format. Many thanks!

FYI we have a colleague visiting BWH from Basel who has a very nice volume renderer that runs at frame rates fast enough for virtual reality displays. We are able to share some data from Slicer to his system, but the connection is only partial (we can share volume data but the transfer functions are independent). Philippe’s system is not open source but he’s been interested in academic collaboration and may be able to share executables. We’ve definitely discussed making his code into a Unity plugin.

Of course if there’s a way to leverage the VTK or other open source volume rendering directly in a VR headset that would be easier to integrate with Slicer, but that’s still a work in progress.

https://www.unibas.ch/dam/jcr:1322adee-338c-4974-91e0-0ef95c061657/SpectoVive_1000x500.jpg

Volume rendering is equivalent to sculpting from colored semi-transparent material. Surface rendering is equivalent to painting on a semi-transparent surface. There is no conversion between them. Volume rendering does not even use a surface mesh, while an FBX file only stores a colored surface mesh. No matter how you process your data, surface rendering will not be able to reproduce the same look as volume rendering (except in the special case when you only have 100% opaque objects with hard edges).

If you want to see volume rendering in Unity then you have to implement a volume renderer or find a volume renderer implementation for Unity (there are some, I don’t know how usable they are). If you don’t want to deal with volume rendering in Unity then you have to create a surface mesh by segmenting your image.

Thanks @lassoan, @pieper ! I had to do some reading but I get that it’s a display technique with no actual data transformation or vertex/shape operations/generations on Slicer’s part. I was confused because the module does such a good job at segmenting (if only visually) with one click whereas going through the segment editor is tedious and only semi-automatic. So obviously my brain wanted to take the easy road :slight_smile:

I guess the alternative then, like Andras wrote, is to work on a pipeline which exports the segments as .stl and imports them into Unity3d. Not a good solution, however. Not being passive-aggressive, but it’s ironic that Slicer implements stereoscopic viewing but no bridge to AR/VR platforms :slight_smile:
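For reference, a minimal sketch of that export step in Slicer’s Python console (assuming a segmentation node named "Segmentation" already exists in the scene; the output folder is a placeholder, and the exact export function signature may vary between Slicer versions):

```python
import slicer

# Assumed: a segmentation named "Segmentation" is already in the scene.
segmentationNode = slicer.util.getNode("Segmentation")

# Export every segment as a separate STL file into the chosen folder
# (these files can then be imported into Unity3d).
outputFolder = "/path/to/export"  # placeholder path
slicer.vtkSlicerSegmentationsModuleLogic.ExportSegmentsClosedSurfaceRepresentationToFiles(
    outputFolder, segmentationNode)
```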

@pieper many thanks for the link to SpectoVive. I don’t know if you have worked with SteamVR or an HTC Vive, but this application, with identical functionality, is a mini-game in “The Lab”, produced by Valve and shipped with the HTC Vive to showcase interactions. So I’m now wondering whether the colleague you mentioned maybe had something to do with it. But please see for yourself, here’s a link at the correct time in the gameplay video (link).

Finally, I will leave a few links here in case other people stumble on this thread in the future.

Please keep in mind that Slicer is an open-source platform with an enthusiastic community, and virtually no explicit funding for development any more. So contributions are welcome!

I agree. I say this mainly because the software is at quite a sophisticated level, while what I’m interested in is fairly trivial, not for me but in the bigger picture. Were I able to build this myself, I would love to help!

Folks are working on OpenVR support in Slicer at the Project Week in London, Ontario this week:

http://wiki.imaging.robarts.ca/index.php/2017_Slicer_Western_Week/Virtual_Reality_and_Slicer

I made an account just to thank you all for this conversation. I’ve been messing around with Unity and Apple ARKit recently, and I was nearly tearing my hair out trying to figure out how to do this, or whether it was even possible. I’ve been bouncing back and forth between InVesalius and Slicer with zero luck until I found this thread! So I’ll end my wild goose chase and settle for my boring STL for now. I completely agree with OP that the ability to somehow “export” that rendering (even though it’s not possible) would streamline game development and applications within VR and AR. Cheers and thanks again!

Note that augmented or virtual reality does not require STL. Slicer can already render beautiful volume-rendered scenes without segmentation in virtual reality headsets, in color, in real time, even in 4D, on any OpenVR-compatible headset (HTC Vive, any Windows Mixed Reality headset, Oculus Rift, etc.), with a single click. Virtual reality provides solutions for many use cases that previously could only be addressed by 3D printing. Of course, for some cases 3D printing is still needed, and volumetric printing is a really nice option there.
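For reference, with the SlicerVirtualReality extension installed, the headset view can also be started from the Python console. A minimal sketch, assuming the extension and an OpenVR-compatible headset are set up (the logic methods below may differ between extension versions):

```python
import slicer

# Assumes the SlicerVirtualReality extension is installed and a headset is connected via OpenVR.
vrLogic = slicer.modules.virtualreality.logic()
vrLogic.SetVirtualRealityConnected(True)  # connect to the headset
vrLogic.SetVirtualRealityActive(True)     # mirror the current 3D view (including volume rendering) into VR
```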

Forgive me if I missed something here, but you’re basically saying that volume rendering works by generating a point cloud based on the intensities of the pixels from the input image stack (CT, MRI, US, etc.). Is there a way to export this point cloud to another program (like MeshLab) to generate a mesh and subsequently an STL?

Also, the Segment Editor is a fantastic tool in that it allows the user to define thresholds to filter out undesired noise. But in my experience, there are certain instances when both the thresholding and the edge detection/fill-between-slices methods don’t capture all the necessary data, resulting in a segmented model that has to be further processed to achieve a manifold model. Personally, I think that the Volume Rendering module converts the input image stack into a rendered cloud perfectly, and if there were a way to generate an STL from that, in theory it would have almost perfect contours that match the sample’s anatomy.

Yes, of course! The “point cloud” is simply the 3D image volume itself (usually saved in a .nrrd file).
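A quick way to convince yourself of this in Slicer’s Python console (the node name is a placeholder): the data behind volume rendering is a regular voxel grid, not a list of points.

```python
import slicer

# Assumed: the image volume (e.g. loaded from a .nrrd file) is already in the scene.
volumeNode = slicer.util.getNode("MyVolume")  # placeholder node name

# The voxels form a regular 3D grid, not a point cloud.
voxels = slicer.util.arrayFromVolume(volumeNode)
print(voxels.shape, voxels.dtype)  # (slices, rows, columns) and the scalar type
```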

MeshLab and other mesh editors operate on meshes, they cannot load or display volumetric image data.

STL is for storing a surface mesh that you generate by segmenting a volume. Surface meshes can be printed using cheap plastic printers.

If you have access to a color voxel printer then you can 3D print volume rendering directly, using images created by SlicerFab extension’s BitmapGenerator module.

Volume rendering does not generate a point cloud. It visualizes surfaces based on the intensity range and opacity values that the user specifies in the transfer function. For STL and the like, you need to use the Segment Editor and extract the surface you want to keep. If a single threshold doesn’t define the structure you want to generate the model for, you will need to use the additional tools to make a cleaner segmentation.
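As a rough illustration, such a scripted Segment Editor workflow might look like the following in the Python console (node names and threshold values are placeholders; the extra cleanup tools mentioned above would normally come between thresholding and export, and method names can differ slightly between Slicer versions):

```python
import slicer

masterVolumeNode = slicer.util.getNode("MyVolume")  # placeholder volume name

# Create a segmentation with one empty segment.
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentId = segmentationNode.GetSegmentation().AddEmptySegment("bone")

# Set up a segment editor for scripting.
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setMasterVolumeNode(masterVolumeNode)  # newer versions: setSourceVolumeNode
segmentEditorWidget.setCurrentSegmentID(segmentId)

# Apply a simple intensity threshold (placeholder range).
segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "300")
effect.setParameter("MaximumThreshold", "3000")
effect.self().onApply()

# Build the closed-surface representation; the STL export call from the earlier sketch can then be used.
segmentationNode.CreateClosedSurfaceRepresentation()
```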

Hi,

I just came across this discussion about viewing volume rendering in virtual reality.
For now I segment the heart and then view that in virtual reality.

  1. Can I do the same by simply converting the 3D dataset into a volume rendering and using that in virtual reality?
  2. Can I load multiple phases, convert them into a volume-rendered model, and see the cardiac motion in real time in virtual reality? If yes, how?
  3. I asked this in another post as well. When I showed the virtual models to a surgeon, he had different requests. They are not that interested in seeing the inside of the heart but rather in fixing it. For that, as I asked earlier, creating "accurate 3D patches" for fixing holes, arterioplasties.

I know these are lot of questions, but would appreciate thoughts and help.

Thanks,
Sarv

There is nothing to convert: just go to the Volume Rendering module, show the volume you want, choose a fitting preset, and use the Shift slider to adjust the transfer function.
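If you prefer scripting, roughly the same steps can be done from the Python console (the volume name and preset name below are just examples; the Shift adjustment is easiest to do in the module GUI):

```python
import slicer

volumeNode = slicer.util.getNode("MyCardiacCT")  # placeholder volume name

# Create default volume rendering display nodes for this volume and show it.
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.SetVisibility(True)

# Apply a built-in preset (equivalent to choosing it in the Volume Rendering module).
preset = vrLogic.GetPresetByName("CT-Cardiac3")  # example preset name
displayNode.GetVolumePropertyNode().Copy(preset)
```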

You can show the proxy volume in volume rendering as I described above. When you’re playing the sequence, the volume will “animate”.
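A small sketch of driving that playback from Python, assuming the 4D dataset was loaded as a sequence and a sequence browser node exists in the scene (the node name is a placeholder):

```python
import slicer

# Assumed: a sequence browser node was created when the 4D dataset was loaded.
browserNode = slicer.util.getNode("SequenceBrowser")  # placeholder node name

# Volume rendering set up on the proxy volume keeps updating while the sequence plays.
browserNode.SetPlaybackRateFps(5.0)
browserNode.SetPlaybackActive(True)
```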

Please elaborate.

For now, are you just using surface rendering to build the 3D model that you see in VR?

What kind of dataset do you have to convert into volume rendering?

The cardiac motion image is just one slice, right? I’m not sure how you can render 3D from that.

Hi,
I did try using a volume-rendered CT displayed in virtual reality, but I was not able to do so for multiple phases. I had to shelve the project, as I was unable to segment multiple phases of the heart after multiple tries.

Sarv

A post was split to a new topic: Show volume rendered CT in VR

Hi,
Sorry for the silly question, but I am totally new to Slicer and have managed to make our own neuronavigation system. Thanks for such an amazing software and community.

I was also having the same problem of exporting the volume rendering as a model and came across this amazing thread.
This thread does discuss exporting the transfer functions of volume rendering, but doesn’t mention how to do that.
Can anyone please help me with that?

Thanks
Nayan