Save volume rendering as STL file

I agree. I say this mainly because the software is at quite a sophisticated level, while what I'm interested in is fairly trivial (not for me, but in the bigger picture). If I were able to build this myself, I would love to help!

Folks are working on OpenVR support in Slicer at the Project Week in London, Ontario this week:

http://wiki.imaging.robarts.ca/index.php/2017_Slicer_Western_Week/Virtual_Reality_and_Slicer

I made an account just to thank you all for this conversation. I've been messing around with Unity and Apple ARKit recently, and I was nearly tearing my hair out trying to figure out how to do this, or if it was even possible. I've been bouncing back and forth between InVesalius and Slicer with zero luck until I found this thread! So I'll end my wild goose chase and settle for my boring STL for now. I completely agree with OP that the ability to somehow "export" that rendering (even though it's not possible) would streamline game development and applications within VR and AR. Cheers and thanks again!

Note that augmented or virtual reality does not require STL. Slicer can already render beautiful volume-rendered scenes without segmentation in virtual reality headsets, in color, in real time, even in 4D, on any OpenVR-compatible headset (HTC Vive, any Windows Mixed Reality headset, Oculus Rift, etc.), with a single click. Virtual reality provides solutions for many use cases that previously could only be addressed by 3D printing. Of course, for some cases 3D printing is still needed, and volumetric printing is a really nice option there.
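For anyone who prefers scripting over clicking, a minimal sketch for Slicer's Python console that turns on volume rendering for an already-loaded volume (the node name 'CTChest' is a placeholder for your own volume; the SlicerVirtualReality extension's toolbar button then shows the same rendering in the headset):

    # Sketch: enable volume rendering for a loaded volume.
    # 'CTChest' is a hypothetical node name; replace with your volume's name.
    volumeNode = slicer.util.getNode('CTChest')
    vrLogic = slicer.modules.volumerendering.logic()
    displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
    displayNode.SetVisibility(True)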

Forgive me if I missed something here, but you're basically saying that Volume Rendering works by generating a point cloud based on the intensities of the pixels from the input image stack (CT, MRI, US, etc.). Is there a way to export this point cloud to another program (like MeshLab) to generate a mesh and subsequently an STL?

Also, the Segment Editor is a fantastic tool in that it allows the user to define thresholds to filter out undesired noise. But in my experience, there are certain instances where neither thresholding nor the edge detection / fill-between-slices methods capture all the necessary data, resulting in a segmented model that has to be further processed to achieve a manifold model. Personally, I think the Volume Rendering module renders the input image stack perfectly, and if there were a way to generate an STL from that, in theory it would have almost perfect contours matching the sample's anatomy.

Yes, of course! The "point cloud" is the 3D image volume itself (usually saved as a .nrrd file).

MeshLab and other mesh editors operate on meshes; they cannot load or display volumetric image data.

STL is for storing a surface mesh that you generate by segmenting a volume. Surface meshes can be printed using cheap plastic printers.
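For example, once segmentation has produced a model node, it can be saved as STL from the Python console; this is a sketch where 'modelNode' and the output path are placeholders:

    # Sketch: save an existing model node as STL; path is a placeholder.
    slicer.util.saveNode(modelNode, '/tmp/model.stl')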

If you have access to a color voxel printer, then you can 3D-print volume rendering directly, using images created by the SlicerFab extension's BitmapGenerator module.

Volume rendering does not generate a point cloud. It visualizes surfaces based on the intensity range and opacity values the user specifies in the transfer function. For STL and the like, you need to use the Segment Editor and extract the surface you want to keep. If a single threshold doesn't define the structure you want to generate the model for, you will need to use the additional tools to make a cleaner segmentation.
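To make that concrete, here is a hedged end-to-end sketch for Slicer's Python console, following the patterns from the script repository: threshold a volume in the Segment Editor, then export the resulting surface to STL. The node name 'CTChest', the threshold values, and the output folder are placeholder assumptions.

    # Sketch: threshold-based segmentation and STL export.
    # 'CTChest' is a hypothetical node name; replace with your volume's name.
    masterVolumeNode = slicer.util.getNode('CTChest')

    # Create a segmentation with one empty segment to hold the result
    segmentationNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLSegmentationNode')
    segmentationNode.CreateDefaultDisplayNodes()
    segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(masterVolumeNode)
    segmentationNode.GetSegmentation().AddEmptySegment('bone')

    # Set up the Segment Editor for scripted use
    segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
    segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
    segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLSegmentEditorNode')
    segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
    segmentEditorWidget.setSegmentationNode(segmentationNode)
    segmentEditorWidget.setMasterVolumeNode(masterVolumeNode)

    # Apply a simple threshold (values are illustrative, e.g. bone in CT)
    segmentEditorWidget.setActiveEffectByName('Threshold')
    effect = segmentEditorWidget.activeEffect()
    effect.setParameter('MinimumThreshold', '250')
    effect.setParameter('MaximumThreshold', '3000')
    effect.self().onApply()

    # Export the segment's closed surface to STL files in a folder
    slicer.vtkSlicerSegmentationsModuleLogic.ExportSegmentsClosedSurfaceRepresentationToFiles(
        '/tmp/stl-export', segmentationNode)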

Hi,

I just came across this discussion about viewing volume rendering in virtual reality.
For now, I segment the heart and then view that in virtual reality.

  1. Can I do the same by simply converting the 3D dataset into a volume rendering and using it in virtual reality?
  2. Can I load multiple phases, convert them into a volume-rendered model, and see the cardiac motion in real time in virtual reality? If yes, how?
  3. I asked this in another post as well. When I showed the virtual models to a surgeon, he had different requests. They are not that interested in seeing the inside of the heart but rather in fixing it. For that, as I asked earlier: creating accurate 3D patches for fixing holes and arterioplasties.

I know these are a lot of questions, but I would appreciate thoughts and help.

Thanks,
Sarv

There is nothing to convert; just go to the Volume Rendering module, show the volume you want, choose a fitting preset, and use the Shift slider to adjust the transfer function.
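The scripted equivalent, as a sketch: it assumes a volume node named 'CardiacCT' and one of the built-in cardiac CT presets ('CT-Cardiac3' here); the Shift slider itself is an interactive widget feature, so the preset is applied as-is.

    # Sketch: apply a volume rendering preset programmatically.
    volumeNode = slicer.util.getNode('CardiacCT')  # hypothetical node name
    vrLogic = slicer.modules.volumerendering.logic()
    displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
    # Copy a built-in preset's transfer functions into the volume property node
    displayNode.GetVolumePropertyNode().Copy(vrLogic.GetPresetByName('CT-Cardiac3'))
    displayNode.SetVisibility(True)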

You can show the proxy volume in volume rendering as I described above. When you’re playing the sequence, the volume will “animate”.
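For example, assuming the 4D dataset was loaded with a sequence browser node (the node name below is a placeholder), playback can also be started from the Python console:

    # Sketch: start playback of a 4D sequence so the volume rendering animates.
    browserNode = slicer.util.getNode('SequenceBrowser')  # hypothetical node name
    browserNode.SetPlaybackRateFps(5.0)  # illustrative playback speed
    browserNode.SetPlaybackActive(True)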

Please elaborate.

For now, are you just using surface rendering to build the 3D model to see in VR?

What kind of dataset do you have to convert into volume rendering?

The cardiac motion image is just one slice, right? I'm not sure how you can render 3D from that.

Hi,
I did try displaying a volume-rendered CT in virtual reality, but I was not able to do so for multiple phases. I had to shelve the project, as I was unable to segment multiple phases of the heart after multiple tries.

Sarv

A post was split to a new topic: Show volume rendered CT in VR

Hi,
Sorry for the silly question, but I am totally new to Slicer and have managed to build our own neuronavigation system. Thanks for such an amazing software and community.

I also had the same question about exporting the volume rendering as a model, and came across this amazing thread.
This thread does discuss exporting the transfer functions of volume rendering, but doesn't mention how to do that.
Can anyone please help me with that?

Thanks
Nayan

Volume rendering is a visualization technique, not a segmentation method; it does not create a 3D surface that can be exported as a model. To do that, you need to use the Segment Editor and the Segmentations module. See this image that explains the different data representations in Slicer and how they relate to each other.
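As an illustration of that pipeline, a hedged sketch (following the script repository pattern, and assuming an existing segmentationNode such as the one from the threshold example above) for converting segments into exportable model nodes:

    # Sketch: convert all segments of a segmentation into model nodes,
    # which can then be saved as STL/OBJ. Assumes segmentationNode exists.
    shNode = slicer.mrmlScene.GetSubjectHierarchyNode()
    exportFolderItemId = shNode.CreateFolderItem(shNode.GetSceneItemId(), 'Exported models')
    slicer.modules.segmentations.logic().ExportAllSegmentsToModels(segmentationNode, exportFolderItemId)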

RadiAnt can convert a volume rendering into an STL file.

I am not familiar with RadiAnt. A Google search shows it as a DICOM viewer with limited capabilities (https://www.radiantviewer.com/).

They didn't list segmentation as a feature, but perhaps it is available as an add-on of some sort. Yes, you can save a 3D model as STL in Slicer too (after segmenting it). However, a 3D model is not created from volume rendering.

RadiAnt directly converts a volume rendering into an STL file, so I think there must be a way in 3D Slicer.

The feature that people are looking for in this topic is the ability to automatically generate a colored surface model that looks like volume rendering. It is theoretically impossible to achieve this in general, and RadiAnt does not do this either. I guess RadiAnt exports the usual noisy, monochrome isosurface to an STL file, which is nowhere near what you can visualize with volume rendering. Maybe you could post a few links to example models, or at least screenshots of those models.
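For reference, single-isovalue surface extraction of that kind can be reproduced with a few lines of VTK; this is a sketch (the file names and the isovalue are placeholders), not a claim about RadiAnt's actual implementation:

    import vtk

    # Sketch: extract a single isosurface from a volume and write it to STL.
    # File names and the isovalue (e.g. ~300 HU for bone in CT) are placeholders.
    reader = vtk.vtkNrrdReader()
    reader.SetFileName('volume.nrrd')

    surface = vtk.vtkFlyingEdges3D()
    surface.SetInputConnection(reader.GetOutputPort())
    surface.SetValue(0, 300)

    writer = vtk.vtkSTLWriter()
    writer.SetInputConnection(surface.GetOutputPort())
    writer.SetFileName('isosurface.stl')
    writer.Write()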

If you use a surface mesh file format that can store full-color meshes (OBJ, PLY, …), then you could assemble nicer-looking surface models from many transparent layers, but they would not be usable for 3D printing; and rendering such semi-transparent models would probably be much more complicated and less efficient than direct volume rendering using raycasting. So, there is no incentive to implement generation of such models.

A post was split to a new topic: Dropping point on a rendered model