Save volume rendering as STL file

Yes, of course! The point cloud is the 3D image volume file (usually saved as a .nrrd file).

MeshLab and other mesh editors operate on meshes; they cannot load or display volumetric image data.

STL is for storing a surface mesh that you generate by segmenting a volume. Surface meshes can be printed using cheap plastic printers.

If you have access to a color voxel printer then you can 3D print the volume rendering directly, using images created by the SlicerFab extension's BitmapGenerator module.

Volume rendering does not generate a point cloud. It visualizes surfaces based on the intensity range and opacity values the user specifies in the transfer function. For STL and the like, you need to use the Segment Editor and extract the surface you want to keep. If a single threshold doesn't define the structure you want to generate the model for, you will need to use additional tools to make a cleaner segmentation.
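As a minimal scripted sketch of that workflow (the volume node name "CTVolume" and the threshold values are placeholders you would need to adapt to your data):

```python
import slicer

volumeNode = slicer.util.getNode('CTVolume')  # assumed node name

# Create a segmentation with one empty segment
segmentationNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLSegmentationNode')
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(volumeNode)
segmentId = segmentationNode.GetSegmentation().AddEmptySegment('bone')

# Set up a Segment Editor widget for scripted use
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLSegmentEditorNode')
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(volumeNode)  # setMasterVolumeNode() in older Slicer versions
segmentEditorWidget.setCurrentSegmentID(segmentId)

# Apply a simple intensity threshold (example values only)
segmentEditorWidget.setActiveEffectByName('Threshold')
effect = segmentEditorWidget.activeEffect()
effect.setParameter('MinimumThreshold', '300')
effect.setParameter('MaximumThreshold', '3000')
effect.self().onApply()

# Write the segment's closed surface to STL files in the given folder
slicer.vtkSlicerSegmentationsModuleLogic.ExportSegmentsClosedSurfaceRepresentationToFiles(
    '/tmp/stl-export', segmentationNode)
```

If the threshold alone is too noisy, you would still clean up the segment with other Segment Editor effects before exporting.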

Hi,

I just came across this discussion about seeing volume rendering in virtual reality.
For now, I segment the heart and then view it in virtual reality.

  1. Can I do the same by simply converting the 3D dataset into a volume rendering and using it in virtual reality?
  2. Can I load multiple phases, convert them into a volume-rendered model, and see the cardiac motion in real time in virtual reality? If yes, how?
  3. I asked this in another post as well. When I showed the virtual models to a surgeon, he had different requests. They are not that interested in seeing the inside of the heart but rather in fixing it. For that, as I asked earlier, they want accurate 3D patches for fixing holes and for arterioplasties.

I know these are a lot of questions, but I would appreciate thoughts and help.

Thanks,
Sarv

There is nothing to convert: just go to the Volume Rendering module, show the volume you want, choose a fitting preset, and use the Shift slider to adjust the transfer function.
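If you prefer to script the same thing, here is a rough sketch using the Volume Rendering module logic (the volume node name and the preset name are placeholders; pick whatever fits your data):

```python
import slicer

volumeNode = slicer.util.getNode('CTVolume')  # assumed node name

# Create default volume rendering display nodes and show them
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.SetVisibility(True)

# Copy a built-in preset into the volume property (preset name is an example)
preset = vrLogic.GetPresetByName('CT-Cardiac3')
displayNode.GetVolumePropertyNode().Copy(preset)
```

Fine-tuning the transfer function (the Shift slider) is easiest to do interactively in the module GUI.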

You can show the proxy volume in volume rendering as I described above. When you’re playing the sequence, the volume will “animate”.
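As a hedged sketch, assuming the multi-phase (4D) data is already loaded as a sequence and the browser node is named "SequenceBrowser" (both names are assumptions), playback could also be started from Python:

```python
import slicer

browserNode = slicer.util.getNode('SequenceBrowser')  # assumed node name
proxyVolume = browserNode.GetProxyNode(browserNode.GetMasterSequenceNode())

# Enable volume rendering on the proxy volume, exactly as for a static volume
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(proxyVolume)
displayNode.SetVisibility(True)

# Start playback; the rendered volume updates with each phase
browserNode.SetPlaybackRateFps(5.0)
browserNode.SetPlaybackActive(True)
```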

Please elaborate.

For now, are you just using surface rendering to build the 3D model that you view in VR?

What kind of dataset do you have to convert into volume rendering?

The cardiac motion image is just one slice, right? I'm not sure how you can render 3D from that.

Hi,
I did try using volume-rendered CT displayed in virtual reality, but I was not able to do so for multiple phases. I had to shelve the project, as I was unable to segment multiple phases of the heart after multiple tries.

Sarv

A post was split to a new topic: Show volume rendered CT in VR

Hi,
Sorry for the silly question, but I am totally new to Slicer and have managed to build our own neuronavigation system. Thanks for such an amazing software and community.

I was also having the same issue of exporting the volume rendering as a model and came across this amazing thread.
This thread does discuss exporting the transfer functions of volume rendering, but doesn't mention how to do that.
Can anyone please help me with that?

Thanks
Nayan

Volume rendering is a visualization technique, not a segmentation method; it does not create a 3D surface that can be exported as a model. To do that, you need to use the Segment Editor and Segmentations modules. See this image that explains the different data representations in Slicer and how they relate to each other.
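For example, converting an existing segmentation to model nodes can be scripted roughly like this (the segmentation node name is an assumption):

```python
import slicer

segmentationNode = slicer.util.getNode('Segmentation')  # assumed node name

# Make sure the closed-surface (model-like) representation exists
segmentationNode.CreateClosedSurfaceRepresentation()

# Export every segment as a separate model node into a subject hierarchy folder
shNode = slicer.mrmlScene.GetSubjectHierarchyNode()
folderItemId = shNode.CreateFolderItem(shNode.GetSceneItemID(), 'Exported models')
slicer.modules.segmentations.logic().ExportAllSegmentsToModels(segmentationNode, folderItemId)
```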


Radiant can convert a volume rendering into an STL file.

I am not familiar with Radiant. A Google search shows it as a DICOM viewer with limited capabilities (https://www.radiantviewer.com/).

They didn't list segmentation as a feature, but perhaps it is available as an add-on of some sort. Yes, you can save a 3D model as STL in Slicer too (after segmenting it). However, a 3D model is not created from volume rendering.

Radiant directly converts volume rendering into an STL file, so I think there must be a way in 3D Slicer.

The feature that people are looking for in this topic is the ability to automatically generate a colored surface model that looks like volume rendering. It is theoretically impossible to achieve this in general, and RadiANT does not do this either. I guess RadiANT exports the usual noisy, monochrome isosurface to an STL file, which is nowhere near what you can visualize with volume rendering. Maybe you could post a few links to example models, or at least screenshots of those models.

If you use a surface mesh file format that can store full-color meshes (OBJ, PLY, …) then you could assemble nicer-looking surface models from many transparent layers, but they would not be usable for 3D printing; and rendering of such semi-transparent models would probably be much more complicated and less efficient than direct volume rendering using raycasting. So, there is no incentive to implement generation of such models.

A post was split to a new topic: Dropping point on a rendered model

I am working on something that involves dropping a markup point on a rendered model. From the previous discussion in this thread, I realized a volume has to be segmented to obtain a surface model. So, a question I have is: what algorithm is used in the Markups module when we drop a markup point on a rendered model? It seems to me the markup lands automatically on the surface.

I assume a way to obtain a surface could be: first obtain a sufficient number of markups (maybe in the background), then do a Delaunay triangulation. I wonder if this is a valid assumption, or whether I am missing something important.

You can drop points on both volume renderings and models. In both cases, the first intersection of the view ray and the object is used as the 3D position.


If we can drop a markup by obtaining the intersection of the view ray and the object, I suppose obtaining the surface of the rendered volume is also possible. Does this imply that the “object” has a certain boundary? I am a little confused by the discussion in the thread “stl vs rendered volume”, where you answered:

Volume rendering displays a semi-transparent cloud, but for 3D printing you need hard boundaries that exactly define what is inside/outside

I think I am missing some background regarding this, so I apologize if this was a dumb question.

The appearance of a “surface” point in a volume rendering depends on what is behind it, inside the volume. Therefore, the position of the point that the volume raycaster picks depends on the view ray orientation. So, you can extract a nice colored surface patch from a single orientation.

However, if you then rotate the volume a little bit and extract another surface patch then you cannot assemble those two surface patches into a single mesh. The surface patches will intersect and/or not touch each other.

Maybe another example would help understanding the difference between surface rendering and volume rendering. Here is a jellyfish:

If you acquire an RGB volume of this jellyfish then you can generate an image exactly like this using volume rendering, from any angle.

However, using surface rendering, what you could display would be equivalent to a solid plastic model of the jellyfish with some color pattern painted on the surface. You could paint it so that from one side it looks exactly like the real, semi-transparent, colorful jellyfish, but there is no pattern that you can paint on the surface that would reproduce its appearance from all viewing angles.

Model from Radiant; the model is from Radiant's volume rendering.



