How to export Volume Rendering into models?

Hello, everyone!
How can I export the result of Volume Rendering into models?

1) In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.
2) So what you see is just a 3D image, not a model.
3) If you want to know anything else, please specify, and other modules may be available for that. This is just my understanding, as I am also a new user.

[Screenshot: CTA result with Volume Rendering]
The result of CTA with Volume Rendering is much better than with any other module, but the 3D view cannot be saved on its own.
What I want to know is how to save the 3D view in a proper format, so that we can edit the result just like models.

After volume rendering, use Segmentation > Export (the last option); several formats are available there.

Volume rendering is a visualization technique. No data is generated that could be exported to models (surface meshes). See the detailed explanation in this topic:

My understanding is that volumetric rendering is essentially a function (often specific to a tissue and/or imaging modality) which, given a 3D volume (let’s say just a CT volume with voxel intensity values from 0 to 1), assigns color or intensity/magnitude values and opacity values to each voxel. It is therefore different data from the original 3D volume (although it could be instantly re-created by combining the original volume and the function). When I use the word function I am referring to a display preset in Slicer such as “MR-Default”.
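For illustration, here is a minimal sketch of such a function using VTK’s Python API, which is what Slicer’s presets ultimately drive. The control points are invented for the example; they are not the actual “MR-Default” values.

```python
import vtk

# Color transfer function: voxel intensity -> RGB.
color_tf = vtk.vtkColorTransferFunction()
color_tf.AddRGBPoint(0.0, 0.0, 0.0, 0.0)   # low intensity -> black
color_tf.AddRGBPoint(0.5, 0.8, 0.4, 0.2)   # mid intensity -> tissue-like tint
color_tf.AddRGBPoint(1.0, 1.0, 1.0, 1.0)   # high intensity -> white

# Scalar opacity transfer function: voxel intensity -> opacity.
opacity_tf = vtk.vtkPiecewiseFunction()
opacity_tf.AddPoint(0.0, 0.0)   # background fully transparent
opacity_tf.AddPoint(0.3, 0.0)   # suppress low-intensity noise
opacity_tf.AddPoint(1.0, 0.8)   # bright voxels mostly opaque

# The renderer evaluates these per sample along each viewing ray; the
# original volume itself is never modified.
volume_property = vtk.vtkVolumeProperty()
volume_property.SetColor(color_tf)
volume_property.SetScalarOpacity(opacity_tf)
volume_property.SetInterpolationTypeToLinear()
volume_property.ShadeOn()  # shading needs voxel gradients (see a later reply)
```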

I understand that volume data is inherently different from a surface mesh. However, it would still be useful to be able to export the volumetric rendering as a 3D matrix of intensity values (or color values) with associated opacity values. This would be tremendously useful for a variety of reasons, which is why this topic seems to be so popular. I think that many folks see some potential in the Slicer volume rendering module and want to export some of this secret sauce for use in their own applications/projects.
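For what it’s worth, this kind of export can be approximated outside Slicer with plain numpy: sample the transfer functions at every voxel and store the result as an RGBA matrix. A minimal sketch, assuming a volume already normalized to [0, 1] and illustrative piecewise-linear control points:

```python
import numpy as np

def bake_rgba(volume, color_points, opacity_points):
    """Evaluate piecewise-linear transfer functions at every voxel and
    return an RGBA volume with shape volume.shape + (4,)."""
    xs_color = [p[0] for p in color_points]
    rgba = np.empty(volume.shape + (4,), dtype=np.float32)
    for channel in range(3):  # R, G, B channels of the color points
        ys = [p[1 + channel] for p in color_points]
        rgba[..., channel] = np.interp(volume, xs_color, ys)
    xs_opacity = [p[0] for p in opacity_points]
    ys_opacity = [p[1] for p in opacity_points]
    rgba[..., 3] = np.interp(volume, xs_opacity, ys_opacity)  # alpha
    return rgba

# Illustrative control points: (intensity, R, G, B) and (intensity, opacity).
color_points = [(0.0, 0.0, 0.0, 0.0), (0.5, 0.8, 0.4, 0.2), (1.0, 1.0, 1.0, 1.0)]
opacity_points = [(0.0, 0.0), (0.3, 0.0), (1.0, 0.8)]

volume = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in CT volume
rgba_volume = bake_rgba(volume, color_points, opacity_points)
print(rgba_volume.shape)  # (64, 64, 64, 4): four times the original memory
```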

Is there any way to do this using Slicer?


By burning looked-up RGBA values into the voxel array you would remove a trivial look-up step, but you would make the data unusable for rendering, as you would lose the ability to estimate surface normals from the voxel gradient. You could save the gradient in a separate volume, but then you would end up with a more complicated rendering task overall, increased memory need (about 5x more), and complete inflexibility in how you display your data. This does not seem useful to me, and that’s why this kind of data export is not implemented in Slicer (or elsewhere).
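To make the gradient point concrete, here is a small numpy sketch of the normal estimation that shading relies on; it only works on the raw scalar volume, which is exactly what baking in RGBA values throws away:

```python
import numpy as np

volume = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in scalar volume

# Central-difference gradient along each axis; renderers compute this (often
# on the fly) to approximate a surface normal at every sample point.
gz, gy, gx = np.gradient(volume)  # assumes z, y, x axis order
gradient = np.stack([gx, gy, gz], axis=-1)  # shape (64, 64, 64, 3)

# Normalize to unit normals where the gradient is non-zero.
magnitude = np.linalg.norm(gradient, axis=-1, keepdims=True)
normals = np.divide(gradient, magnitude,
                    out=np.zeros_like(gradient), where=magnitude > 0)

# Storing the gradient explicitly adds three extra channels per voxel, which
# is where the memory overhead mentioned above comes from.
```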

Yes, probably exporting the volume after applying the transfer function would not be too useful.

But maybe we could use some information from volume rendering for simple segmentation. How about allowing the user to specify the desired “color” for thresholding? We could determine the original voxel values from that color (using the transfer function), perhaps with a tolerance factor, and do a thresholding to include that interval.
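A rough sketch of how that inversion could work, assuming a monotonic grayscale transfer function (a real implementation would have to search the RGB color curve instead):

```python
import numpy as np

# Illustrative monotonic transfer function: scalar value -> gray level.
scalar_points = np.array([0.0, 0.3, 0.7, 1.0])
gray_points = np.array([0.0, 0.1, 0.8, 1.0])

def scalar_interval_for_gray(picked_gray, tolerance=0.05):
    """Invert the piecewise-linear mapping: return the scalar interval that
    maps to [picked_gray - tolerance, picked_gray + tolerance]."""
    lo = np.interp(picked_gray - tolerance, gray_points, scalar_points)
    hi = np.interp(picked_gray + tolerance, gray_points, scalar_points)
    return lo, hi

volume = np.random.rand(64, 64, 64)      # stand-in normalized volume
lo, hi = scalar_interval_for_gray(0.5)   # user picked a mid-gray from the rendering
mask = (volume >= lo) & (volume <= hi)   # simple threshold segmentation
print(f"scalar interval [{lo:.3f}, {hi:.3f}] selects {mask.sum()} voxels")
```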


I think the live preview in slice views in the Threshold effect already makes threshold setting quite easy. With multi-volume rendering (which has just been merged into VTK master today) we will be able to show a live preview in 3D views, too.



This is a translation from Chinese:
Dr. Xie: I used to worry about not being able to export STL, because I mainly work in the 3D printing field, but it no longer bothers me. If I just want to display a DICOM CT series, I display it directly with python + VTK, which is very fast. For routine 3D printing work, I use python + ITK + VTK to export an STL file directly from the image. For web display, WebGL2 now supports 3D rendering; loading an NRRD (or other format) file directly can render as beautifully as Slicer.
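For reference, a minimal sketch of the python + ITK + VTK STL workflow described above, using SimpleITK for loading; the file names and the bone-like threshold are illustrative, and the image direction matrix is ignored for brevity:

```python
import SimpleITK as sitk
import vtk
from vtk.util import numpy_support

# 1) Load the volume with SimpleITK (handles NRRD, NIfTI, DICOM, ...).
#    "ct_volume.nrrd" is an illustrative file name.
image = sitk.ReadImage("ct_volume.nrrd")
array = sitk.GetArrayFromImage(image)  # numpy array in z, y, x order

# 2) Wrap the array in vtkImageData, carrying over spacing and origin.
flat = numpy_support.numpy_to_vtk(array.ravel(), deep=True)
image_data = vtk.vtkImageData()
image_data.SetDimensions(array.shape[2], array.shape[1], array.shape[0])
image_data.SetSpacing(image.GetSpacing())
image_data.SetOrigin(image.GetOrigin())
image_data.GetPointData().SetScalars(flat)

# 3) Extract an isosurface at a bone-like threshold (illustrative value).
surface = vtk.vtkFlyingEdges3D()
surface.SetInputData(image_data)
surface.SetValue(0, 300)  # ~300 HU picks up bone in CT

# 4) Write the mesh as STL for 3D printing.
writer = vtk.vtkSTLWriter()
writer.SetInputConnection(surface.GetOutputPort())
writer.SetFileName("bone.stl")
writer.Write()
```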


What you write is true if you only work with CT images and you are only interested in bones.

3D Slicer is good medical image pre-processing software, but not everyone can operate 3D Slicer. So after segmenting the image, I use the “Mask volume” operation of the “Segment Editor” module to fill the voxels outside the segment with a fixed value, then export to “nrrd” format, and finally use vtk.js or three.js for web rendering; you get a beautiful image.
It’s not just CT, but I usually use CT.
I will give an example.
This one looks at the condition of the lungs:

  1. Use 3D Slicer’s “Chest Imaging Platform (CIP)” tool to match the approximate location of the lung model.
  2. Apply “Margin” to this segment.
  3. Use the “Mask volume” operation to fill in the other threshold values.
  4. Export as NRRD.
  5. Following the “CT-Air” rendering preset values, add the color and CT value of the infected part.
  6. Render in python + vtk (a sketch follows this list). Of course, I can also render in webgl + vtk.js.
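Here is a minimal sketch of steps 4 and 6, using pynrrd to write the file and plain VTK for the rendering; the file name, stand-in data, and transfer function points are illustrative:

```python
import numpy as np
import nrrd  # pynrrd
import vtk

# Step 4 (illustrative): write a masked numpy volume to NRRD. Raw encoding
# keeps the file readable by vtkNrrdReader.
masked = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in masked lung volume
nrrd.write("lung_masked.nrrd", masked, {'encoding': 'raw'})

# Step 6: load the NRRD and do GPU ray-cast volume rendering in plain VTK.
reader = vtk.vtkNrrdReader()
reader.SetFileName("lung_masked.nrrd")

mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())

# Illustrative transfer functions for the normalized stand-in data; a real
# lung preset such as “CT-Air” would use CT Hounsfield values instead.
color_tf = vtk.vtkColorTransferFunction()
color_tf.AddRGBPoint(0.0, 0.0, 0.0, 0.0)
color_tf.AddRGBPoint(1.0, 1.0, 0.9, 0.8)
opacity_tf = vtk.vtkPiecewiseFunction()
opacity_tf.AddPoint(0.0, 0.0)
opacity_tf.AddPoint(1.0, 0.5)

prop = vtk.vtkVolumeProperty()
prop.SetColor(color_tf)
prop.SetScalarOpacity(opacity_tf)
prop.ShadeOn()

volume_actor = vtk.vtkVolume()
volume_actor.SetMapper(mapper)
volume_actor.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume_actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```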

Of course, now I have made a small plug-in for 3D Slicer that completes steps 1-5 above fully automatically. I did these steps just to separate out the air outside the body. Otherwise, it looks like this:

After processing and segmentation, it looks like this:

So it’s not just CT and bones.
Sorry, I’m Chinese and I don’t speak English well, so I use Google translation.

I meant that there are very few things that you can easily visualize directly in native images. If you segment structures in Slicer then you can certainly use simple viewers to visualize the exported results (segmentations, masked volumes, etc.).

Having simple web-based viewers makes sense, and there are lots of them. For medical imaging, the OHIF viewer is particularly good.


Yes, you are right.
It seems that there is a character limit on posts, so I have to say another word.