Hi all, I found it really easy to export models as OBJ.
The surface texture of these files is the same as the one represented in the 3D viewer:
I was wondering if it’s possible to add a different texture: the one you see represented in the 3D view:
I saw this topic was already covered:
I still hope there is a practical way to map results from a LUT onto a mesh directly in 3D Slicer.
All the answers in the referenced posts still apply. You cannot use a surface renderer for volume rendering.
But bitmap textures are used in mesh rendering.
If I can map a 2D bitmap onto a 3D mesh, why shouldn’t I be able to map a grayscale or a LUT?
I’m not familiar with the mechanics of texture mapping; I have just used it in Rhino and seen some examples in MeshLab.
I know that surfaces pass through voxels, and that extracted meshes are made of triangles connecting points that do not coincide with voxel centroids or vertices.
What if you can directly map a volume rendered grayvalue to the mesh?
Sorry, but I did not properly understand the answers in the referenced old posts.
You can easily display a bitmap/grayscale/LUT mapped onto a surface mesh. The inside is hollow.
However, volume rendering uses a volumetric mesh. The inside is filled with a cloud of colored, semi-transparent voxels, similar to having hundreds of layers of planar surface meshes.
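To make the “hundreds of layers” picture concrete, here is a minimal numpy sketch (not Slicer code, just an illustration) of front-to-back alpha compositing along a single viewing ray, which is the basic operation a volume renderer performs at every pixel; the sample values and opacities are made up:

```python
import numpy as np

def composite_ray(colors, alphas):
    """Front-to-back alpha compositing of samples along one viewing ray.

    colors: (n,) grayscale sample values front to back.
    alphas: (n,) per-sample opacities in [0, 1].
    Returns the accumulated value seen by the viewer.
    """
    accumulated = 0.0
    transmittance = 1.0  # fraction of light still passing through
    for c, a in zip(colors, alphas):
        accumulated += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:  # early ray termination
            break
    return accumulated

# A fully opaque first sample hides everything behind it,
# which is exactly the "painted surface shell" situation:
print(composite_ray(np.array([0.8, 0.1]), np.array([1.0, 1.0])))  # 0.8
```

With semi-transparent samples the deeper layers contribute as well, which is why the rendered surface color depends on what lies beneath it.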
That’s what I need.
A painted surface shell
Of course, if my object is itself a shell, it will also be painted “inside”, on the outer side of the inner surface.
So there is a workflow in Slicer for mapping a grayscale/LUT onto meshes.
Would you be willing to help me with this?
As Andras noted, voxel-based volume rendering and surface-based mesh rendering are useful for different things. I like to think of volume rendering as suited to objects whose surface is translucent, while surface rendering works when the surface is opaque. A cloud in the sky lends itself to volume rendering: there is no single point where you are “inside” or “outside” the cloud, as the border is a gentle gradient of increasing opacity. On the other hand, an airplane wing has a discrete border between inside and outside. Outside medical imaging, surface rendering is popular because its complexity scales with surface area (which grows with the square of object size), while volume rendering scales with the cube.
While we usually think of a brain as having a hard boundary, soft tissue is actually translucent. When we blush, the color of our face changes due to subsurface effects of the blood.
Before investing too much time in this, I would experiment with existing tools. This will help you decide if this is really the approach you want to use and help you refine your algorithm. The volume below (left) is a volume rendering, while the surface rendering on the right was created using the MATLAB-based MRIcroS. The algorithm used by MRIcroS blurs the voxel data to estimate surface color based on deeper tissue. Note that you can see the lesion in both images.
An important consideration is that the surface color in a surface mesh is locked onto the surface: a vertex that is black will remain black regardless of your viewpoint. On the other hand, with volume rendering, the surface color at one location depends on what lies beneath that voxel along your line of sight. As an analogy, consider a strawberry inside translucent jello: the surface location where the strawberry appears will shift as you change your viewpoint.
I think that both volume rendering and surface rendering have their place in visualization. Choose the correct tool for your application.
I work with wooden objects.
A volume rendering (e.g. with Phong shading) of a cube made of wood is nice to look at, since you “see” both the cube’s surface and the local variations in the density of the material (the growth “rings”, if present, or smaller anatomical features at smaller voxel sizes). Just think of an unvarnished wooden plank. It is not as translucent as skin.
Of course the surface (as in the cloud example) is vague and not as clearly defined as in a rendered model, where it is calculated (interpolated) across the voxels.
The examples in my images also show this.
I want to create an empty cube delimited by a polygonal mesh, with no information about what’s inside, but with a texture attached to it.
If the object is a sculpture, it would have a precise surface definition (given by the STL) and a texture that is informative about the material density.
I think it’s theoretically possible to calculate the grayvalue at a point on the surface (as an average over a certain number of neighboring voxels) and then map it onto the surface.
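The averaging idea above can be sketched in a few lines of numpy; this is just an illustration of the concept, not Slicer code, and the neighborhood radius and toy volume are made-up values:

```python
import numpy as np

def grayvalue_at_point(volume, point, radius=1):
    """Estimate the grayvalue at a (possibly non-integer) surface point
    by averaging the voxels in a small neighborhood around it.

    volume: 3D numpy array of voxel grayvalues.
    point:  (i, j, k) coordinates of the surface point in voxel index space.
    """
    center = np.round(point).astype(int)
    lo = np.maximum(center - radius, 0)               # clamp at the volume edge
    hi = np.minimum(center + radius + 1, volume.shape)
    neighborhood = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return float(neighborhood.mean())

# Toy example: a 3x3x3 volume with a single bright voxel in the middle.
volume = np.zeros((3, 3, 3))
volume[1, 1, 1] = 27.0
print(grayvalue_at_point(volume, (1.2, 0.9, 1.0)))  # 1.0 (27 spread over 27 voxels)
```

Repeating this for every mesh vertex gives one scalar per vertex, which is essentially what a probe-style filter produces.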
I do not expect this to behave as nicely as in the volume renderer!
At least it would be better than a monochrome or an artificially attached texture.
What I can do now is:
- save a JPG with the LUT from the volume viewer (say, a front view of the sculpture)
- create and export the STL
- map the JPG onto the STL in external CAD software.
That’s painful (for me) and gives very little control.
Hi @mading -
Thanks for the explanation. You probably want the ProbeVolumeWithModel module. This will create a scalar value per vertex based on the contents of the volume. With a densely sampled model this is very similar to a texture, but you don’t have to worry about mapping the volume data to 2D textures and determining texture coordinates.
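Conceptually, the probe step just reads the volume value at each mesh vertex position and stores it as a per-vertex scalar. A plain-numpy nearest-neighbor sketch of that idea (this is not the module's actual implementation, and the toy volume and vertex positions are made up):

```python
import numpy as np

def probe_volume_with_vertices(volume, vertices):
    """Assign each mesh vertex a scalar by nearest-neighbor lookup in the
    volume. vertices: (n, 3) positions given in voxel index space."""
    upper = np.array(volume.shape) - 1
    idx = np.clip(np.round(vertices).astype(int), 0, upper)  # snap and clamp
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]]

# Two vertices of a hypothetical surface mesh inside a 2x2x2 volume:
volume = np.arange(8).reshape(2, 2, 2).astype(float)
vertices = np.array([[0.1, 0.0, 0.9],   # snaps to voxel (0, 0, 1)
                     [1.0, 0.8, 0.0]])  # snaps to voxel (1, 1, 0)
print(probe_volume_with_vertices(volume, vertices))  # [1. 6.]
```

A real probe filter would typically interpolate between voxels rather than snap to the nearest one, but the per-vertex result is the same kind of data.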
I gave a try to this module.
- Imported a DICOM series
- cropped it to speed things up
- segmented with a threshold so that the model is a little bit smaller than the voxel “cloud”
- created the model
I got some strange behaviour:
Output from Probe:
In the Models module I tried changing some settings at random, and I found that this is not a good idea.
Maybe you are close, but there are a couple of things. One is to turn off the visibility of the other models and segmentations. Then, after probing the volume, you need to be sure to show the scalars with the right color table. In this case I used the MRHead sample and the gray color table.
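For what it's worth, the “show the scalars with the right color table” step is just a per-vertex lookup: each scalar is mapped through the table to a color. A small numpy sketch of a grayscale lookup table (the scalar range and table size here are made-up values, not Slicer's defaults):

```python
import numpy as np

def apply_gray_lut(scalars, scalar_range=(0.0, 255.0), n_entries=256):
    """Map per-vertex scalars to RGB colors through a grayscale lookup table."""
    # Build the LUT: entry i holds the gray level i / (n_entries - 1) in R, G, B.
    lut = np.repeat(np.linspace(0.0, 1.0, n_entries)[:, None], 3, axis=1)
    lo, hi = scalar_range
    t = np.clip((np.asarray(scalars) - lo) / (hi - lo), 0.0, 1.0)
    indices = np.round(t * (n_entries - 1)).astype(int)
    return lut[indices]

print(apply_gray_lut([0.0, 127.5, 255.0]))
# rows: black, mid gray, white
```

Picking the wrong color table (or the wrong scalar range) is exactly what makes probed scalars look broken even when the data is fine.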
Many thanks, pieper.
I gave it a try with a medical dataset, and it behaves better:
Then I saved the OBJ, and here the texture is missing:
Thanks for the help; once I fix this it will be very helpful.
You’ll need to confirm that whatever other software you use is able to read and display the vertex values correctly. Again, they won’t be a texture map (image). They should export in a .vtk file, but I’m not sure how other programs will interpret that data.
So if I want the scalars together with the mesh, I need VTK.
I can’t manage VTK in Rhino. Not at my level.
Is there a way to convert VTK to OBJ directly in Slicer?
Hi - you can try OBJ export and see if the surface scalars are preserved. Or you can try PLY, since that has worked for fibers exported to Blender in the past. I have no idea about Rhino.
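If the OBJ route drops the scalars, one fallback is to bake the per-vertex values into vertex colors in an ASCII PLY file, which Blender and many other tools can read. A minimal hand-rolled writer sketch; the filename, the 0-255 color values, and the one-triangle mesh are just for illustration:

```python
def write_ply_with_vertex_colors(path, vertices, colors, faces):
    """Write an ASCII PLY with per-vertex RGB colors (0-255 integers)."""
    lines = [
        "ply", "format ascii 1.0",
        f"element vertex {len(vertices)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        f"element face {len(faces)}",
        "property list uchar int vertex_indices",
        "end_header",
    ]
    # One line per vertex: position followed by its color.
    for (x, y, z), (r, g, b) in zip(vertices, colors):
        lines.append(f"{x} {y} {z} {r} {g} {b}")
    # One line per face: vertex count followed by vertex indices.
    for face in faces:
        lines.append(f"{len(face)} " + " ".join(str(i) for i in face))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# One gray triangle:
write_ply_with_vertex_colors(
    "colored_mesh.ply",
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    colors=[(128, 128, 128)] * 3,
    faces=[(0, 1, 2)],
)
```

Vertex colors survive in PLY precisely because the format declares them per vertex in the header, unlike OBJ, where color normally lives in a separate material/texture file.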
I’m looking forward to finding a way to import colored vertices into Rhino.
Best of all would be to convert them into a surface texture!
For now I’m happy to see that it’s possible to create a model with scalars.