I use the TextureModel module from IGT to texturize the surface scans. I tried saving them as OBJ, hoping that the textures would be remembered the next time, but they weren't. What is the proper way of saving these models with texture, so that I don’t have to use TextureModel each time?
I don’t think there’s a good way to do that currently. Texture was added to models for the slice display in the 3D view, but wasn’t exposed in the UI. Full texture support is a complex topic, but probably some basic reloading capability could be added to the reader/writer code.
PLY, VTK, and VTP formats all preserve colored texture of meshes (in point scalars).
You can copy the color information from a textured OBJ to point scalars by using the Texture Model module and setting “Save color information as point data” → “single vector”.
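Conceptually, baking a texture into point scalars means looking up the texture image at each vertex's (u, v) coordinate and storing the resulting color on the vertex. A minimal pure-Python sketch of that idea (hypothetical names, not the Texture Model module's actual code, which uses VTK arrays):

```python
# Hypothetical sketch of baking a texture into per-vertex colors:
# sample the image at each vertex's (u, v) texture coordinate
# with a nearest-neighbor lookup.

def bake_texture_to_vertex_colors(texture, tcoords):
    """texture: 2D list of (r, g, b) pixel rows; tcoords: (u, v) pairs in [0, 1]."""
    height = len(texture)
    width = len(texture[0])
    colors = []
    for u, v in tcoords:
        # Convert normalized UV to a pixel index, clamped to the image bounds.
        col = min(int(u * width), width - 1)
        row = min(int(v * height), height - 1)
        colors.append(texture[row][col])
    return colors

# Tiny 2x2 checkerboard texture: white and black pixels.
texture = [
    [(255, 255, 255), (0, 0, 0)],
    [(0, 0, 0), (255, 255, 255)],
]
print(bake_texture_to_vertex_colors(texture, [(0.1, 0.1), (0.9, 0.1)]))
# -> [(255, 255, 255), (0, 0, 0)]
```

Note that the baked result only keeps one color per vertex, which is why the texture's full image resolution is lost, as discussed later in this thread.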
I did that, and then I saved the textured model as PLY. When I reload that into Slicer, it still doesn’t display the textures. Am I missing a step?
If you use the stable release then you need to manually select the color scalar in the Models module. This all happens automatically in Slicer Preview Releases.
Same for me with the preview. Under the scalar selector there is something called TCoords, but the colormaps look weird; nothing close to the original texture:
Textured:
Saved and Reloaded:
After “Texture model” module saved the color information into the model, do the color scalars show up in Models module?
Immediately after texturizing, there is an additional scalar called Color:
But by default it is not enabled, and when I enable it, colors look weird again:
It looks perfect, you just need to choose “direct color mapping” as scalar range mode if you don’t want to map the colors through a color table.
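The difference between the two scalar range modes can be illustrated with a small sketch (hypothetical functions, not the Slicer/VTK API): direct color mapping uses the stored RGB values as-is, while a color table treats the scalar as a magnitude and looks its color up in a table, which scrambles colors that were never meant to be interpreted that way.

```python
# Illustrative sketch of the two scalar display modes (hypothetical
# names, not Slicer/VTK API calls).

def direct_mapping(rgb):
    # Direct color mapping: the stored RGB point scalar IS the displayed color.
    return rgb

def color_table_mapping(scalar, lut):
    # Color table mapping: a scalar magnitude indexes into a lookup table.
    index = min(int(scalar * len(lut)), len(lut) - 1)
    return lut[index]

lut = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]  # blue-green-red table
print(direct_mapping((200, 150, 100)))   # keeps the baked skin-tone color
print(color_table_mapping(0.5, lut))     # -> (0, 255, 0), unrelated to it
```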
Slicer Stable Release cannot save point scalars into PLY; you either need to use the VTK/VTP format or switch to the latest Slicer Preview Release.
It is still not quite the same color as the texturized model, but at least acceptable.
By the way, I am doing all of this in the preview. So the PLY not saving and the Color scalar not showing are both happening with the preview from 10/14 (on Windows).
I can confirm that saving to PLY does not work as intended in the preview. VTK works fine, I can render the texture from the loaded model. Although I still have to manually activate the scalar (i.e., it is not automatic).
The model appearance should look very similar, but you may adjust material properties (Models module / Display / 3D Display / Advanced) to get the same brightness.
I’ve checked the Texture Model module and it saves the colors as a vtkDoubleArray (with values between 0-1), which is the VTK convention. However, colors are most commonly stored in a vtkUnsignedCharArray, and that’s what Slicer automatically recognizes as a colored mesh when a model is loaded.
I’ve added an option to the Texture Model module to save as vtkUnsignedCharArray: “Save color information as point data” → “RGB vector”. Such models will be loaded automatically in full color.
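The conversion between the two conventions is just a rescaling. A sketch of the assumed behavior (not the module's actual code): VTK-convention colors stored as doubles in [0, 1] are clamped and rescaled to 0-255 unsigned-char values so that readers recognize the array as RGB colors.

```python
# Sketch of the double -> unsigned char color conversion described above
# (assumed behavior; the module itself works on VTK arrays).

def doubles_to_uchar_rgb(colors):
    """colors: list of (r, g, b) floats in [0, 1] -> ints in [0, 255]."""
    def to_uchar(v):
        # Clamp to the valid range, then round to the nearest integer.
        v = max(0.0, min(1.0, v))
        return int(v * 255.0 + 0.5)
    return [tuple(to_uchar(c) for c in rgb) for rgb in colors]

print(doubles_to_uchar_rgb([(0.0, 0.5, 1.0)]))  # -> [(0, 128, 255)]
```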
A couple of things about my experience with textures. Pardon my very technical language.
When a texture is “applied” to the model, it does not color the vertices/polygons with the colors - it maps the 2D picture to the model based on texture coordinates, so you get a very high quality looking mesh. This is what most mesh programs do and what my lab likes to use in annotations.
When you transfer the texture as scalars to the mesh, you lose the high-resolution-looking texture: there is no longer a 2D picture to “stretch” over the model, and you see vertices/edges/polygons painted with colors calculated from the original texture image and interpolated. I tried to go this route but my people complained about this, saying that it is not the same quality as the camera software (3dMD or Canfield).
Another tricky thing, which happened to your mesh, is the stitching effect. This is the result of cleaning the mesh. Often there are floating vertices, edges, etc. that contribute to the mapping of the texture to the mesh. When you clean, you remove those vertices/edges and get that white region. I couldn’t find a good solution to this, so we keep using the uncleaned original meshes with the original textures. That gives the best visualization.
Yes, very often the texture resolution is higher than the resolution of the mesh, so you lose details when you just sample at mesh points, which may become visible when you zoom in:
Storing the information about which image should be applied to which node would be quite straightforward, and we already have a ticket to track this request:
Merging/blending textures is indeed tricky, especially if you have overlapping texture images acquired with slightly varying light conditions. Your 3D scanning software should have options for merging multiple textures either during scanning or as a post-processing step (the software that came with our Artec scanners had them), and then you don’t need to deal with the issue during rendering. I’m sure that other software, such as Blender, has many tools for blending textures, too.
Questions about seams in multi-texture OBJs have come up a couple of times on the VTK discourse and there are solutions (adjusting rendering settings, maybe some small processing with Python scripting). With a very quick search these two hits came up (but there are at least a few more):
I’ve seen these posts and tried everything recommended: setting interpolation, edge trimming, etc. The only thing that worked was to use the original mesh and original textures without any preprocessing.
I’ve made some changes to the TextureModel module for it to work with multiple images - I believe I had a pull request open for that.
I am not sure if this mesh was cleaned or not. It was a very old dataset (almost 10 years ago) from the 4-pod 3dMD system. It is possible that 3dMD does not export data sufficiently well. It would still be good to have texture stretching available in Slicer, but on closer inspection (a long time ago) I recall there being some discrepancies between how the picture is stretched versus the underlying mesh (as I recall, the edges of the eyelid, the palpebral fissure, were not exactly a 1:1 match between mesh and photo).