These questions may be a little outside the realm of Slicer, but it never hurts to ask!
Our team has been using Cleaver to create some amazing multi-material, conforming meshes. However, we have found that mesh quality sometimes depends on the unevenness of the volume. In other words, we get better results when the input/base volume (DICOM volume) is isotropic in size and voxel spacing.
Does anyone know why this happens?
We have also been trying to use tetgen, but it only seems to work with volumes that are fully enclosed. Otherwise, it throws a self-intersecting error.
Has anyone figured out a way around this issue?
Does anyone have any advice on methods for cleaning/correcting self-intersecting faces?
Is there a way in slicer to use the raw nrrd or a label-map as an input to tetgen?
This is expected. If the spacing along one axis is significantly larger than along the other axes then your voxels are essentially shaped like sticks, so you cannot represent arbitrary 3D shapes with them. If the spacing difference between image axes is more than a few tens of percent then I use the "Crop volume" module to resample the input volume to isotropic spacing (and crop it to the region of interest to keep the volume size small).
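For intuition, this is roughly what isotropic resampling of a labelmap does. Inside Slicer you would use the Crop volume module as described above; the sketch below is a minimal stand-alone numpy illustration (the function name and nearest-neighbor strategy are mine, not Slicer's implementation):

```python
import numpy as np

def resample_isotropic_nn(labelmap, spacing, new_spacing=1.0):
    """Nearest-neighbor resample of a labelmap array to isotropic spacing.

    labelmap: 3D integer array, axis order (z, y, x)
    spacing:  per-axis voxel spacing in mm, e.g. (3.0, 1.0, 1.0)
    """
    spacing = np.asarray(spacing, dtype=float)
    old_shape = np.asarray(labelmap.shape)
    new_shape = np.maximum(1, np.round(old_shape * spacing / new_spacing)).astype(int)
    # For each output voxel, pick the source voxel index along each axis
    # (nearest-neighbor keeps label values intact, unlike interpolation).
    idx = [np.minimum((np.arange(n) * new_spacing / s).astype(int), o - 1)
           for n, s, o in zip(new_shape, spacing, old_shape)]
    return labelmap[np.ix_(idx[0], idx[1], idx[2])]
```

Nearest-neighbor is used here because labelmaps contain discrete segment IDs that must not be averaged; a grayscale volume would normally be resampled with linear or spline interpolation instead.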
Tetgen is mainly included for reference, because 5-10 years ago it was a popular mesher and we wanted to make it easy to compare. It has a restrictive license (you need to buy a license for commercial applications) and it is not robust (it can crash quite easily on the complex meshes that come out of medical image segmentation), so I would not recommend using it.
There are many other meshers; some of the popular ones include netgen, gmsh, and TetWild (see this page for many more - it is not very up to date, but it gives an idea of how many meshers are out there). As far as I know, @Sam_Horvath plans to add some more meshers to SegmentMesher.
The SCI group is still very much active, but it is true that not much development has been going on for this particular project recently. My understanding is that they are still committed to maintaining their software and that Kitware helps them with this, too. The latest release is 3 months old, so it is not too bad.
Can you show examples? You should be able to resolve this, but you may need to increase the resolution of the mesh near boundaries. Cleaver is intended for biomedical applications, so it may smooth sharp edges, and where multiple materials meet, sharp edges are inevitable. Also check how the input segmentation looked; there might have been 1-2 empty voxels at the junction point.
The main difference is that scale essentially specifies how densely the input is sampled (if you take fewer samples from the mesh then execution will be faster but you will lose a lot of detail), while multiplier controls element size (with larger elements you get a simpler mesh but cannot represent small details).
Let's say we have a boundary layer we are measuring at 2 mm; how does the value of the multiplier and, thus, the element size map to the units of the volume? Does a value of 0.2 mean I will have about 10 elements across that boundary?
I don't remember these details (segmentation spacing may have an effect, too). It is best to experiment with this on simple examples, and if you figure out something then share what you have found (we will add it to the SegmentMesher documentation).
The mesh is about 13M tets, which I am not a fan of, but I need a solid starting point! There are 6 materials in the model. This is a breast cancer model, so the materials represent: chest cavity, soft tissue, fibro-glandular tissue, and masses/tumors. We get holes in areas/regions where 3 models intersect.
Now here is the interesting part. Although I started with the adaptive parameters (scale, multiplier, etc.), these results were obtained using the alpha flags that the developers presented here.
I will continue to explore adaptive meshing parameters to see if better results are obtained.
Any suggestions or information would be greatly appreciated!
I guess you want to model breast deformation between imaging (prone) and surgery (supine) patient positions. These deformations are huge and with 13M elements it would be an extremely hard problem, not just because there are many elements (so computation time would be really long), but it would be very unstable (many of those millions of tiny elements would collapse and self-intersect if you want to get near realistic deformations).
You will need much larger elements, and some adaptive element sizing to have similar shapes as in the input segmentation.
Since you don't have accurate patient-specific material properties anyway, you probably don't need to worry about minor imperfections, such as slightly smoothed-out boundaries or tiny holes at junction points.
I don't think the meshing is so sensitive to input parameters that you would need tuning for each case. If you determine parameters for a specific image spacing then it should work well for all cases.
Blend-sigma controls the strength of the Gaussian smoothing applied to remove staircase artifacts. See the documentation:
Sigma of Gaussian smoothing filter that is applied to the input labelmap to remove step artifacts (anti-aliasing). Higher values may shrink structures and remove small details.
You should use the smallest possible value that removes staircase artifacts. If you use higher than necessary values then structures will shrink and holes will appear between them. If you find that holes appear and staircase artifacts are still visible then resample the input volume to have smaller spacing (using Crop volume module) and redo the segmentation.
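To make the shrinking effect concrete, here is a minimal numpy sketch of Gaussian anti-aliasing of a binary labelmap (this is my own illustration of the general technique, not Cleaver's implementation; all function names are mine):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1D Gaussian kernel."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def antialias(binary, sigma):
    """Smooth a binary mask with a separable Gaussian, then re-threshold at 0.5.

    Higher sigma removes more staircase artifacts but also shrinks thin
    structures, which is how gaps can open up between adjacent segments.
    """
    smooth = binary.astype(float)
    k = gaussian_kernel1d(sigma)
    for axis in range(smooth.ndim):
        smooth = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, smooth)
    return smooth >= 0.5
```

Running this with increasing sigma on a thin structure shows the trade-off described above: the staircase disappears first, then the structure itself starts to erode.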
Note that you can always crop and resample your input to the desired resolution and size as part of your image import process. Therefore, you can use the same meshing protocol for all images, even if their native resolutions differ.
Apologies for the double-post. I wanted to add more info but I was unable to edit the response itself. Here is a more detailed explanation of the current challenge.
First, here is perhaps the most extreme model we have been able to simulate.
If I did not mention this earlier, we are using FEBio for the FEA and PostView for visualization.
The initial mesh was generated using the Segment Mesher module and Cleaver2. The most effective parameters were --scale 1.00 --multiplier 2.00 --grading 5.00 --blend_sigma 1.75. I would also note that we are now smoothing the compartments heavily prior to meshing, so our latest --blend_sigma parameters are small. The resultant mesh features about 1.1M tets and a minimum dihedral angle = 4.80, which is now the parameter we are tracking the most.
As is, using FEBio, we probably get about 20% of the simulation done before one or multiple tets invert (i.e. negative Jacobian).
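For reference, an element inverts when its signed volume (proportional to the Jacobian determinant of the element mapping) goes non-positive. A minimal numpy sketch for flagging inverted tets in a nodes/connectivity representation (function names are mine, not FEBio's):

```python
import numpy as np

def signed_tet_volume(a, b, c, d):
    """Signed volume of tetrahedron (a, b, c, d); negative means inverted."""
    return np.linalg.det(np.stack([b - a, c - a, d - a])) / 6.0

def inverted_tets(nodes, tets):
    """Return indices of tets with non-positive signed volume.

    nodes: (N, 3) float array of coordinates
    tets:  sequence of (i0, i1, i2, i3) node index tuples
    """
    bad = []
    for i, (i0, i1, i2, i3) in enumerate(tets):
        if signed_tet_volume(nodes[i0], nodes[i1], nodes[i2], nodes[i3]) <= 0.0:
            bad.append(i)
    return bad
```

Running such a check on the deformed node positions at the failed time step could help locate which regions of the mesh collapse first.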
To solve this issue, we have approached the problem in two ways:
Remesh (using tetgen) the failed model and continue simulating from the failed time-step
Remesh (using tetgen) the original mesh and restart simulation
Thus far, we have had success in both approaches. The model depicted above still fails due to penetrating tets, which we will address differently.
Our questions are:
Can we export .ele and .node files (or other mesh formats) from Segment Mesher? This would streamline the re-meshing process.
Are there other means of remeshing using Segment Mesher and/or Cleaver?
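In case it helps as a workaround in the meantime: tetgen's ASCII .node/.ele formats are simple and documented in the TetGen manual, so points and tet connectivity extracted from the Segment Mesher output model could be written out directly. A minimal sketch (the function name is mine, and I am assuming tetgen's usual one-based file indexing):

```python
def write_tetgen(basename, nodes, tets, one_based=True):
    """Write a tet mesh to tetgen ASCII .node/.ele files.

    nodes: list of (x, y, z) coordinates
    tets:  list of (i0, i1, i2, i3) node indices (0-based in memory)
    """
    offset = 1 if one_based else 0
    with open(basename + ".node", "w") as f:
        # Header: <# of points> <dimension> <# of attributes> <# of boundary markers>
        f.write("%d 3 0 0\n" % len(nodes))
        for i, (x, y, z) in enumerate(nodes):
            f.write("%d %g %g %g\n" % (i + offset, x, y, z))
    with open(basename + ".ele", "w") as f:
        # Header: <# of tets> <nodes per tet> <# of attributes>
        f.write("%d 4 0\n" % len(tets))
        for i, t in enumerate(tets):
            f.write("%d %d %d %d %d\n" % ((i + offset,) + tuple(n + offset for n in t)))
```

This only covers plain nodes and connectivity; per-element material/region attributes would need the attribute columns filled in as well.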