Meshing: Self-intersecting faces

Hello,

This question may be a little outside the realm of Slicer, but it never hurts to ask!

Our team has been using Cleaver to create some amazing multi-material, conforming meshes. However, we have found that mesh quality sometimes depends on the unevenness of the volume. In other words, we get better results when the input/base (DICOM) volume is isotropic in size and voxel spacing.

  • Does anyone know why this happens?

We have also been trying to use tetgen, but it only seems to work with volumes that are fully enclosed. Otherwise, it throws a self-intersecting error.

  • Has anyone figured out a way around this issue?
  • Does anyone have any advice on methods for cleaning/correcting self-intersecting faces?
  • Is there a way in Slicer to use the raw NRRD or a labelmap as an input to tetgen?

Thank you!

Awesome! Could you share a few screenshots?

This is expected. If the spacing along one axis is significantly larger than along the other axes, then your voxels are essentially shaped like sticks, so you cannot represent arbitrary 3D shapes with them. If the spacing difference between image axes is more than a few tens of percent, I use the “Crop volume” module to resample the input volume to isotropic spacing (and crop it to the region of interest to keep the volume size small).
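
If you prefer to script this step, something along the following lines should work in the Slicer Python console. This is only a sketch: the node name "InputVolume" is a placeholder, and class/method names may differ slightly between Slicer versions, so check the Crop Volume documentation for your release.

# Hedged sketch: resample a loaded volume to isotropic spacing with Crop Volume.
# "InputVolume" is a placeholder node name; verify the API against your Slicer version.
import slicer

inputVolume = slicer.util.getNode("InputVolume")

# ROI covering the region of interest (older Slicer versions use vtkMRMLAnnotationROINode)
roiNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsROINode")

cropParams = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropParams.SetInputVolumeNodeID(inputVolume.GetID())
cropParams.SetROINodeID(roiNode.GetID())
cropParams.SetIsotropicResampling(True)   # same spacing along all three axes
cropParams.SetSpacingScalingConst(1.0)    # spacing scale factor; smaller values oversample

cropLogic = slicer.modules.cropvolume.logic()
cropLogic.FitROIToInputVolume(cropParams)  # or shrink the ROI manually to crop as well
cropLogic.Apply(cropParams)

isotropicVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())
print(isotropicVolume.GetSpacing())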

Tetgen was mainly added for reference, because 5-10 years ago it was a popular mesher and we wanted to make it easy to compare. It has a restrictive license (you need to buy a license for commercial applications) and it is not robust (it can crash quite easily on the complex meshes that come out of medical image segmentation), so I would not recommend using it.

There are many other meshers; some of the popular ones include netgen, gmsh, and TetWild (see this page for many more - it is not very up to date, but it gives an idea of how many meshers are out there). As far as I know, @Sam_Horvath plans to add some more meshers to SegmentMesher.

@lassoan,

I will get you some good screenshots by the end of the day for sure!

Regarding Cleaver, I have a few concerns:

  1. The Cleaver team does not seem very active and the documentation seems to be very limited

  2. We have issues with ‘holes’ in meshes with >3 materials, which seems kind-of lame

  3. The interplay between the scale and multiplier parameters is not clear to me. Could you provide some insight on the difference and how to use them?

The SCI group is still very much active, but it is true that not much development has been going on in this particular project recently. My understanding is that they are still committed to maintaining their software, and Kitware helps them with this, too. The latest release is 3 months old, so it is not too bad.

Can you show examples? You should be able to resolve this, but you may need to increase the resolution of the mesh near boundaries. Cleaver is intended for biomedical applications, so it may smooth out sharp edges, and where multiple materials meet, sharp edges are inevitable. Also check what the input segmentation looked like - there might have been 1-2 empty voxels at the junction point.

See this page: GitHub - lassoan/SlicerSegmentMesher: Create volumetric mesh from segmentation using Cleaver2 or TetGen

The main difference is that scale essentially specifies how densely the input is sampled (if you take fewer samples from the input, execution will be faster but you will lose a lot of detail), while multiplier controls element size (if you use large elements, you get a simpler mesh but cannot represent small details).

Let's say we have a boundary layer measuring 2 mm; how does the value of multiplier, and thus element size, map to the units of the volume? Does a value of 0.2 mean I will have about 10 elements across that boundary?

I don't remember these details (segmentation spacing may have an effect, too). The best approach is to experiment with this on simple examples, and if you figure out something, please share what you have found (we will add it to the SegmentMesher documentation).
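
For example, a quick way to experiment is a small sweep over multiplier values around the Cleaver command-line tool. In the sketch below, the executable path and the input/output flags are placeholders that you would need to adapt to your own Cleaver build; only the --scale/--multiplier/--grading flags are the ones discussed in this thread.

# Hedged sketch of a multiplier sweep; CLEAVER_CMD, --input_files and --output_path
# are placeholders for your own Cleaver build.
import subprocess
from pathlib import Path

CLEAVER_CMD = "/path/to/cleaver-cli"                                  # placeholder
INPUT_ARGS = ["--input_files", "material1.nrrd", "material2.nrrd"]    # placeholder

for multiplier in (0.5, 1.0, 2.0, 4.0):
    out_dir = Path(f"mesh_multiplier_{multiplier}")
    out_dir.mkdir(exist_ok=True)
    cmd = [CLEAVER_CMD, *INPUT_ARGS,
           "--scale", "1.0",
           "--multiplier", str(multiplier),
           "--grading", "5.0",
           "--output_path", str(out_dir)]                             # placeholder flag
    subprocess.run(cmd, check=True)
    # Compare element count and how well a known 2 mm boundary layer is resolved
    # in each output to see how multiplier maps to element size for your spacing.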

Will do, here is what that looks like :sweat_smile:

We usually just thicken the layer, but I am thinking of reducing the multiplier value to see if that helps!

Here is one of the highest quality meshes we generated today:

The mesh has about 13M tets, which I am not a fan of, but I need a solid starting point! There are 6 materials in the model. This is a breast cancer model, so the materials represent the chest cavity, soft tissue, fibro-glandular tissue, and masses/tumors. We get holes in areas/regions where 3 materials intersect:

Now here is the interesting part. Although I started with the adaptive parameters (scale, multiplier, etc.), these results were obtained using the alpha flags the developers presented here.

I will continue to explore adaptive meshing parameters to see if better results are obtained.

Any suggestions or information would be greatly appreciated.

I guess you want to model breast deformation between imaging (prone) and surgical (supine) patient positions. These deformations are huge, and with 13M elements it would be an extremely hard problem, not just because there are many elements (so computation time would be really long), but also because it would be very unstable (many of those millions of tiny elements would collapse and self-intersect if you tried to reach realistic deformations).

You will need much larger elements, and some adaptive element sizing to keep the shapes similar to those in the input segmentation.

Since you don't have accurate patient-specific material properties anyway, you probably don't need to worry about minor imperfections, such as slightly smoothed-out boundaries or tiny holes at junction points.


Agree 100%! The problem is that as I start smoothing, holes begin to appear on the skin:
holes_on_skin

A balance needs to be struck between the adaptive parameters, but I think it may come down to a case-by-case basis.

The adaptive parameters I used for the first adaptive mesh were:
--scale 1.00 --multiplier 2.00 --grading 5

Starting with an aggressive blend sigma, the holes show:
--scale 1.00 --multiplier 2.00 --grading 5 --blend_sigma 2.50

I don't think the meshing is so sensitive to the input parameters that you would need tuning for each case. If you determine parameters for a specific image spacing, then it should work well for all cases.

Blend sigma controls the strength of the Gaussian smoothing applied to remove staircase artifacts. See the documentation:

Sigma of Gaussian smoothing filter that is applied to the input labelmap to remove step artifacts (anti-aliasing). Higher values may shrink structures and remove small details.

You should use the smallest possible value that removes staircase artifacts. If you use a higher value than necessary, structures will shrink and holes will appear between them. If you find that holes appear while staircase artifacts are still visible, then resample the input volume to smaller spacing (using the Crop volume module) and redo the segmentation.
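
To get a feel for why a too-large sigma opens holes, here is a small, self-contained illustration of the general idea (Gaussian smoothing of a binary labelmap followed by re-thresholding). This is only a conceptual sketch, not Cleaver's actual implementation: a thin structure survives a small sigma but falls below the threshold and disappears at a large one.

# Conceptual illustration of Gaussian anti-aliasing of a binary labelmap
# (not Cleaver's code): a 2-voxel-thick plate vanishes when sigma is too large.
import numpy as np
from scipy.ndimage import gaussian_filter

labelmap = np.zeros((40, 40, 40), dtype=np.uint8)
labelmap[19:21, :, :] = 1  # thin structure, 2 voxels thick

for sigma in (0.5, 1.0, 2.5):
    smoothed = gaussian_filter(labelmap.astype(float), sigma=sigma)
    antialiased = smoothed > 0.5  # re-threshold to get a smooth boundary back
    print(f"sigma={sigma}: {antialiased.sum()} voxels remain of {labelmap.sum()}")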


I am going to keep this in my notes. Most of our data should end up with about the same spacing.

We actually do a considerable amount of smoothing, so perhaps I should keep the blend sigma low.

I will keep trying parameters and I will post results!


Note that you can always crop and resample your input to the desired resolution and size as part of your image import process. Therefore, you can use the same meshing protocol for all images, even if their native resolutions are different.

@lassoan,

Apologies for the double-post. I wanted to add more info but I was unable to edit the response itself. Here is a more detailed explanation of the current challenge.

First, here is perhaps the most extreme model we have been able to simulate.



If I did not mention this earlier, we are using FEBio for the FEA and PostView for visualization.

The initial mesh was generated using the Segment Mesher module and Cleaver2. The most effective parameters were --scale 1.00 --multiplier 2.00 --grading 5.00 --blend_sigma 1.75. I would also note that we now smooth the compartments heavily prior to meshing, so our latest --blend_sigma values are small. The resulting mesh has about 1.1M tets and a minimum dihedral angle of 4.80, which is now the parameter we are tracking most closely.

As is, using FEBio, we get through about 20% of the simulation before one or more tets invert (i.e., a negative Jacobian).
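
Something like the following VTK sketch could be used to monitor both numbers (minimum dihedral angle and inverted elements) on the exported mesh before handing it to FEBio. The file name "mesh.vtu" is a placeholder, and the minimum-angle quality measure should be double-checked against your VTK version.

# Hedged sketch: report minimum dihedral angle and count inverted (negative-volume)
# tets in an unstructured grid exported from Segment Mesher. "mesh.vtu" is a placeholder.
import vtk
import numpy as np
from vtk.util.numpy_support import vtk_to_numpy

reader = vtk.vtkXMLUnstructuredGridReader()
reader.SetFileName("mesh.vtu")
reader.Update()
mesh = reader.GetOutput()

# Per-tet minimum (dihedral) angle via vtkMeshQuality
quality = vtk.vtkMeshQuality()
quality.SetInputData(mesh)
quality.SetTetQualityMeasureToMinAngle()
quality.Update()
angles = vtk_to_numpy(quality.GetOutput().GetCellData().GetArray("Quality"))
print("minimum dihedral angle:", angles.min())

# Inverted elements: tets with non-positive signed volume
points = vtk_to_numpy(mesh.GetPoints().GetData())
inverted = 0
for i in range(mesh.GetNumberOfCells()):
    cell = mesh.GetCell(i)
    if cell.GetCellType() != vtk.VTK_TETRA:
        continue
    p = points[[cell.GetPointId(j) for j in range(4)]]
    signed_volume = np.dot(np.cross(p[1] - p[0], p[2] - p[0]), p[3] - p[0]) / 6.0
    if signed_volume <= 0:
        inverted += 1
print("inverted tets:", inverted)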

To solve this issue, we have approached the problem in two ways:

  1. Remesh (using tetgen) the failed model and continue simulating from the failed time-step
  2. Remesh (using tetgen) the original mesh and restart simulation

Thus far, we have had success with both approaches. The model depicted above still fails due to penetrating tets, which we will address differently.

Our questions are:

  1. Can we export .ele and .node files (or other mesh formats) from Segment Mesher? This would streamline the remeshing process
  2. Are there other means of remeshing using Segment Mesher and/or Cleaver?