How to increase the size of mesh triangles


For a specific purpose I need to coarsen the mesh, i.e. increase the area of the mesh triangles.
I would appreciate it if anyone could help me.

You can use the “Decimation” option in the “Surface Toolbox” module for this.

You can also include a decimation step in the segmentation’s automatic binary-labelmap-to-closed-surface conversion: in the Segmentations module, Representation section, click the “Update” button next to “Closed surface”, click the “Binary labelmap -> Closed surface” conversion path, and adjust the “Decimation factor” parameter.

If you want to simplify/decimate an existing mesh, there are a number of tools that can help you out. Alec Jacobson has a nice page that describes several algorithms (mostly for Matlab). The big choice is whether you want an adaptive method (where the original shape is preserved by selectively deleting vertices in flat areas) or not (which can have benefits). The other (related) choice is whether surviving vertices maintain their original location, or whether vertex collapse interpolates the position of the surviving vertex to best maintain the original shape.

  • Python implementations of these algorithms exist.

  • The graphical interface of MeshLab offers a whole range of options under Filters / Remeshing, Simplification and Reconstruction that you can tune for your use case.

  • Sven Forstmann has an elegant implementation of Surface Simplification Using Quadric Error Metrics that is provided as a standalone executable but has also been ported to many languages (C, C#, Pascal, Java, JavaScript).

  • Surfice includes a graphical interface to Sven’s algorithm. You can drag and drop a mesh to open it, choose Advanced/Simplify to reduce it, and Advanced/Save to save the result. The image below shows an original mesh created with Slicer (left) after 90% of the vertices have been removed (right).


I have personally found that if you are working with STL files, it is best to do the decimation in another program such as Blender or Meshmixer rather than use the Surface Toolbox.

I have found that the decimation tools in Blender, for example, produce a more even-looking, aesthetically pleasing mesh while still keeping all the fine details. The decimation in the Surface Toolbox seems to produce very uneven triangles (huge in some areas and really tiny in others), especially with large reductions in triangle count. I would encourage you to make the comparison yourself.

You should really try to limit what you do to the model in a post-processing program such as Blender, though, because you have no way of checking the model against the original scan there as you can in Slicer. As long as you don’t move the model’s position in Blender, you should be able to re-import it into Slicer to check it on top of the original scan. Always ensure that your model still accurately represents the anatomy after making any changes.

VTK’s decimation filter preserves the original mesh points and does not do any remeshing, so it is mainly useful for high-fidelity reduction of the dense mesh that the marching cubes or flying edges method generates from labelmaps. If you need more than about 60–80% reduction, you probably need to remesh.

It would be relatively easy to make mesh simplification filters available in Slicer (for example, some of those @Chris_Rorden listed above), but I don’t know of very strong use cases. Why do you need mesh simplification with remeshing? Can you describe your application and workflow?

I use Slicer and Blender together to model surgical guides and surgical implants.

First we segment the bones in the region of interest from a CT scan and save the result as an STL. The STL is then opened in Blender. The first thing we usually do when opening the STL file in Blender is decimate the mesh. The triangle count for a skull model straight out of Slicer is often around 3 million tris. This makes for very slow performance on our computers and an unnecessarily high-resolution mesh. We usually decimate the mesh to around 10–20% of the original triangle count. The decimation in Blender seems to remesh the model as well; it gives a nice, even triangle distribution and doesn’t seem to lose too much fine detail.

We then model a surgical guide or implant around the bone anatomy and usually use a boolean subtraction (with some clearance) so that the model fits onto the bone well. It is also good to have a coarse mesh so that the face(s) on the model resulting from the boolean subtraction do not end up with tiny mesh faces, which would make them hard to edit (if needed).

The implants or guides are then either 3D printed or CNC machined.

A decimation tool (which also remeshes) in Slicer would be useful, as it would allow us to check the model for any discrepancy while it is still overlaid on top of the CT scan. It would also save exporting a huge STL file and spare us the initial sluggishness of loading it in Blender and decimating it there.

You can smooth segments in the Segment Editor (Smoothing effect). You can also create molds, invert geometry, and combine segments with CAD models using the Logical operators effect. Do you need Blender for these?

3D models exported from these segmentations are indeed very dense, but 3D printers and CNC machines should be able to handle them. Do you find that large models cause problems?

Actually, I used the Surface Toolbox and Meshmixer, and the results from both were satisfying.
The reason I needed a lighter version of the mesh was to reduce the computation time of the program I am using to fit a model.