Hi, I was able to create an incenter subdivision algorithm. It creates 3 triangles for each triangle in the input data; each of those triangles contains the incenter point and one side of the original triangle.
If someone thinks it is useful I could make it available inside the Dynamic Modeler Subdivide tool. Please see the picture below for before and after subdivision (left and right sides of the screenshot, respectively).
I thought this could be useful to improve the WrapSolidify algorithm because it creates a new point in the middle of each original triangle (not on the already existing edges, as vtkLinearSubdivision does); because of this, the new point might be able to intrude into a hole in the mesh to be wrapped.
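A minimal sketch of the idea in plain numpy (not the actual Dynamic Modeler implementation): each triangle is replaced by three triangles that share the incenter, which is the side-length-weighted average of the three vertices.

```python
import numpy as np

def incenter(A, B, C):
    # Side lengths opposite each vertex
    a = np.linalg.norm(B - C)
    b = np.linalg.norm(C - A)
    c = np.linalg.norm(A - B)
    # Incenter = side-length-weighted average of the vertices
    return (a * A + b * B + c * C) / (a + b + c)

def incenter_subdivide(points, triangles):
    """Split each triangle into 3, joining the incenter to each original edge."""
    pts = list(map(np.asarray, points))
    new_tris = []
    for (i, j, k) in triangles:
        m = len(pts)
        pts.append(incenter(pts[i], pts[j], pts[k]))
        # One new triangle per original side, all sharing the incenter
        new_tris += [(i, j, m), (j, k, m), (k, i, m)]
    return np.array(pts), new_tris
```

Because the incenter is always strictly inside the triangle, the three children are guaranteed to be non-degenerate whenever the parent is.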
Ok. So, after working a bit on this, I realized I needed a remeshing algorithm, not a subdivision algorithm. I implemented incenterRemeshing, but it was not useful for improving the wrap-solidify effect. Then I realized what I needed was isotropic remeshing, and I was able to implement it.
I would be glad if someone tests this algorithm and gives feedback. See demo picture below:
Thanks for sharing, this looks interesting. Overall, it looks similar to vtkSmoothPolyDataFilter: it keeps the mesh topology the same and iteratively relaxes points to make the mesh more uniform. You may be able to make the convergence significantly faster by tweaking the point position update rules.
Could you do some side-by-side comparison of vtkIsotropicRemeshing and vtkSmoothPolyDataFilter? In vtkSmoothPolyDataFilter, set the input surface as both InputData and SourceData and set the number of iterations high. Setting the source data makes sure points move tangentially (on the surface), completely eliminating any shrinking. Setting the number of iterations high ensures that we see the highest-quality result the filter can provide; later you can play with reducing the number of iterations and increasing the relaxation factor to speed up processing. If you find that your update rule works much better, then it could be added as an option to vtkSmoothPolyDataFilter instead of adding a completely new filter. That would reduce the amount of added code and allow combining your algorithm with all the features of vtkSmoothPolyDataFilter (constraining to source data, feature edges, boundary smoothing, error metric computations, double/float support, etc.).
It might be interesting to compare with vtkConstrainedSmoothingFilter, too. That filter is nice because it is highly parallelized and allows setting custom smoothing constraints per point.
For wrap solidify: if you have concavities in the input segment, then for a fair comparison you need to enable “Carve holes” (the hole size is similar to the hole size in vtkFillHolesFilter). Please compare again with this option enabled.
Making the mesh more uniform with a smoothing filter (your filter or vtkSmoothPolyDataFilter) instead of remeshing by converting to a labelmap image could be a good idea. As I remember, we tried very similar ideas with you a few years ago and somehow the results were not better, but maybe we did not try exactly this. More testing would be useful.
Could you do some side-by-side comparison of vtkIsotropicRemeshing and vtkSmoothPolyDataFilter? In vtkSmoothPolyDataFilter set the input surface both as InputData and SourceData and set the number of iterations high.
I did this; apparently vtkSmoothPolyDataFilter only fixes degenerate triangles, it does not make the triangle edges have a more or less uniform length. See the pictures below:
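To make this kind of comparison quantitative rather than purely visual, one could measure the spread of edge lengths directly. A small helper (hypothetical, numpy-based, not part of either filter) could be:

```python
import numpy as np

def edge_length_stats(points, triangles):
    """Mean edge length and coefficient of variation (0 = perfectly uniform)."""
    pts = np.asarray(points, dtype=float)
    edges = set()
    for i, j, k in triangles:
        for e in ((i, j), (j, k), (k, i)):
            edges.add(tuple(sorted(e)))   # deduplicate shared edges
    lengths = np.array([np.linalg.norm(pts[a] - pts[b]) for a, b in edges])
    return lengths.mean(), lengths.std() / lengths.mean()
```

Running this on the mesh before and after each filter would show whether the coefficient of variation actually drops, which is what "uniform edge length" means here.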
vtkSmoothPolyDataFilter eventually produces a uniform mesh, but you may need to set the number of iterations higher (e.g., try 10000) and/or increase the relaxation factor (try 2x, 5x, 10x, … larger than the default), and maybe decrease the convergence limit. Adding an option to make vtkSmoothPolyDataFilter explicitly favor uniform triangle size could speed this up significantly (currently, uniform triangle size is more of a byproduct than a primary goal). I would recommend improving vtkSmoothPolyDataFilter instead of introducing a new filter, because it would significantly reduce the implementation and testing workload and would eliminate long-term maintenance effort (VTK core developers would handle it).
I would recommend improving vtkSmoothPolyDataFilter instead of introducing a new filter, because it would significantly reduce the implementation and testing workload and would eliminate long-term maintenance effort (VTK core developers would handle it).
I added a new mode as you suggested, but the movement of the points decreases asymptotically with the number of iterations, so the output mesh never ends up with a very low variance of triangle edge lengths.
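The asymptotic slowdown is easy to reproduce even in a toy version of such an update rule. The sketch below (hypothetical; not the actual new mode) applies a spring-like force that pulls every edge toward a common target length, and the step size shrinks as the configuration approaches equilibrium:

```python
import numpy as np

def uniform_edge_step(points, neighbors, target_len, lam=0.1):
    """One relaxation step pulling every edge toward target_len.

    neighbors maps a point index to the indices it shares an edge with.
    """
    pts = np.asarray(points, dtype=float)
    new_pts = pts.copy()
    for i, nbrs in neighbors.items():
        force = np.zeros(pts.shape[1])
        for j in nbrs:
            e = pts[j] - pts[i]
            d = np.linalg.norm(e)
            if d > 0:
                # Too long: pull i toward j; too short: push it away
                force += (d - target_len) * (e / d)
        new_pts[i] = pts[i] + lam * force
    return new_pts
```

For a point at 0.5 between fixed neighbors at 0 and 2 (target length 1), one step moves it to 0.6, the next to 0.68, and so on: each step is smaller than the last, so the variance decays geometrically but never quite reaches zero, matching the behavior described above.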
I created a new DynamicModeler tool for remeshing (ideally it could later be turned into a shrink-wrap tool). The code is fast enough in C++ and the license is commercial-friendly (see gpytoolbox).
The result looks nice! Does it actually do remeshing (subdividing and collapsing triangles), or does it keep the mesh topology and just move the mesh points?
I’ve had a look at the implementation. It is good that you build the libigl project as is, instead of vendorizing it (this allows us to get fixes and improvements in the future). However, for consistency with how all the other extensions use external libraries, and since other extensions may also use the libigl project, we need to follow the conventions for how third-party libraries are built in extensions. Unfortunately, SurfaceToolbox is not a regular extension and it is bundled with 3D Slicer core by default, so making it a superbuild-type extension may not be trivial. For this reason, and also because libigl is not a small project and it could have non-trivial dependencies, it may be easier to integrate it via an extension. Maybe a new 'libigl' utility extension could be added for this.
Even if we use a single method from the library, we need to make it a superbuild-type extension. The extension could then be called a uniform remeshing extension or something like that; it could have a more generic name if we think we will add more methods from libigl or elsewhere.
If you find this too much work, then vendorizing (copying the code from the library) could be an option, but then it becomes very difficult to get future updates.
I was able to build UniformRemesh, but I had to comment out testing in UniformRemesh/CMakeLists.txt.
Processing an already uniform mesh completed in a breeze, with no apparent modifications when viewed as a wireframe and no change in the number of points or cells.
However, with this non-uniform mesh it keeps running; I stopped it after one hour. Maybe you could check the mesh and suggest a few tips. It’s only for testing, with all default parameters.
These past few days I tried to make a new version of the same remeshing algorithm (i.e., Botsch-Kobbelt) that depends only on the Eigen library. While I was able to make it partially work, I did not get good results, good performance, or robustness for complex input meshes. One of the roadblocks was that, even when trying to copy all the auxiliary classes from remesh_botsch, some of the classes depended on optimized libigl components such as igl::decimate.
My Eigen implementation was at least 10 times slower (or more) than gpytoolbox::remesh_botsch, and the output had unexpected holes and self-intersections.
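For context, the Botsch-Kobbelt loop alternates four operations: split edges longer than 4/3 of the target length, collapse edges shorter than 4/5 of it, flip edges to equalize vertex valence, and relax points tangentially. A toy version of just the split step (ignoring the shared-edge bookkeeping that makes real implementations hard, so it can create T-junctions on edges shared by two triangles) could look like:

```python
import numpy as np

def split_long_edges(points, triangles, max_len):
    """One naive split pass: bisect the longest edge of each too-long triangle.

    Repeated passes converge because every new edge is half as long.
    """
    pts = list(map(np.asarray, points))
    out = []
    for tri in triangles:
        i, j, k = tri
        # Pick the longest edge (a, b) of this triangle; c is the opposite vertex
        candidates = [(i, j, k), (j, k, i), (k, i, j)]
        a, b, c = max(candidates, key=lambda e: np.linalg.norm(pts[e[0]] - pts[e[1]]))
        if np.linalg.norm(pts[a] - pts[b]) > max_len:
            m = len(pts)
            pts.append(0.5 * (pts[a] + pts[b]))  # edge midpoint becomes a new vertex
            out += [(a, m, c), (m, b, c)]
        else:
            out.append(tri)
    return np.array(pts), out
```

The hard part that this sketch skips, consistent handling of edges shared between triangles plus the collapse and flip passes, is exactly where the libigl-backed remesh_botsch does the heavy lifting.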
We have two options if we still want this algorithm to be available in Slicer:
use the version that goes packaged in its own extension (SlicerUniformRemesh)
Even with these failures, it was fun to compile these libraries, integrate them into Slicer in different ways, and understand a bit more about CMake configuration, how to create and register tests, how to use gdb for debugging, etc. Overall it was a great learning experience.
@chir.set I tested the data you provided; even the original gpytoolbox::remesh_botsch version fails with it, and I don’t know how to solve that. But if you really need to remesh it, there is already a tool in the SurfaceToolbox that exposes the ACVD remesher, and your file can be remeshed with it.
By the way, if we want to integrate the ACVD algorithm into Slicer’s C++ code, we might consider using one of these implementations from MITK (as far as I can tell, the license is commercial-friendly):