I have a reference mesh and a deformed version of it (both are unstructured grids saved in the VTK file format). The reference mesh was acquired from a 3D-CT volume.
I would like to apply the same deformation to the reference 3D-CT volume so that it aligns with the deformed mesh configuration.
How can I achieve this with 3D Slicer? Any help would be appreciated (it would be great if you could provide steps).
I have shared a sample reference mesh, deformed mesh, and the corresponding reference CT volume in the link given below for your further consideration.
For that you need the deformation field that was applied to the reference mesh. If you have that, you might be able to import it into Slicer as a Transform and then apply that transform to the CT volume. It boils down to what software you used to create the deformation and whether you can export the transform in a format Slicer understands…
The difficulty is that you only know the displacements at the mesh points, but you need the displacement at the position of every image voxel. Slicer has two tools you can use for this:
- Option A: Use the known point-pair positions in a thin-plate spline transform. If there are many points, the computation can be slow, and if the spacing between points varies and the displacement field is complex, the result can be unstable.
- Option B: Use the ScatteredTransform extension to reconstruct a B-spline transform from the sparse point set. This may not reproduce the exact displacements, but it should be much faster and more robust.
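To illustrate the core idea behind Option A outside Slicer: a thin-plate spline can interpolate a dense displacement field from displacements known only at scattered mesh points. The sketch below uses SciPy's `RBFInterpolator` with a thin-plate-spline kernel on synthetic data (the point positions and the affine-like displacement field are made up for the example; Slicer's own transform infrastructure does the equivalent internally).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical sparse data: positions of the mesh points in the reference
# configuration and their displacements (deformed minus reference coordinates).
rng = np.random.default_rng(0)
ref_points = rng.uniform(0.0, 100.0, size=(50, 3))   # mm
displacements = 0.1 * ref_points + 2.0               # synthetic smooth field

# Fit a 3D thin-plate-spline interpolator to the known point displacements.
tps = RBFInterpolator(ref_points, displacements, kernel="thin_plate_spline")

# Evaluate the dense displacement field at arbitrary voxel positions.
voxel_positions = np.array([[10.0, 20.0, 30.0],
                            [50.0, 50.0, 50.0]])
dense_disp = tps(voxel_positions)
print(dense_disp)
```

With many thousands of mesh points this direct approach becomes slow, which is exactly why Option B (fitting a coarser B-spline transform) scales better.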
A much better (more accurate, more robust) solution would be to save the full transform when the mesh registration software aligns the meshes, instead of saving only the displacements at the mesh points. If your current registration software does not support this, you can do it, for example, with Slicer's SegmentRegistration extension (it supports both rigid and warping registration between meshes).
I just used the SegmentRegistration module to obtain the full transform; however, the computation takes a long time. Any reason for that?
I started by converting the two meshes into segmentation nodes. The deformed mesh segmentation node was then used as the fixed segmentation, and the reference mesh segmentation node as the moving segmentation. After that, I ran the registration to obtain the corresponding full transform. (Here I did not set the fixed and moving images, only the fixed and moving segmentation nodes; see the attached screenshot.)
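For reference, the same mesh-to-segmentation conversion done in the GUI can be scripted in the Slicer Python console. This is only a sketch assuming the two meshes are already loaded as model nodes; the node names here are hypothetical placeholders.

```python
# Run inside the 3D Slicer Python console (will not run in plain Python).
import slicer

# Hypothetical names of the already-loaded model nodes
refModel = slicer.util.getNode("ReferenceMesh")
defModel = slicer.util.getNode("DeformedMesh")

segLogic = slicer.modules.segmentations.logic()

# One segmentation node per mesh, then import each model into its node
refSeg = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode", "ReferenceSeg")
defSeg = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode", "DeformedSeg")
segLogic.ImportModelToSegmentationNode(refModel, refSeg)
segLogic.ImportModelToSegmentationNode(defModel, defSeg)
```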
I also intend to apply this transform to the reference 3D-CT volume. How can I accomplish this?
If feasible, could you kindly specify the steps to take?