I used Elastix (in Matlab) to register two images and obtain the displacement field. I can import this displacement field into Slicer as a Transform. Now I have two questions:
It seems that Slicer then automatically computes and imports the inverted transform. Is this true, and if so, why does it do this?
In the Transform module you can easily invert a transform with the ‘invert’ option. I am wondering how this works, because I understood that inverting a nonlinear displacement field can be quite challenging and is often an iterative process. But Slicer can do this within a few seconds, so I am wondering if someone could explain to me (the basics of) how this inverted transform is computed.
Thank you.
Janneke
P.S. I already read the following: Inverting Elastix transform. So I understand that the fixed image is the ‘parent’ and the moving image is the ‘child’, and that elastix computes the transform from parent (fixed) to child (moving). Thus, the inverted transform is again from child (moving) to parent (fixed).
Slicer does not invert the transform. ITK computes the inverse transform, because this is what you need for transforming images (resampling transform).
It may sound strange at first, but for transforming objects between two coordinate systems you need a different transform depending on what kind of data you work with. For transforming surface meshes, points, curves, etc. you need the forward (modeling) transform, while for transforming images you need the inverse (resampling) transform.
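To make the distinction concrete, here is a minimal sketch (plain NumPy/SciPy, with made-up function and variable names) of the resampling direction: to fill an output image defined on the fixed grid, every fixed-space voxel is pulled back into moving-image space, which is why image resampling needs the fixed-to-moving (resampling) transform. A point or mesh vertex, by contrast, is simply pushed through the forward transform. Real displacement fields are defined in physical (mm) coordinates and the toolkits handle those conversions; this sketch works directly in voxel coordinates to keep it short.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample_moving_to_fixed(moving, displacement_fixed_to_moving):
    """moving: 3D array; displacement_fixed_to_moving: (3, *fixed_shape) array
    giving, for every fixed-space voxel, the voxel offset into moving space."""
    fixed_shape = displacement_fixed_to_moving.shape[1:]
    # Coordinates of every voxel of the output (fixed-space) image
    grid = np.indices(fixed_shape).astype(float)
    # Pull: fixed voxel -> corresponding location in the moving image
    sample_coords = grid + displacement_fixed_to_moving
    # Interpolate the moving image at those locations
    return map_coordinates(moving, sample_coords, order=1)
```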
Slicer needs to transform various kinds of objects, so it always needs both the forward and the inverse transform for every transform (including non-linear transforms, even when many of them are chained together). Therefore, transform nodes in Slicer are implemented so that they can store both a “to parent” and a “from parent” transform, and if only one is specified the other is computed dynamically. Dynamic inverse computation of composite non-linear transformation chains is implemented in VTK by David Gobbi; you can find more details about the numerical methods used in this paper.
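Here is a small, self-contained VTK Python sketch (not Slicer-specific; the grid size and displacement values are made up purely for illustration) of this dynamic inverse: a nonlinear vtkGridTransform is evaluated in the forward direction, and GetInverse() gives a transform whose values are computed on demand, point by point, rather than by inverting the whole field up front. This is the same VTK machinery Slicer’s transform nodes rely on when only one of the “to parent”/“from parent” transforms is set.

```python
import math
import vtk

# Build a small synthetic displacement field as a vtkImageData with a
# 3-component float array, which is what vtkGridTransform expects.
grid = vtk.vtkImageData()
grid.SetDimensions(10, 10, 10)
grid.SetSpacing(5.0, 5.0, 5.0)
grid.AllocateScalars(vtk.VTK_FLOAT, 3)
for k in range(10):
    for j in range(10):
        for i in range(10):
            # smooth, made-up displacement along the first axis only
            grid.SetScalarComponentFromFloat(i, j, k, 0, 2.0 * math.sin(i / 3.0))
            grid.SetScalarComponentFromFloat(i, j, k, 1, 0.0)
            grid.SetScalarComponentFromFloat(i, j, k, 2, 0.0)

# Forward (warp) transform defined by the displacement grid
transform = vtk.vtkGridTransform()
transform.SetDisplacementGridData(grid)
transform.SetInterpolationModeToLinear()

p = (12.0, 20.0, 15.0)
q = transform.TransformPoint(p)                     # forward evaluation
p_back = transform.GetInverse().TransformPoint(q)   # inverse, evaluated on demand
print(q, p_back)                                    # p_back is close to p
```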
If you work in an environment where VTK transforms are not available and you only have ITK, then the best you can do is compute static inverse transforms over some predefined volume, which is of course very slow and very inflexible.
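For completeness, if you do need such a static inverse in an ITK-only workflow, one option (shown here with SimpleITK’s InvertDisplacementFieldImageFilter via its procedural wrapper; the file names are placeholders) is to invert the displacement field over its own grid and save the result:

```python
import SimpleITK as sitk

# Displacement field written by elastix/transformix (fixed -> moving);
# the file name is just a placeholder.
disp = sitk.ReadImage("deformationField.mhd", sitk.sitkVectorFloat64)

# Iteratively compute the inverse field, sampled on the same grid as the input.
# The filter also exposes iteration-count and error-tolerance parameters.
inverse_disp = sitk.InvertDisplacementField(disp)

sitk.WriteImage(inverse_disp, "inverseDeformationField.mhd")
```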
Thank you for the information! This helps me out a lot.
I understand that ITK computes the inverse transform (from fixed to moving) because of image resampling. But when I import this ‘inverse’ transform into Slicer, the transform seems to be automatically inverted again (to get the transform from moving to fixed). Then I can directly apply this transform to e.g. a mesh in the moving space.
So I was wondering why Slicer automatically inverts the transform back. When I check the transform after importing, it says: “Transform to parent: Computed by inverting the transform from parent”.
As I understand it now, Slicer thus automatically computes the inverse because it always wants both “to parent” and “from parent” transforms. And as I understood from the paper, it computes the inverse with an iterative process (Newton’s method). So it does this immediately while importing the transform, within a few seconds?
When Slicer reads a grid transform, it assumes that it is meant for transforming images (as it was most likely created by image registration), and therefore it marks it as “from parent”.
Inversion is performed on the fly, whenever it is needed, on specific data sets. For example, if you apply a non-linear transform to a volume that is displayed in a slice viewer, we don’t invert the transform over the entire volume (that would be very slow); instead we set it in the image reslice filter that extracts the slice. Or, if you want to transform landmarks, the inverse only needs to be computed at those landmark locations, making the computation about 6 orders of magnitude faster (and a bit more accurate) than inverting an entire field and applying that to the mesh.
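To give an idea of what such a point-wise inversion looks like, here is a toy pure-Python illustration (the displacement function and the landmark are made up, and it uses a simple fixed-point update, whereas the actual VTK implementation uses the Newton-type iteration described in the paper): the inverse is found only at the one point where it is needed, which is why transforming a handful of landmarks is so much cheaper than inverting the whole field.

```python
import numpy as np

def invert_at_point(forward_displacement, q, iters=20, tol=1e-6):
    """Find p such that p + forward_displacement(p) == q, i.e. evaluate the
    inverse transform at the single point q, by fixed-point iteration."""
    p = np.asarray(q, dtype=float)           # initial guess: the target point
    for _ in range(iters):
        residual = p + forward_displacement(p) - q
        if np.linalg.norm(residual) < tol:
            break
        p = p - residual                      # fixed-point update
    return p

def forward_displacement(p):
    # Hypothetical smooth displacement field, used only for illustration
    return np.array([2.0 * np.sin(p[1] / 10.0), 0.0, 0.0])

q = np.array([15.0, 20.0, 5.0])               # landmark in the "parent" space
p = invert_at_point(forward_displacement, q)  # inverse evaluated only at q
print(p, p + forward_displacement(p))         # second value is ~q
```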