Recovering or creating a large nonlinear inverse transform from an ANTs registration

I was recently trying to understand what was happening under the hood in a similar situation, see this post for some potentially helpful information: Interpreting ANTs registration warp field directionality - #4 by mikebind

Note that I used the “General Registration (ANTs)” module rather than the “SlicerANTsPy” module (out of inertia rather than considered preference; a namespace conflict meant that it was not possible to have both extensions installed simultaneously).

In my experience, however, inverting a nonlinear transform calculated via ANTs has worked well for aligning a fixed volume to a moving volume, except sometimes near the edges of the image. Transform inversion appears instantaneous in Slicer because all that happens at that moment is a flag flip on the component transforms, which essentially says “use the inverse of the stored transform”; the inverse is not actually calculated then. Instead, when you display a slice view through the inverted transform, only the pixel locations on that slice need their inverses computed, and that set of points is small enough that the calculation still feels instantaneous. Slicer is set up very cleverly in that way: the calculations are only performed when they are needed for display.
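The point-by-point style of inversion can be illustrated with a toy sketch (plain Python, not Slicer’s actual implementation): given a forward displacement transform T(x) = x + d(x), the inverse at a single queried point y can be found by fixed-point iteration, so only the points you actually need (e.g. the pixels on the displayed slice) are ever inverted. The displacement function here is made up for illustration.

```python
import math

def displacement(x):
    """Hypothetical smooth 1D forward displacement d(x); T(x) = x + d(x)."""
    return 0.3 * math.sin(x)

def forward(x):
    """Apply the forward transform."""
    return x + displacement(x)

def inverse(y, iters=50):
    """Invert T at a single point y by fixed-point iteration:
    find x with x + d(x) = y, i.e. iterate x <- y - d(x).
    Converges because |d'| < 1 for this displacement."""
    x = y
    for _ in range(iters):
        x = y - displacement(x)
    return x

# Only the queried points get inverted -- cheap for one slice's worth of pixels.
pts = [0.0, 0.5, 1.0, 2.0]
recovered = [inverse(forward(p)) for p in pts]
```

This also hints at why edges can misbehave: near the boundary of the grid on which the displacement is defined, the iteration may query points where the forward transform is extrapolated rather than known.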

To answer a different question you asked, yes, Slicer automatically reverses the order of application of the inverses of the rigid, affine, and grid transforms when you invert a composite transform.
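For intuition, the order reversal is just the usual identity (A ∘ B)⁻¹ = B⁻¹ ∘ A⁻¹, which a minimal sketch with 2×2 matrices (plain Python, no Slicer APIs) makes concrete:

```python
def matmul(A, B):
    """2x2 matrix product; A @ B applies B first, then A (column-vector convention)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [0.0, 1.0]]   # stand-in for one component (e.g. affine)
B = [[1.0, 0.0], [3.0, 1.0]]   # stand-in for another component

AB = matmul(A, B)                            # composite: apply B, then A
inv_composite = inv2(AB)
reversed_order = matmul(inv2(B), inv2(A))    # B^-1 then A^-1 -- matches inv_composite
wrong_order = matmul(inv2(A), inv2(B))       # A^-1 then B^-1 -- does not
```

Slicer handles this reversal for you when you invert a composite transform, so you never have to reorder the components manually.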

It is also possible that you are encountering a recently fixed bug where cloning of a composite transform did not preserve the order of application of component transforms, see here: Cloning a composite transform yields incorrect result

If that isn’t the problem, then I would suggest considering the domains over which your transforms are defined (see the pdf notes from the first link above). If the moving image and fixed image have different spatial extents, then the grids over which the forward and inverse transforms are sensibly defined may differ. Alternatively, if you just want a transform that works in each direction (and don’t need them to be more or less exact inverses of each other), you can run the registration twice in General Registration (ANTs), or it looks like you could run it once in SlicerANTsPy and set it to generate both output directions.