This is due to ITK’s limited support for inverted transforms. You often need a transform and the inverse of that transform, but ITK lacks two features:
1. Save a transform and indicate with a flag that it is an inverse transform.
2. Compute the inverse of a transform dynamically, at any position.
We addressed (1) in Slicer by adding special “inverse” transform types (such as InverseThinPlateSplineKernelTransform). These transforms are only used to allow reading/writing inverse transforms in ITK files. If you try to apply one of these inverse transform types to an image, you get an error (“Only storage methods are implemented for InverseThinPlateSplineKernelTransform”).
We addressed (2) in Slicer by using VTK to apply transforms to images, meshes, point lists, etc. Unlike ITK, VTK can compute the inverse of a transform at any position, in real time, very robustly and efficiently (and it can even do this for composite transforms, i.e., a list of concatenated transforms).
If you want to do everything in ITK then you have to implement inverse transform support yourself.
One option is to not use the Inverse...Transform... classes but always store the “forward” transform, and when you need the inverse, compute it with InverseDisplacementFieldImageFilter. It will be very slow and brittle, but if you only do batch processing then it may be acceptable.
Another option is to do something similar to what Slicer does (incorporate a transform-direction flag in transform files and/or use VTK transforms). You can either reimplement these features or use Slicer’s implementation. Since Slicer can do everything out of the box, using Slicer would probably be the easiest solution, but of course it would add a dependency to your processing workflow.
My dataset is quite huge (1 µm resolution for the full mouse brain vasculature). Thus, I cannot convert my NumPy array of shape (N, 3) (representing the X, Y, and Z Cartesian 3D coordinates of the nodes) to a matrix of shape (30000, 40000, 65000) as in the script you provided to me.
Is there a Slicer utility function that allows creating (or filling) a volume node from 3D points, like vtkPoints for example?
import vtk
vpoints = vtk.vtkPoints()
for i in range(points.shape[0]):
    vpoints.InsertNextPoint(points[i])
vpoly = vtk.vtkPolyData()
vpoly.SetPoints(vpoints)
To work with this data as a volume you could make a more manageably sized array, say 1k^3, and build a kind of histogram counting the number of sample points within each voxel. From there you could volume render or segment the result.
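A minimal NumPy sketch of that binning idea (the point coordinates, extent, and 64^3 grid here are made-up stand-ins; scale the bins up toward 1000^3 for the real data):

```python
import numpy as np

# Stand-in data: N points with coordinates in micrometers.
rng = np.random.default_rng(0)
points = rng.uniform(0, 1000, size=(100_000, 3))

# Count how many sample points fall into each voxel of a coarser grid.
bins = (64, 64, 64)
extent = [(0, 1000)] * 3
counts, _ = np.histogramdd(points, bins=bins, range=extent)
```

The resulting counts array can then be loaded as an image volume (for example with slicer.util.addVolumeFromArray in Slicer) for volume rendering or segmentation.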
Thank you for your answer. If I understand correctly, this solution is a workaround for working with a huge dataset, with a significant loss of precision due to the histogram binning. It is well noted and I will use it if I cannot find a way to apply the transform without loss of precision.
1/ If I split my “huge” volume into smaller volumes that fit in my memory and apply the full transform to every sub-volume, does merging all the sub-volumes after transformation give a correct result?
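To make the question concrete, here is a NumPy sketch of the point-list case, where each point transforms independently so the merged chunks match the one-shot result (the affine transform and sizes are arbitrary examples; resampled image chunks would additionally need overlap at the borders to avoid interpolation seams):

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(0, 100, size=(10_000, 3))

# Arbitrary example affine transform: matrix A and offset b.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
b = np.array([10.0, 20.0, 30.0])

def transform(p):
    return p @ A.T + b

# Full transform in one go vs. chunk-by-chunk, then merged.
full = transform(points)
chunked = np.concatenate([transform(c) for c in np.array_split(points, 8)])
```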
2/ ITK allows applying a transform to 3D points with the following method (itk::simple::Transform::TransformPoint).
Does a similar method exist in the MRML lib?