There are two transformation files. I managed to apply the first one to the vasculature thanks to the SimpleITK Python library.
I did not manage to do the same with the second transformation file.
This second transformation is of type ThinPlateSplineKernelTransform_double_3_3.
I have already read some topics on this forum about this issue, but none of the existing solutions seems to work in my case.
1/ The “standard” SimpleITK Python library (version 2.2.0rc2) cannot read the file:
"Could not create an instance of “ThinPlateSplineKernelTransform_double_3_3”.
The usual cause of this error is not registering the transform with TransformFactory …"
The SimpleITK Python package installed inside Slicer3D (Python interactor) can read it: it returns a SimpleITK.SimpleITK.CompositeTransform. However, the transform cannot be applied:
" Only storage methods are implemented for InverseThinPlateSplineKernelTransform"
2/ I managed to invert this transformation using the Slicer3D Transforms module and save it. I can then open it with the Python interactor, but I cannot obtain the inverse (which is what I need):
“sitk::ERROR: Unable to create inverse!”
3/ I cannot load the vasculature into Slicer3D to apply the transformation there, because the dataset is huge (> 20 GB).
4/ I tried to convert it to a displacement field, but without success.
Is there something else that I can try? Thank you for your help, Jean
This is due to ITK’s limited support for inverted transforms. You often need a transform and the inverse of that transform, but ITK lacks two features:
Save a transform and indicate with a flag that it is an inverse transform.
Compute the inverse of a transform dynamically, at any position.
We addressed (1) in Slicer by adding special “inverse” transform types (such as InverseThinPlateSplineKernelTransform). These transforms are only used to allow reading/writing inverse transforms in ITK transform files. If you try to apply these inverse transform types to an image, you get an error (Only storage methods are implemented for InverseThinPlateSplineKernelTransform).
We addressed (2) in Slicer by using VTK to apply transforms to images, meshes, point lists, etc. Unlike ITK, VTK can compute the inverse of a transform at any position, in real time, very robustly and efficiently (and it can even do it for composite transforms, i.e., a list of concatenated transforms), as sketched below.
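For illustration, here is a minimal sketch of VTK evaluating a thin-plate spline inverse on the fly (the landmark coordinates below are made up just for this example):

import vtk

# Corresponding landmark sets defining the thin-plate spline warp
sourcePoints = vtk.vtkPoints()
targetPoints = vtk.vtkPoints()
for src, dst in [((0, 0, 0), (1, 0, 0)),
                 ((10, 0, 0), (11, 1, 0)),
                 ((0, 10, 0), (0, 11, 1)),
                 ((0, 0, 10), (1, 0, 11))]:
    sourcePoints.InsertNextPoint(src)
    targetPoints.InsertNextPoint(dst)

tps = vtk.vtkThinPlateSplineTransform()
tps.SetSourceLandmarks(sourcePoints)
tps.SetTargetLandmarks(targetPoints)
tps.SetBasisToR()  # R basis is the usual choice for 3D thin-plate splines

# The inverse is not stored anywhere; it is computed iteratively at each queried point
inverse = tps.GetInverse()
print(inverse.TransformPoint((5.0, 5.0, 5.0)))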
If you want to do everything in ITK then you have to implement inverse transform support yourself.
One option is to not use the Inverse...Transform... classes, but always store the “forward” transform and, if you need an inverse transform, compute it using InverseDisplacementFieldImageFilter. It will be very slow and brittle, but if you only do batch processing then it may be acceptable (see the sketch below).
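A rough sketch of this approach, using SimpleITK's procedural equivalents (TransformToDisplacementField and InvertDisplacementField) rather than the ITK filter named above; the reference grid and the forward transform here are placeholders for your actual data:

import SimpleITK as sitk

# In practice the forward transform would be read from file, e.g. sitk.ReadTransform(path)
forward = sitk.TranslationTransform(3, (2.0, -1.0, 0.5))  # placeholder forward transform

# Geometry of the grid on which the displacement field is sampled
referenceImage = sitk.Image(64, 64, 64, sitk.sitkUInt8)

# Sample the forward transform into a dense displacement field ...
dispField = sitk.TransformToDisplacementField(
    forward,
    sitk.sitkVectorFloat64,
    referenceImage.GetSize(),
    referenceImage.GetOrigin(),
    referenceImage.GetSpacing(),
    referenceImage.GetDirection(),
)

# ... and invert it numerically (slow and memory-hungry for large grids)
inverseDispField = sitk.InvertDisplacementField(dispField)
inverseTransform = sitk.DisplacementFieldTransform(inverseDispField)
print(inverseTransform.TransformPoint((10.0, 10.0, 10.0)))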
Another option is to do something similar to what Slicer does (incorporate a transform direction flag in transform files and/or use VTK transforms). You can either reimplement these features or use Slicer's implementation. Since Slicer can do everything out of the box, using Slicer would probably be the easiest solution, but of course it would add a dependency to your processing workflow.
Using the Slicer Python API would be the perfect solution for me.
I have been searching for documentation, examples or tutorials on using the Slicer Python API for registration/transformation, without success.
Could you please provide me with a link to such material?
For information, the following is the SimpleITK equivalent of what I want to perform with Slicer:
import numpy as np
import SimpleITK

transform = SimpleITK.ReadTransform(path)
registered_vasculature_points = list()
for point in points_3d_array.astype(np.double):  # points_3d_array has shape (N, 3)
    registered_vasculature_points.append(transform.TransformPoint(point))
You can add a scalar volume node to the scene and then set its voxels from a numpy array using updateVolumeFromArray (see example in the script repository).
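For example, a minimal sketch (the array size here is arbitrary, just for illustration):

import numpy as np
import slicer

voxels = np.zeros((128, 128, 128), dtype=np.uint8)  # example array, indexed as (slice, row, column)
volumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode")
slicer.util.updateVolumeFromArray(volumeNode, voxels)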
My dataset is quite huge (1 µm resolution for the full mouse brain vasculature). Thus, I cannot convert my numpy array of shape (N, 3) (representing the X, Y and Z Cartesian 3D coordinates of each node) to a matrix of shape (30000, 40000, 65000) like in the script you provided to me.
Is there a Slicer utility function that allows creating (or filling) a VolumeNode from 3D points, like vtkPoints for example?
import vtk

vpoints = vtk.vtkPoints()
vpoints.SetNumberOfPoints(points.shape[0])
for i in range(points.shape[0]):
    vpoints.SetPoint(i, points[i])
vpoly = vtk.vtkPolyData()
vpoly.SetPoints(vpoints)
To work with this data as a volume you could make a more manageable sized array, say 1k^3, and make a kind of histogram of the number of sample points within a given voxel. From there you could volume render or segment the result.
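For example, a rough sketch (bin count and array names are placeholders; with enough memory you could use around 1000 bins per axis for a 1k^3 volume):

import numpy as np

points = np.random.rand(1000, 3) * 100.0  # placeholder for the real (N, 3) coordinate array

bins = 128
counts, edges = np.histogramdd(points, bins=(bins, bins, bins))
# counts[i, j, k] is the number of vasculature points falling into voxel (i, j, k);
# this coarse volume can then be pushed into Slicer with slicer.util.updateVolumeFromArray.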
Thank you for your answer. If I understand correctly, this solution is a workaround for working with a huge dataset, with a significant loss of precision due to the histogram. It is well noted, and I will use it if I cannot find a way to apply the transform without loss of precision.
1/ If I split my “huge” volume into smaller volumes that fit in memory and apply the full transform to each sub-volume, does merging all the sub-volumes after transformation give a correct result?
2/ ITK allows applying a transform to 3D points with the following method (itk::simple::Transform::TransformPoint).
Does a similar method exist in the MRML library?
Once you have created the scalar volume nodes, you will need a linear transform node to do the transformation and then harden it for each parcell… Regarding your question about transforming points, you can already do this in Slicer's Python interactor via vtkAbstractTransform::TransformPoint: https://vtk.org/doc/nightly/html/classvtkAbstractTransform.html#a978fb5d88da39f847dbcd562f5eaa433
using a vtkTransform and setting its matrixToParent.
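For example, a minimal sketch in the Slicer Python interactor (the node name "MyTransform" is just a placeholder for your own transform node):

import vtk
import slicer

transformNode = slicer.util.getNode("MyTransform")  # placeholder node name

# Get the (possibly non-linear, possibly composite) transform from the node
transformToWorld = vtk.vtkGeneralTransform()
transformNode.GetTransformToWorld(transformToWorld)

# Transform a single point; the opposite direction is available via
# GetTransformFromWorld or transformToWorld.GetInverse()
print(transformToWorld.TransformPoint([10.0, 20.0, 30.0]))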
Thanks for the update. You can also achieve what you described by loading your vasculature model into a model node and then hardening the transform on the model.
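Something along these lines (a rough sketch; the file paths are placeholders):

import slicer

modelNode = slicer.util.loadModel("/path/to/vasculature.vtk")       # placeholder path
transformNode = slicer.util.loadTransform("/path/to/transform.h5")  # placeholder path

# Apply the transform to the model node and harden it (bake it into the point coordinates)
modelNode.SetAndObserveTransformNodeID(transformNode.GetID())
slicer.vtkSlicerTransformLogic().hardenTransform(modelNode)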