ThinPlateSplineKernelTransform_double_3_3

The author of this paper (Brain microvasculature has a common topology with local differences in geometry that match metabolic load - ScienceDirect) published his data: a mouse brain vasculature and the transformation files needed to align the vasculature to an atlas.

There are 2 transformation files. I managed to apply the first one to the vasculature thanks to the SimpleITK Python library.

I did not manage to do the same with the second transformation file.

This second transformation is of type: ThinPlateSplineKernelTransform_double_3_3

I have already read some topics on this forum about this issue, but none of the existing solutions seems to work in my case.

1/ The “standard” SimpleITK Python library (version 2.2.0rc2) cannot read the file:

"Could not create an instance of “ThinPlateSplineKernelTransform_double_3_3”

The usual cause of this error is not registering the transform with TransformFactory … "

The SimpleITK Python package installed inside Slicer3D (Python interactor) can read it. It returns a SimpleITK.SimpleITK.CompositeTransform, but the transform cannot be applied:

" Only storage methods are implemented for InverseThinPlateSplineKernelTransform"

2/ I managed to invert this transformation thanks to the Slicer3D Transforms module and save it. I can then open it with the Python interactor, but I cannot obtain the inverse (which is what I need):

“sitk::ERROR: Unable to create inverse!”

3/ I cannot load the vasculature into Slicer3D to apply the transformation because the dataset is huge (> 20 GB).

4/ I tried to convert it to a displacement field, but without success.

Is there something else that I can try? Thank you for your help, Jean

This is due to ITK’s limited support for inverted transforms. You often need both a transform and its inverse, but ITK lacks two features:

  1. Save a transform and indicate with a flag that it is an inverse transform.
  2. Compute the inverse of a transform dynamically, at any position.

We addressed (1) in Slicer by adding special “inverse” transform types (such as InverseThinPlateSplineKernelTransform). These transform types are only used to allow reading/writing inverse transforms in ITK transform files. If you try to apply one of these inverse transform types to an image, you get an error (Only storage methods are implemented for InverseThinPlateSplineKernelTransform).

We addressed (2) in Slicer by using VTK to apply transforms to images, meshes, point lists, etc. Unlike ITK, VTK can compute the inverse of a transform at any position, in real time, very robustly and efficiently (and it can even do this for composite transforms, i.e., a list of concatenated transforms).
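For illustration, here is a minimal VTK sketch (not from the original post; the landmark coordinates are arbitrary placeholder values) of evaluating the inverse of a thin plate spline transform point by point:

    import vtk

    # Corresponding landmark pairs (placeholder values, roughly a +1 shift along X)
    source = vtk.vtkPoints()
    target = vtk.vtkPoints()
    landmark_pairs = [((0, 0, 0), (1, 0, 0)),
                      ((10, 0, 0), (11, 0, 0)),
                      ((0, 10, 0), (1, 10, 0)),
                      ((0, 0, 10), (1, 0, 10)),
                      ((10, 10, 10), (11, 10, 10))]
    for s, t in landmark_pairs:
        source.InsertNextPoint(s)
        target.InsertNextPoint(t)

    tps = vtk.vtkThinPlateSplineTransform()
    tps.SetBasisToR()  # radial basis appropriate for 3D warping
    tps.SetSourceLandmarks(source)
    tps.SetTargetLandmarks(target)

    # GetInverse() returns a transform that is inverted numerically at each
    # queried position, so no explicit inverse TPS has to be computed
    inverse = tps.GetInverse()
    print(tps.TransformPoint((0.0, 0.0, 0.0)))      # forward: approximately (1, 0, 0)
    print(inverse.TransformPoint((1.0, 0.0, 0.0)))  # inverse: approximately (0, 0, 0)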

If you want to do everything in ITK then you have to implement inverse transform support yourself.

One option is to not use the Inverse...Transform... classes, but always store the “forward” transform and, if you need an inverse transform, compute it using InverseDisplacementFieldImageFilter. It will be very slow and brittle, but if you only do batch processing then it may be acceptable.
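A rough sketch of that approach with SimpleITK (note: this uses SimpleITK’s InvertDisplacementField, a close relative of the ITK filter mentioned above; the file name and the reference grid are placeholders you would replace with values matching your data):

    import SimpleITK as sitk

    forward = sitk.ReadTransform("forward_transform.tfm")  # placeholder file name

    # Sample the forward transform into a dense displacement field on a chosen grid
    to_field = sitk.TransformToDisplacementFieldFilter()
    to_field.SetReferenceImage(sitk.Image(64, 64, 64, sitk.sitkUInt8))  # placeholder grid
    field = to_field.Execute(forward)

    # Numerically invert the displacement field (iterative, slow) and wrap it
    # back into a transform that can be applied to points or images
    inverse_field = sitk.InvertDisplacementField(field)
    inverse_transform = sitk.DisplacementFieldTransform(inverse_field)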

Another option is to do something similar to what Slicer does (incorporate a transform direction flag in the transform files and/or use VTK transforms). You can either reimplement these features or use Slicer’s implementation. Since Slicer can do everything out of the box, using Slicer would probably be the easiest solution, but of course it would add a dependency to your processing workflow.


Thank you for being so helpful. :pray:

Using the Slicer Python API would be the perfect solution for me.
I have been searching for documentation, examples, or tutorials on using the Slicer Python API for registration/transformation, without success.

Could you please provide me with a link to such material?
For information, the following is the SimpleITK equivalent of what I want to perform with Slicer.

import numpy as np
import SimpleITK

# Read the transform and apply it to every 3D point (array_3d has shape (N, 3))
transform = SimpleITK.ReadTransform(path)

registered_vasculature_points = list()
for point in array_3d.astype(np.double):
    registered_vasculature_points.append(transform.TransformPoint(tuple(point)))

You can call the ApplyTransform method of a node to persistently modify it with a transform (this corresponds to “Harden transform” in the Slicer GUI).
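For example, a short sketch for the Slicer Python interactor (file names are placeholders; hardening through the Transforms module logic is equivalent to the GUI button):

    import slicer

    # Load the data and the transform (placeholder file names)
    modelNode = slicer.util.loadModel("vasculature.vtk")
    transformNode = slicer.util.loadTransform("transform.tfm")

    # Observe the transform, then harden it (same as "Harden transform" in the GUI)
    modelNode.SetAndObserveTransformNodeID(transformNode.GetID())
    slicer.vtkSlicerTransformLogic().hardenTransform(modelNode)

    # Alternatively, ApplyTransform modifies the node in place with a VTK transform:
    # modelNode.ApplyTransform(transformNode.GetTransformToParent())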

What is the Python type of the node on which I can call ApplyTransform? vtkActor?

As @lassoan wrote, it is a method of an MRML node.


I succeeded in applying the transformation (and the inverse transform, by the way) to a vtkMRMLScalarVolumeNode (the MRHead from the sample data).

But I cannot figure out how to create a vtkMRMLScalarVolumeNode from my numpy array of 3D Cartesian coordinates of shape (N, 3).

Could you provide me with any tips or clues?

I managed to create vtkPoints and vtkPolyData objects from my numpy 3D array.

What do these coordinates represent exactly?

You can add a scalar volume node to the scene and then set its voxels from a numpy array using updateVolumeFromArray (see example in the script repository).
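A minimal sketch of that pattern (the array contents and shape are placeholders):

    import numpy as np
    import slicer

    voxels = np.zeros((128, 128, 128), dtype=np.uint8)  # placeholder voxel array

    volumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode")
    slicer.util.updateVolumeFromArray(volumeNode, voxels)
    volumeNode.CreateDefaultDisplayNodes()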

A mouse brain vasculature. The numpy array of shape (N, 3) represents the X, Y, and Z coordinates of the nodes that make up the vasculature.

My dataset is quite huge (1 µm resolution for the full mouse brain vasculature). Thus, I cannot convert my numpy array of shape (N, 3) (representing the X, Y, and Z Cartesian 3D coordinates of the nodes) into a matrix of shape (30000, 40000, 65000) as is done in the script you pointed me to.
Is there a Slicer utility function that allows creating (or filling) a volume node from 3D points, like vtkPoints for example?

    import vtk

    # points: numpy array of shape (N, 3)
    vpoints = vtk.vtkPoints()
    vpoints.SetNumberOfPoints(points.shape[0])
    for i in range(points.shape[0]):
        vpoints.SetPoint(i, points[i])
    vpoly = vtk.vtkPolyData()
    vpoly.SetPoints(vpoints)

To work with this data as a volume you could make a more manageably sized array, say 1k^3, and build a kind of histogram of the number of sample points within each voxel. From there you could volume render or segment the result.
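A rough sketch of that binning, assuming points is the (N, 3) coordinate array and the bin count is an arbitrary placeholder:

    import numpy as np

    bins = 512  # placeholder; pick a size whose volume fits in memory
    counts, edges = np.histogramdd(points, bins=(bins, bins, bins))

    # counts[i, j, k] = number of vasculature sample points falling in that voxel
    volume = counts.astype(np.uint16)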

Thank you for your answer. If I understand correctly, this solution is a workaround for working with a huge dataset, at the cost of a significant loss of precision due to the histogram binning. It is well noted, and I will use it if I cannot find a way to apply the transform without loss of precision.

1/ If I split my “huge” volume into smaller volumes that fit in memory and apply the full transform to every sub-volume, does merging all the sub-volumes after transformation give a correct result?

2/ ITK allows applying a transform to 3D points with the following method (itk::simple::Transform::TransformPoint).
Does a similar method exist in the MRML library?

Yes


Once you have created the scalar volume nodes, you will need a linear transform node to do the transformation and then harden it for each parcell… Regarding your question about transforming points, you can already do this in Slicer’s Python interactor:
https://vtk.org/doc/nightly/html/classvtkAbstractTransform.html#a978fb5d88da39f847dbcd562f5eaa433
using a vtkTransform and setting its MatrixToParent.
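For instance, a minimal sketch of transforming a point with a vtkTransform (the matrix entry is a placeholder translation, not a real registration):

    import vtk

    matrix = vtk.vtkMatrix4x4()
    matrix.SetElement(0, 3, 10.0)  # placeholder: translate 10 units along the first axis

    transform = vtk.vtkTransform()
    transform.SetMatrix(matrix)

    print(transform.TransformPoint((0.0, 0.0, 0.0)))  # -> (10.0, 0.0, 0.0)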


After testing so many methods and reading so much documentation, I finally found the solution to my issue.
And the solution is so simple…

Thank you very much lassoan, cpinter and mau_igna_06 for your time and your help.

The solution in a nutshell:

1/ Convert my 3D Cartesian coordinates to the RAS coordinate system used by Slicer/VTK

2/ Open my transformation file (the one that contains the ThinPlateSplineKernelTransform_double_3_3 transform) with the slicer.util.loadTransform() method

3/ Get the vtkThinPlateSplineTransform using the GetTransformFromParent() method

4/ Use the TransformPoint() method to apply the transform to each 3D point coordinate.
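A condensed sketch of these steps (the file name and the points_ras array are placeholders; the RAS conversion in step 1 depends on how the original coordinates are defined):

    import slicer

    # 2/ Load the ThinPlateSplineKernelTransform_double_3_3 transform file
    transformNode = slicer.util.loadTransform("atlas_transform.tfm")

    # 3/ Get the underlying vtkThinPlateSplineTransform
    tps = transformNode.GetTransformFromParent()

    # 4/ Apply it to every point (points_ras: (N, 3) array already converted to RAS)
    registered_points = [tps.TransformPoint(tuple(p)) for p in points_ras]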


Thanks for the update. You can also achieve what you described by loading your vasculature into a model node and then hardening the transform on the model.