SVD ModelToModel Registration Idea

Hi. I was just thinking about an algorithm to register the same bone across boneModels from different time points (that is, segmented from different CTs).
A frame for each bone can be created using the eigenvectors of the SVD (rotation part) and the meanOfPoints (translation part).
The outputs of the SVD may not respect the right-hand rule, so that needs to be checked, and there are a few more technical details to handle.
Then the registrationTransformFromFrame0ToFrame1 is Frame1ToWorld*WorldToFrame0.
With that transform you can register bone0 to bone1.
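For clarity, here is a minimal numpy sketch of that composition (the function name is just for illustration; the 4x4 frame matrices hold the eigenvectors as columns and the meanOfPoints as the last column):

import numpy as np

# Sketch: compose the registration transform from the two bone frames.
# frame0ToWorld/frame1ToWorld are 4x4 homogeneous matrices whose first three
# columns are the SVD eigenvectors and whose last column is the mean of points.
def registrationFromFrame0ToFrame1(frame0ToWorld, frame1ToWorld):
  worldToFrame0 = np.linalg.inv(frame0ToWorld)
  return frame1ToWorld @ worldToFrame0  # Frame1ToWorld * WorldToFrame0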

The algorithm uses statistical metrics, so it should work reasonably well with slightly noisy inputs.

Do you think this algorithm would be useful? I think it would take at most 15 seconds to execute, considering some technical details. The advantage is that it needs no user input.
Is this overkill for something that can be done with point-to-point registration?
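For reference, point-to-point registration here would be something like VTK's vtkLandmarkTransform, which needs user-placed corresponding points; a minimal sketch with made-up landmark coordinates:

import vtk

# Hypothetical corresponding landmark pairs (normally placed by the user)
sourcePoints = vtk.vtkPoints()  # landmarks on bone0
targetPoints = vtk.vtkPoints()  # matching landmarks on bone1
for s, t in [((0,0,0),(10,0,0)), ((0,10,0),(10,10,0)), ((0,0,10),(10,0,10))]:
  sourcePoints.InsertNextPoint(s)
  targetPoints.InsertNextPoint(t)

landmarkTransform = vtk.vtkLandmarkTransform()
landmarkTransform.SetSourceLandmarks(sourcePoints)
landmarkTransform.SetTargetLandmarks(targetPoints)
landmarkTransform.SetModeToRigidBody()  # rigid: rotation + translation only
landmarkTransform.Update()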

I could develop it if the community shows interest

Mauro

Yes, I think that would be useful. Note that some of the image registration methods have an option for initializing the optimization based on the moments, so that might be similar to what you propose (you may want to investigate).
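For example, in SimpleITK (bundled with Slicer) a moments-based initializer looks roughly like this (a sketch; the volume file names are hypothetical):

import SimpleITK as sitk

# Hypothetical input volumes; MOMENTS mode aligns the centers of mass,
# similar in spirit to using the meanOfPoints as the translation part
fixedImage = sitk.ReadImage('fixedCT.nrrd', sitk.sitkFloat32)
movingImage = sitk.ReadImage('movingCT.nrrd', sitk.sitkFloat32)
initialTransform = sitk.CenteredTransformInitializer(
  fixedImage, movingImage, sitk.Euler3DTransform(),
  sitk.CenteredTransformInitializerFilter.MOMENTS)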

If computing time is a concern you could probably just take a random sample of the model points and still get a good estimate.

Hi. Here is the script; you just need to change the input models and the numberOfSampledPoints to run the registration:

import numpy as np

#SVD Registration
#(meant to be run in the Slicer Python console, where vtk, slicer and getNode are already defined)

boneModel0 = getNode('fibula')
boneModel1 = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLModelNode',"fibulaMoved")
boneModel1.CreateDefaultDisplayNodes()
boneModel1.CopyContent(boneModel0)
boneModel1DisplayNode = boneModel1.GetDisplayNode()
color = boneModel0.GetDisplayNode().GetColor()
#permute the color channels so the moved copy is easy to distinguish
boneModel1DisplayNode.SetColor(color[2],color[0],color[1])

#Create random transformation matrix (to simulate a misaligned copy of the bone)
import random

axisOfRotation = np.zeros(3)
for i in range(len(axisOfRotation)):
  axisOfRotation[i] = random.random()

axisOfRotation = axisOfRotation/np.linalg.norm(axisOfRotation)

angleOfRotation = random.uniform(45, 315)

origin = np.zeros(3)
for i in range(len(origin)):
  origin[i] = random.uniform(50, 150)

randomTransform = vtk.vtkTransform()
randomTransform.PostMultiply()
randomTransform.RotateWXYZ(angleOfRotation,axisOfRotation)
randomTransform.Translate(origin)

#apply to boneModel1
transformer = vtk.vtkTransformFilter()
transformer.SetInputData(boneModel1.GetPolyData())
transformer.SetTransform(randomTransform)
transformer.Update()
boneModel1.SetAndObservePolyData(transformer.GetOutput())


maskPointsFilter = vtk.vtkMaskPoints()
maskPointsFilter.SetInputData(boneModel0.GetPolyData())

numberOfSampledPoints0 = 2000
#guard against meshes with fewer points than the target sample size
ratio0 = max(1, int(boneModel0.GetPolyData().GetNumberOfPoints()/numberOfSampledPoints0))

#This works but the sampling could be biased spatially I think
maskPointsFilter.SetOnRatio(ratio0)
maskPointsFilter.RandomModeOn()
maskPointsFilter.Update()

polydata0 = vtk.vtkPolyData()
polydata0.ShallowCopy(maskPointsFilter.GetOutput())


maskPointsFilter = vtk.vtkMaskPoints()
maskPointsFilter.SetInputData(boneModel1.GetPolyData())

numberOfSampledPoints1 = 2000
#guard against meshes with fewer points than the target sample size
ratio1 = max(1, int(boneModel1.GetPolyData().GetNumberOfPoints()/numberOfSampledPoints1))

#This works but the sampling could be biased spatially I think
maskPointsFilter.SetOnRatio(ratio1)
maskPointsFilter.RandomModeOn()
maskPointsFilter.Update()

polydata1 = vtk.vtkPolyData()
polydata1.ShallowCopy(maskPointsFilter.GetOutput())


#Calculate the SVD and mean
from vtk.util.numpy_support import vtk_to_numpy
model0Points = vtk_to_numpy(polydata0.GetPoints().GetData())
model1Points = vtk_to_numpy(polydata1.GetPoints().GetData())

# Calculate the mean of the points, i.e. the 'center' of the cloud
model0PointsMean = model0Points.mean(axis=0)
model1PointsMean = model1Points.mean(axis=0)

# Do an SVD of the mean-centered data; the rows of the third output are the
# principal directions (eigenvectors of the covariance matrix)
uu0, dd0, eigenvectors0 = np.linalg.svd(model0Points - model0PointsMean, full_matrices=False)
uu1, dd1, eigenvectors1 = np.linalg.svd(model1Points - model1PointsMean, full_matrices=False)

# Create a frame for boneModel0: Z is the cross product of the first two
# eigenvectors, which guarantees a right-handed frame
model0Z = np.zeros(3)
model0X = eigenvectors0[0]
model0Y = eigenvectors0[1]
vtk.vtkMath.Cross(model0X, model0Y, model0Z)
model0Z = model0Z/np.linalg.norm(model0Z)
model0Origin = model0PointsMean

def getAxes1ToWorldChangeOfFrameMatrix(axis1X,axis1Y,axis1Z,axisOrigin):
  #4x4 change-of-frame matrix: columns 0-2 hold the frame axes, column 3 the origin
  axes1ToWorldChangeOfFrameMatrix = vtk.vtkMatrix4x4()
  axes1ToWorldChangeOfFrameMatrix.Identity()
  axes1ToWorldChangeOfFrameMatrix.SetElement(0,0,axis1X[0])
  axes1ToWorldChangeOfFrameMatrix.SetElement(1,0,axis1X[1])
  axes1ToWorldChangeOfFrameMatrix.SetElement(2,0,axis1X[2])
  axes1ToWorldChangeOfFrameMatrix.SetElement(0,1,axis1Y[0])
  axes1ToWorldChangeOfFrameMatrix.SetElement(1,1,axis1Y[1])
  axes1ToWorldChangeOfFrameMatrix.SetElement(2,1,axis1Y[2])
  axes1ToWorldChangeOfFrameMatrix.SetElement(0,2,axis1Z[0])
  axes1ToWorldChangeOfFrameMatrix.SetElement(1,2,axis1Z[1])
  axes1ToWorldChangeOfFrameMatrix.SetElement(2,2,axis1Z[2])
  axes1ToWorldChangeOfFrameMatrix.SetElement(0,3,axisOrigin[0])
  axes1ToWorldChangeOfFrameMatrix.SetElement(1,3,axisOrigin[1])
  axes1ToWorldChangeOfFrameMatrix.SetElement(2,3,axisOrigin[2])
  return axes1ToWorldChangeOfFrameMatrix

model0ToWorldMatrix = getAxes1ToWorldChangeOfFrameMatrix(model0X,model0Y,model0Z,model0Origin)


# temporary frame for boneModel1 (the axis signs will be corrected below)
model1Z = np.zeros(3)
model1X = eigenvectors1[0]
model1Y = eigenvectors1[1]
vtk.vtkMath.Cross(model1X, model1Y, model1Z)
model1Z = model1Z/np.linalg.norm(model1Z)
model1Origin = model1PointsMean

temporalModel1ToWorldMatrix = getAxes1ToWorldChangeOfFrameMatrix(model1X,model1Y,model1Z,model1Origin)


def getChangeOfSignMatrix(i,j):
  #Flip the signs of the X and Y axes according to (i,j) and choose the Z sign
  #so the determinant stays +1 (the frame stays right-handed). This resolves
  #the sign ambiguity of the SVD eigenvectors.
  changeOfSignMatrix = vtk.vtkMatrix4x4()
  changeOfSignMatrix.Identity()
  changeOfSignMatrix.SetElement(0,0, 1 if i==0 else -1)
  changeOfSignMatrix.SetElement(1,1, 1 if j==0 else -1)
  changeOfSignMatrix.SetElement(2,2, 1 if i==j else -1)
  return changeOfSignMatrix


# Find the registration boneModel0ToBoneModel1Transform: try the four
# right-handed sign combinations and score each by mean closest-point distance

boneModel0ToBoneModel1TransformListWithScore = []
for i in range(2):
  for j in range(2):
    boneModel0ToBoneModel1Transform = vtk.vtkTransform()
    boneModel0ToBoneModel1Transform.PostMultiply()
    #
    worldToBoneModel0Matrix = vtk.vtkMatrix4x4()
    worldToBoneModel0Matrix.DeepCopy(model0ToWorldMatrix)
    worldToBoneModel0Matrix.Invert()
    #
    boneModel0ToBoneModel1Transform.Concatenate(worldToBoneModel0Matrix)
    #
    model1ToWorldTransform = vtk.vtkTransform()
    model1ToWorldTransform.PostMultiply()
    model1ToWorldTransform.Concatenate(getChangeOfSignMatrix(i,j))
    model1ToWorldTransform.Concatenate(temporalModel1ToWorldMatrix)
    #
    boneModel0ToBoneModel1Transform.Concatenate(model1ToWorldTransform)
    #
    #boneModel0ToBoneModel1Transformer
    bone0ToBone1Transformer = vtk.vtkTransformFilter()
    bone0ToBone1Transformer.SetInputData(polydata0)
    bone0ToBone1Transformer.SetTransform(boneModel0ToBoneModel1Transform)
    bone0ToBone1Transformer.Update()
    #
    transformedModel0Points_vtk = bone0ToBone1Transformer.GetOutput().GetPoints().GetData()
    transformedModel0Points = vtk_to_numpy(transformedModel0Points_vtk)
    #
    pointsLocator = vtk.vtkPointLocator()
    pointsLocator.SetDataSet(bone0ToBone1Transformer.GetOutput())
    pointsLocator.BuildLocator()
    #
    distanceList = []
    for k in range(len(model1Points)):
      closestPointOfTransformedBone0ID = pointsLocator.FindClosestPoint(model1Points[k])
      difference = model1Points[k] - transformedModel0Points[closestPointOfTransformedBone0ID]
      distanceBetweenPoints = np.linalg.norm(difference)
      distanceList.append(distanceBetweenPoints)
    #
    distanceArray = np.array(distanceList)
    meanDistance = distanceArray.mean(axis=0)
    #
    boneModel0ToBoneModel1TransformListWithScore.append([boneModel0ToBoneModel1Transform,meanDistance])


boneModel0ToBoneModel1TransformListWithScore.sort(key = lambda item : item[1])

bone0ToBone1RegistrationTransformNode = slicer.vtkMRMLLinearTransformNode()
bone0ToBone1RegistrationTransformNode.SetName("Bone0ToBone1RegistrationTransform")
slicer.mrmlScene.AddNode(bone0ToBone1RegistrationTransformNode)

bone0ToBone1RegistrationTransformNode.SetMatrixTransformToParent(boneModel0ToBoneModel1TransformListWithScore[0][0].GetMatrix())

boneModel0.SetAndObserveTransformNodeID(bone0ToBone1RegistrationTransformNode.GetID())

The registration would work better if the sampled points were evenly distributed over the input meshes. The random selection of points makes it possible for ‘clusters’ of points to appear on some part of the bone. Although it is unlikely that these ‘clusters’ will be very dense, they can still make the registration less effective.

Maybe one way to solve this would be to sort the pointIds by geodesic distance to some seed point; that way you could (maybe) just pick every nth point and get an array of evenly distributed points on the mesh.
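A rough sketch of that idea (untested; it assumes vtkDijkstraGraphGeodesicPath's GetCumulativeWeights returns the geodesic distance from the seed to every vertex):

import numpy as np
import vtk
from vtk.util.numpy_support import vtk_to_numpy

def evenlySpacedPointIds(mesh, seedPointId, numberOfSampledPoints):
  dijkstra = vtk.vtkDijkstraGraphGeodesicPath()
  dijkstra.SetInputData(mesh)
  dijkstra.SetStartVertex(seedPointId)
  dijkstra.SetEndVertex(0)
  dijkstra.StopWhenEndReachedOff()  # traverse the whole mesh
  dijkstra.Update()
  cumulativeWeights = vtk.vtkDoubleArray()
  dijkstra.GetCumulativeWeights(cumulativeWeights)  # per-vertex distance from seed
  order = np.argsort(vtk_to_numpy(cumulativeWeights))
  step = max(1, len(order) // numberOfSampledPoints)
  return order[::step]  # every nth point id along the geodesic ordering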


Using moment- or key-feature-point-based initial alignment before robust ICP is quite a common pattern.

@muratmaga would like to get a similar method ported from Open3D to VTK. See some more details here.

You can also have a look at CloudCompare’s mesh registration, and if you find methods that work well for you then port those to VTK/Slicer. CloudCompare is GPL, so you cannot use anything from it as is, but it is good for testing an approach before spending time implementing it (or searching for existing non-restricted implementations).


Yes, this sounds very much like how we use Open3D to generate, orient, and register point clouds derived from models for ALPACA. There are certain issues with using Open3D within Slicer (mostly its very quickly changing API and the instability of the code base), so we would like to see similar methods implemented in VTK. As @lassoan pointed out, the proposal and discussion are summarized here.

If possible, it will be great to join forces, as this will require some funding and development effort.

I find it interesting to work with transforms. Please let me know if there is any work proposal.

@muratmaga, @lassoan, @pieper: did any of you test the script? It would be great if you could give feedback.

Is this meant to be run on the preview version? I am getting a bunch of errors in stable.

It works in both the stable and the preview versions (2021-10-12).

You just need to define the 2 boneModels with getNode.

You may need to decrease the numberOfSampledPoints if your data has very few points.

You should replace all the code above this line:

boneModel1.SetAndObservePolyData(transformer.GetOutput())

That line should also be deleted. Replace it all with:

boneModel0 = getNode('movingBone')
boneModel1 = getNode('fixedBone')

And it should work well.

The UNIFORM_SPATIAL_SURFACE option of the maskPoints filter may make the registration algorithm work a little better. I cannot test that, though, because it is only available in VTK 9, so it will have to wait.
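Something like this should do it once VTK 9 is available (a sketch I could not test; it assumes vtkMaskPoints exposes the UNIFORM_SPATIAL_SURFACE constant and SetMaximumNumberOfPoints as described in the VTK 9 docs):

#replaces the SetOnRatio-based sampling blocks above
maskPointsFilter = vtk.vtkMaskPoints()
maskPointsFilter.SetInputData(boneModel0.GetPolyData())
maskPointsFilter.SetRandomModeType(vtk.vtkMaskPoints.UNIFORM_SPATIAL_SURFACE)
maskPointsFilter.SetMaximumNumberOfPoints(numberOfSampledPoints0)
maskPointsFilter.RandomModeOn()
maskPointsFilter.Update()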

I have a scene that looks like this; this is where it starts to fail:

>>> def getAxes1ToWorldChangeOfFrameMatrix(axis1X,axis1Y,axis1Z,axisOrigin):
...   axes1ToWorldChangeOfFrameMatrix = vtk.vtkMatrix4x4()
... axes1ToWorldChangeOfFrameMatrix.DeepCopy((1, 0, 0, 0,
  File "<console>", line 3
    axes1ToWorldChangeOfFrameMatrix.DeepCopy((1, 0, 0, 0,
                                  ^
SyntaxError: invalid syntax


Check if the indentation level is correct. Those lines go inside a function, so they should have 2 spaces of indentation. I think your third line doesn’t have the correct indentation; I could replicate your problem by giving the third line no indentation.

The code I posted has correct indentation and works well when copy-pasted into Slicer on my PC. I don’t know why you had this problem.

Let me know if you could get the script to work.

I copy/pasted it too… I had this problem before; I think there is some sort of whitespace conversion happening on different platforms.

Here is the code. This an example of registration by SVD decomposition of the data. Useful to register very similar bones · GitHub
Let me know if it works for you.

Thanks @mau_igna_06 for sharing the code. I used a copy of the same model; these are the results.


Thanks Manjula for being a tester.

The interesting thing to test next with this registration algorithm is two segmented bones of the same patient, obtained by segmenting two CTs taken on different dates.

I modified the script to also register linear scaling differences between the inputs: Register two similar bones that has diffences in position, orientation and size (scaling) · GitHub


This script is useful. It would be great if you could make it available as a module so that users can try it more easily. Maybe you could add it to the Model Registration module in SlicerIGT extension.


I could do that, although I would prefer that someone test this algorithm with real data first.

Maybe change the title to “New registration method: SVD Registration”

I tested with two long bones (fibulas) of different patients. It doesn’t work so well. I only found the non-scaling version of the algorithm useful (the scaling version makes the transformed mesh worse), and the only utility I found was to make the two bones share an axis (the direction of the first eigenvector of bone1), which should be near the anatomical axis direction, and to make the centroids of the two bones coincide.

The problem is that the algorithm depends on the accuracy of the created frames. The frames are created by finding the directions of most variation in the sampled bone points and using the centroids as origins. Most bones have one principal axis of variation (or maybe symmetry) that is anatomically stable (repeatable across different animals of the same species), but I don’t think it is usual for there to be two. So the direction of the second vector of the frame is unstable, and this leaves the movedBone rotated relative to the fixedBone around the fixedBone’s first eigenvector.

Maybe the rotation problem could be tackled iteratively, since you know the axis of rotation. I don’t know.
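One cheap way to try it (a sketch with hypothetical helper names): sweep candidate rotations about the known axis through the fixed bone’s centroid and keep the best one, reusing the mean closest-point distance from the script as the score:

import numpy as np
import vtk

def bestRotationAboutAxis(initialTransform, axis, center, scoreTransform, steps=72):
  #scoreTransform is assumed to return e.g. the mean closest-point distance
  bestTransform, bestScore = None, np.inf
  for angle in np.linspace(0, 360, steps, endpoint=False):
    candidate = vtk.vtkTransform()
    candidate.PostMultiply()
    candidate.Concatenate(initialTransform)
    candidate.Translate(-center[0], -center[1], -center[2])
    candidate.RotateWXYZ(angle, axis)  # rotate about the known axis
    candidate.Translate(center)
    score = scoreTransform(candidate)
    if score < bestScore:
      bestTransform, bestScore = candidate, score
  return bestTransform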

I think canonical correlation could be used (and it might give slightly better results). I’m not sure whether this would avoid the problems I mentioned in the earlier comment.

Maybe with this implementation, but I’m not sure how to do that matrix multiplication using the transform filter.

I found an algorithm that computes a similarity matrix (rotation, translation and scaling).

How do you recommend trying it, @lassoan?
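In case it is the classic Umeyama/Horn similarity estimate for corresponded point sets (a guess on my part, not necessarily the algorithm referenced above), a numpy sketch looks like this; the resulting 4x4 could then be loaded into a vtkMRMLLinearTransformNode:

import numpy as np

# Standard Umeyama estimate: find scale c, rotation R and translation t
# minimizing ||target - (c*R*source + t)|| for corresponded point sets
def similarityTransform(sourcePoints, targetPoints):
  muS, muT = sourcePoints.mean(axis=0), targetPoints.mean(axis=0)
  srcCentered, tgtCentered = sourcePoints - muS, targetPoints - muT
  covariance = tgtCentered.T @ srcCentered / len(sourcePoints)
  U, D, Vt = np.linalg.svd(covariance)
  S = np.eye(3)
  if np.linalg.det(U) * np.linalg.det(Vt) < 0:
    S[2, 2] = -1  # keep a proper (right-handed) rotation
  R = U @ S @ Vt
  varianceSource = (srcCentered ** 2).sum() / len(sourcePoints)
  c = np.trace(np.diag(D) @ S) / varianceSource
  t = muT - c * R @ muS
  matrix = np.eye(4)
  matrix[:3, :3] = c * R
  matrix[:3, 3] = t
  return matrix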