Interpreting Model to Model Distance results

I have a mouse mandible and corresponding landmarks. I cloned the landmark set and moved only one landmark quite a bit (the pink landmark is the original; I moved its copy to the tip of the incisor). Then I created a warping transform using the Fiducial Registration Wizard, applied this transform to a clone of the original model, and hardened it. Then I used Model to Model Distance to visualize the distances. In the first pass the source model was the deformed mandible, the target was the original, and the distance metric was set to signed_closest_point. After setting the scalar value range, I got a meaningful representation (i.e., one that matches the transformation I applied; top image). When I swap them, so the source is the original and the target is the deformed model, the calculated values don't look similar (bottom image). In both cases I constrained the display range of scalar values of the resultant model to -10 to 10.

Why are the heat maps so different? Or when such transformation is unknown, how does one decide what is the source model and what is the target?

It would probably help to have a small example dataset with only a few fiducials so this can be reproduced exactly. The calculation should be symmetric and not care which model is the source and which is the target.

I believe Model to Model Distance uses vtkDistancePolyDataFilter, which calculates the distance from each point x on the source to the nearest point p on the target. I wouldn't necessarily expect that to be symmetric, unless I am missing something?
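A quick way to see the asymmetry is to compute closest-point distances between two small point sets by hand. The sketch below uses only NumPy with made-up 2D coordinates (it is not the actual VTK filter, which works on surfaces, but the directed-distance logic is the same):

```python
import numpy as np

def closest_point_distances(source, target):
    """For each source point, distance to its nearest target point."""
    # pairwise distance matrix, shape (n_source, n_target)
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    return d.min(axis=1)

# Two point sets that differ in a single point (index 0 moved far away),
# mimicking the one displaced landmark in the example above.
original = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
deformed = np.array([[5.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

d_ab = closest_point_distances(deformed, original)  # deformed -> original
d_ba = closest_point_distances(original, deformed)  # original -> deformed

print(d_ab)                     # [4. 0. 0.]
print(d_ba)                     # [1. 0. 0.]
print(np.allclose(d_ab, d_ba))  # False: the directed distances differ
```

The moved point is 4 units from its nearest original neighbor, but the original point it left behind is only 1 unit from some other deformed point, so the two directions give different fields.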


Here it is:

A scene with a single model and two landmark sets that differ only in the position of the first landmark. I didn't add the other variations so as not to inflate the scene size, but they take only a few clicks to replicate.


In this particular case, where the two models have the exact same vertex count, using the corresponding_point_to_point distance metric gives symmetrical results:


By default, Model to Model Distance computes signed_closest_point. It does not rely on point correspondence, just on point-to-surface distance, so it does not matter whether the vertex counts are the same or not, and the operation is not commutative.

If you warp a model, so that points in the two models correspond to each other, then it may make sense to compute the point-to-point distance, using the corresponding_point_to_point method.
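When the two models do share vertex correspondence, the point-to-point distances are symmetric in magnitude regardless of which model is source and which is target. A minimal NumPy sketch (made-up coordinates, not the module's actual code):

```python
import numpy as np

# Same vertex count and ordering, as for a cloned-and-warped model:
# only vertex 0 has moved.
original = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
deformed = np.array([[5.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

# corresponding point-to-point: distance between vertex i and vertex i
d_ab = np.linalg.norm(deformed - original, axis=1)
d_ba = np.linalg.norm(original - deformed, axis=1)

print(d_ab)                     # [5. 0. 0.]
print(np.allclose(d_ab, d_ba))  # True: swapping source and target changes nothing
```

For a signed variant only the sign would flip when the roles are swapped; the magnitudes stay identical, which is why the heat maps match in this mode.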

The implementation of these metrics is very straightforward; you can find it here:

How can we warp a model so that its points are in correspondence with the base model?

If you clone a model and apply the warp to the cloned one, both models will have the same number of vertices and their points will correspond.

You can easily create such a warp by cloning an existing set of landmarks, shifting one drastically in the cloned set, and then using the Fiducial Registration Wizard from the SlicerIGT extension.
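The warping transform that the Fiducial Registration Wizard produces is a thin-plate spline driven by the landmark pairs. As a rough stand-in outside Slicer, the same idea can be sketched with SciPy's `RBFInterpolator` (hypothetical coordinates; this is not Slicer's actual transform machinery):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Original landmarks and a clone with only the first landmark moved.
src_landmarks = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
dst_landmarks = src_landmarks.copy()
dst_landmarks[0] = [5., 5., 0.]          # the drastically shifted landmark

# Thin-plate-spline displacement field defined by the landmark pairs
# (exact interpolation at the landmarks, smooth in between).
warp = RBFInterpolator(src_landmarks, dst_landmarks - src_landmarks,
                       kernel='thin_plate_spline')

# Apply the warp to (cloned) model vertices by adding the interpolated displacement.
model_points = np.array([[0., 0., 0.], [9., 1., 0.], [1., 9., 1.]])
warped_points = model_points + warp(model_points)

print(warped_points[0])  # the vertex sitting on the moved landmark follows it exactly
```

Because the two models then share vertex order, corresponding_point_to_point becomes meaningful between them.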

Thank you, Muratmaga!