Operating system: Windows 11 Home
Slicer version: 5.0.2
Hello everybody,
I’m having trouble with the ‘Transforms’ module in 3D Slicer. My goal is to compare the transform matrix obtained from a “Landmarks Registration” with the one obtained from a manual registration, in terms of Euler angles and translations.
However, these values are not consistent with the visual alignment, because the transform matrix, as I read in the documentation, is composed of concatenated rotations, etc.
So, is there a way to obtain the “real” transform matrix (i.e. the overall Euler angles and translations needed to align the two volumes) from the one shown in 3D Slicer?
It’s not clear what you mean by “real” here, since Euler angles are not uniquely defined for any given rotation matrix. The Transform matrix is the ‘real’ transform. If you want the pure rotation, you could decompose the rotation matrix into a unit quaternion.
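For example, a minimal sketch of that decomposition (plain Python with numpy and scipy, outside Slicer; the matrix values below are made-up placeholders, not your data):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Build an example 4x4 transform (placeholder values):
# a 30-degree rotation about the z axis plus a translation.
T = np.eye(4)
T[:3, :3] = Rotation.from_euler("z", 30, degrees=True).as_matrix()
T[:3, 3] = [12.3, -4.1, 30.7]

# Decompose the rotation part (assumes the transform has no scaling/shear).
rot = Rotation.from_matrix(T[:3, :3])
print("unit quaternion (x, y, z, w):", rot.as_quat())
print("total rotation angle (deg):", np.degrees(rot.magnitude()))
print("translation (mm):", T[:3, 3])
```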
Hi @lassoan,
let me explain better. The goal of my project is to quantify the landmark registration’s result compared to the manual registration’s.
The landmark registration’s transform is:
while the manual registration’s is:
So I decided to compute the difference between the two transform matrices.
This is the result, in terms of Euler angles and translations, computed in MATLAB:
Translation: [155.3490, 18.5039, 54.7138] mm (absolute values)
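For reference, the comparison I mean is roughly the following (a Python/numpy sketch rather than my actual MATLAB code; the identity matrices are placeholders for the two transforms shown above):

```python
import numpy as np
from scipy.spatial.transform import Rotation

T_landmarks = np.eye(4)  # placeholder: the landmark registration matrix goes here
T_manual = np.eye(4)     # placeholder: the manual registration matrix goes here

# Relative transform between the two registration results.
T_diff = np.linalg.inv(T_manual) @ T_landmarks

euler_deg = Rotation.from_matrix(T_diff[:3, :3]).as_euler("xyz", degrees=True)
translation_mm = T_diff[:3, 3]
print("Euler angles (deg):", euler_deg)
print("Translation (mm):", translation_mm)
```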
However, these values seem inconsistent with the visual alignment.
Below is an example of the overlap of the two volumes: the foreground shows the US volume and the background shows the CT volume. The first row refers to the landmark registration, the second row to the manual registration.
You cannot characterize registration quality by analyzing the registration transforms or the difference between the transforms. The Euler angles are only meaningful if you have one large rotation angle (there can be a second angle close to zero, and a third angle very close to zero). The translation component does not mean much if there is rotation, because the actual position difference depends a lot on the distance from the center of rotation and on the rotation angles.
Instead, the most commonly used clinically relevant registration evaluation metric is the target registration error (TRE), computed from the average distance between clearly identifiable landmark points near the region of interest. For 3D cardiac echo to CT/MRI registration you can use the commissure points along the annulus. For a tricuspid valve this gives you 3 points, which is sufficient. For a bicuspid valve you would only get 2 commissure points, so you would need to identify at least one more point along the annulus, approximately halfway between the commissures.
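As a rough sketch of that computation (assuming you have already marked the same landmark points in both volumes and have them as N×3 arrays of coordinates in a common space; the names and coordinates below are only illustrative):

```python
import numpy as np

def target_registration_error(points_moving, points_fixed, transform_4x4):
    """Mean distance between fixed landmarks and the transformed moving landmarks."""
    moving_h = np.column_stack([points_moving, np.ones(len(points_moving))])
    moved = (transform_4x4 @ moving_h.T).T[:, :3]
    return np.mean(np.linalg.norm(moved - points_fixed, axis=1))

# Made-up example coordinates (mm) for three corresponding annulus points:
us_landmarks = np.array([[10.0, 5.0, 2.0], [12.0, -3.0, 8.0], [4.0, 1.0, -6.0]])
ct_landmarks = np.array([[11.0, 4.5, 2.5], [12.5, -2.0, 7.0], [5.0, 0.5, -5.0]])
T = np.eye(4)  # replace with the registration transform being evaluated
print("TRE (mm):", target_registration_error(us_landmarks, ct_landmarks, T))
```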
What do you mean by “manual registration”? Iteratively translating and rotating one image to try to visually align it with the other? That is a very time-consuming, tedious, and inaccurate procedure that should not even be considered for 3D registration. Such a manual visual alignment procedure works well for 2D images, because in 2D it is a direct method (or requires only a few iterations) and you only need to look at one image. However, visual alignment usually requires many iterations in 3D, because translation has 3 axes instead of 2 and rotation has 3 axes instead of 1, you need to keep looking at least two views in different orientations (and may even need to scroll through them) for each iteration, and there is no guarantee that your result improves with each iteration.