Creating a new coordinate system

Hello all,

I am trying to create a local coordinate system within the global coordinate system in order to compute the displacement of a bone block between two different scans. The global coordinate system does not suffice because I would like the rotational displacement of the bone block to be expressed relative to the orientation of the reference bone block rather than in global space.

Could someone please provide some guidance on how to create a local coordinate system?

Thank you very much in advance,



I am not sure about creating a ‘local’ coordinate system, but here are two things that have worked for me in the recent past;

  1. Center all of your models to the origin of the scene itself. You can do this through the Python interactor by calculating the centroid of your model, then the distance between the centroid and the origin, and then translating the model to the origin. If you do this, your rotation will be about the centroid of the model, as you would hope. If you have a secondary model placed at a known distance from the origin, you can then relate the arc length of its motion to its angle of rotation. This may take a bit of programming, but it has helped me in a few workflows I am developing :sweat_smile:

  2. You can also rotate nodes in Slicer about ‘points’ or ‘lines’ using the Python interactor. Maybe you only need one of your objects to rotate about a long bone, where the length of the bone is defined as the axis of rotation…?
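As a rough sketch of item 1 on a bare point array (in Slicer you would get and set this array from a model node, e.g. with slicer.util.arrayFromModelPoints; the example points here are made up):

```python
import numpy as np

# Centroid-to-origin translation on an N x 3 point array.
# In Slicer, you could obtain this array with slicer.util.arrayFromModelPoints(modelNode)
# and, after editing it in place, call slicer.util.arrayFromModelPointsModified(modelNode).
points = np.array([[10., 12., 8.],
                   [14., 12., 8.],
                   [10., 16., 8.],
                   [10., 12., 14.]])
centroid = points.mean(axis=0)   # the 'center' of the point cloud
points -= centroid               # translate so the centroid lands on the origin
```

After this, rotations applied to the model are about its centroid rather than about some distant global origin.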

Perhaps you can expand a little on your intended goal and current problem. What are these ‘bone blocks’? Are you trying to recreate the rotation of a joint from bones segmented from different scans?

Let me know!

Hi Fluvio,

Thank you very much for your reply!

  1. I’ve used the same method as you for translating the centroid to the origin, but I didn’t know there was a way to use the arc-length to calculate the rotation! This is good to know. I am quite an inexperienced coder (and by that I mean 0 experience, haha), but this seems very useful!
  2. I have not tried this method, but it may not be ideal, as it is crucial in my project that the three axes are orthogonal to one another.

To explain my project further, I have attached images below.

The model seen in the 3D view is the bone block of interest; the blue model is the neutral condition and the red model is the condition (the models are overlaid on top of each other). While they appear to be registered, they are not, as the difference in rotation we are looking for is expected to be ≤ 1°. In the images attached, the centroids of both the neutral and the condition bone blocks have been moved to the origin.
Our objective is to find the amount of translation and rotation of the condition bone block relative to the neutral bone block.
In method #1 which you explained in your reply, did you move the centroids of both models to the origin, or just the reference model?
Further, it would be awesome if you could provide me with some guidance as to where I could learn the coding for using the arc length to calculate the rotation.

Thank you!


P.S. Please note that the red axes in both images were added to a screenshot rather than created in 3d slicer. The top image represents the global coordinate system, and the second image is what I was hoping for the local coordinate system to look like.



Glad I could help!
I am also happy you shared more details, as I think there are better strategies that can help you.

Our objective is to find the amount of translation and rotation of the condition bone block relative to the neutral bone block.

Two approaches;

  1. Model-to-Model Distance. Since the “bone blocks” are similar in shape, you could use this module to calculate the point-to-point displacement between the two. I am pretty sure you can then use this displacement map to approximate an overall transformation matrix describing the translation and rotation of the part.

  2. Fiducial Registration Wizard. You can use fiducials to determine the rigid transformation that describes the displacement of the part. This module returns the transformation matrix itself! You have to install the SlicerIGT extension to use this tool.
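For intuition, the rigid transform such a registration returns can be estimated from corresponding point pairs with the SVD-based Kabsch method. Here is a small numpy sketch of the idea (not SlicerIGT's actual code):

```python
import numpy as np

def rigid_transform(src, dst):
    # Best-fit rotation R and translation t such that dst ~= src @ R.T + t
    # (Kabsch/SVD method on two sets of corresponding N x 3 points).
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Feeding it the fiducial positions from the neutral and the condition scans would give the rotation matrix and translation vector between them.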

In method #1 which you have explained in your reply, did you move the centroids of both models to the origin? or just the reference model?
Further, it would be awesome if you could provide me with some guidance as to where I could learn the coding for using the arc length to calculate the rotation.

If you use the two new solutions I propose here, you won't need to program at all :rofl:. Assuming the result is what you want, of course.

Now, I guess the other question is… are the frames of reference aligned? In other words, are the two imaging datasets (DICOM) aligned? If they are not, you may need to align them to remove that additional transformation from your final displacement. Does that make sense?

Hi again,

I have not tried using Model-to-Model Distance yet, but I have tried the IGT model registration module. The only problem with this is that when I use the matrix to extract the rotational values, they are in terms of the global coordinate system. I have attached an image with planes running through the model to demonstrate how I would like the coordinate system to be oriented.

As for the Fiducial Registration Wizard, I am unsure whether the bone blocks have enough reliable landmarks for it to be accurate. Further, I believe the same problem regarding the coordinate system still exists for this method.

I have aligned the two datasets already :)


This bone looks like a thick elliptical disk. If I were you, I would run PCA on the points of the bone mesh. With that you would get two vectors that you could use to create a frame (coordinate system), I think. Hope it helps
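A minimal numpy sketch of that idea (outside Slicer; it assumes you already have the mesh vertices as an N x 3 array):

```python
import numpy as np

def principal_frame(points):
    # Build an orthonormal, right-handed frame from the principal directions
    # of a point cloud. The rows of Vt are the principal directions,
    # ordered by decreasing variance.
    center = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - center)
    x, y = Vt[0], Vt[1]
    z = np.cross(x, y)  # third axis from the first two, guaranteed orthogonal
    return center, np.vstack([x, y, z / np.linalg.norm(z)])
```

Because the axes come from an SVD, they are orthogonal by construction, which addresses the earlier concern about keeping the three axes orthogonal.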


What about;

  1. Moving both models to the center of the scene (origin)
  2. Creating three orthogonal planes that delineate the local coordinate system (or rather its basis) on both models (e.g. xy1, xz1, yz1, xy2, xz2, yz2)… similar to what you did in your screenshot…
  3. Calculating the angles between corresponding planes using the Python interactor and this example… this should be equivalent to a dot product between normal vectors, so the angle will not be tied to the overall coordinate system as it is in the “angle planes” module you are using.

The challenge here is making sure those planes are made with the same features/landmarks on the model. My guess is that you already thought of this though.
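For step 3, the dot product computation could look like this (a sketch; in Slicer you would pull each plane's normal from its markups plane node, e.g. with GetNormalWorld):

```python
import numpy as np

def angle_between_planes(n1, n2):
    # Angle in degrees between two planes, given their normal vectors.
    # abs() makes the result independent of which way each normal points.
    n1 = np.asarray(n1, dtype=float) / np.linalg.norm(n1)
    n2 = np.asarray(n2, dtype=float) / np.linalg.norm(n2)
    return np.degrees(np.arccos(np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)))
```

Since only the two normals enter the computation, the result does not depend on the global coordinate system.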

Hello Mauro,

Thank you for your response. I am unfamiliar with the term PCA, but is that “Principal Components Analysis”?
I will look into this, thanks again.



Hi again Fluvio,

Yes, I have thought about this but I haven’t found a way to make sure that the planes are orthogonal to each other… Do you know how to do this?
Perhaps if I can find a way to ensure this, calculating the angles between the corresponding planes is my best bet!



Yes, I’m referring to Principal Component Analysis.


If you have a linear transformation matrix which takes the position of one bone block to the other (as can be produced as an output from any Slicer registration technique), you can easily extract the rotation component of that transform using a module I developed for Slicer called CharacterizeTransformMatrix (available at GitHub - mikebind/SlicerCharacterizeTransformMatrix). I’m not sure what you want to use the new coordinate system for, but if you just want to know how rotated one block is relative to the other, this module will answer that.


Give PCA a try. I am not familiar with the process myself, but I think it will make your analysis less prone to user error. Also, PCA seems like the more scalable solution.

If you still want to do something with planes, let me know!


Your module looks good and I think it will be very useful for lots of Slicer users.

Although I didn’t catch how you calculate the scale matrices from the eigenvectors and eigenvalues here:

I suppose you have some formula to create a symmetric matrix but it’s just a guess.


Yes, each scale matrix is symmetric, and is constructed by computing the outer product of the eigenvector with itself (which is symmetric by construction), multiplying that by (eigenvalue - 1), and adding the result to the identity matrix. Each scale matrix has the effect of stretching vectors by a factor of the eigenvalue in the direction of the eigenvector (for the eigenvalue and eigenvector used to construct it).
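As a numerical sanity check of that construction (a sketch, not the module's actual code):

```python
import numpy as np

def scale_matrix(eigenvector, eigenvalue):
    # S = I + (eigenvalue - 1) * v v^T
    # Symmetric by construction; stretches vectors by the eigenvalue along
    # the eigenvector and leaves perpendicular directions unchanged.
    v = np.asarray(eigenvector, dtype=float)
    v = v / np.linalg.norm(v)
    return np.eye(3) + (eigenvalue - 1.0) * np.outer(v, v)
```

For example, scale_matrix([1, 0, 0], 2) gives diag(2, 1, 1), i.e. a doubling along x only.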


Thanks, that was a good explanation. I didn’t know that formula.


You can define a coordinate system using a markups plane node:

The advantage of using this instead of a transform is that you can conveniently place the node using 3 points and then further adjust its position and orientation using the interaction handles.

Once you have defined the plane, you can get the transform between this coordinate system and the RAS (world) coordinate system like this:

planeToWorld = vtk.vtkMatrix4x4()
planeNode.GetObjectToWorldMatrix(planeToWorld)
worldToPlane = vtk.vtkMatrix4x4()
vtk.vtkMatrix4x4.Invert(planeToWorld, worldToPlane)

If you concatenate this transform with the result of the model registration then you’ll get the rotation and translation between the bones in the plane coordinate system.
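In matrix terms, that concatenation re-expresses the registration's rotation in the plane's frame, which is a similarity transform. A small numpy sketch (outside Slicer; frame_axes is assumed to hold the plane's axes as columns):

```python
import numpy as np

def rotation_in_frame(R_world, frame_axes):
    # Re-express a world-frame rotation in a local frame.
    # frame_axes: 3x3 matrix whose columns are the local x, y, z axes
    # expressed in world coordinates (an orthonormal basis).
    F = np.asarray(frame_axes, dtype=float)
    return F.T @ np.asarray(R_world, dtype=float) @ F
```

A rotation that looks like a mix of axes in world coordinates becomes a clean rotation about one local axis if that local axis is the true rotation axis.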


Hello Andras,

Thank you very much for your response!
I just had a question while I was attempting this method: is there a way to place a fiducial at a specific coordinate using code?
For example, as seen in the image below, the centroid of the red segment is at (-110.676, -105.632, 31.2008), and I would like to set my ‘P’ fiducial at this point.

Thank you for your help in advance.


You can use markupsNode.SetNthControlPointPositionWorld(i, r, a, s) method.

I think it is a common need to define a frame that follows some principal directions of the model, so I’ve created a script for this. It also allows interaction to transform the model and give it a different position/orientation.

Here is the code:

#Create a frame for the model in the principal components directions,
#set up a plane with interaction handles to move it.
#The script can be reused to allow interactions with n models

import numpy as np

#Change this number for more accuracy (more points take more time to compute)
numberOfSampledPointsOfModel = 2000
model = getNode('mymodel')

maskPointsFilter = vtk.vtkMaskPoints()
maskPointsFilter.SetInputData(model.GetPolyData())

if model.GetPolyData().GetNumberOfPoints() > numberOfSampledPointsOfModel:
  ratio = round(model.GetPolyData().GetNumberOfPoints()/numberOfSampledPointsOfModel)
else:
  ratio = 1

#This works but the sampling could be biased spatially I think
#If you have vtk9 you should use the UNIFORM_SPATIAL_SURFACE option
maskPointsFilter.SetOnRatio(ratio)
maskPointsFilter.Update()

polydata = vtk.vtkPolyData()
polydata.ShallowCopy(maskPointsFilter.GetOutput())

#Calculate the SVD and mean
from vtk.util.numpy_support import vtk_to_numpy
modelPoints = vtk_to_numpy(polydata.GetPoints().GetData())

# Calculate the mean of the points, i.e. the 'center' of the cloud
modelPointsMean = modelPoints.mean(axis=0)

# Do an SVD on the mean-centered data.
uu0, dd0, eigenvectors0 = np.linalg.svd(modelPoints - modelPointsMean)

# Create a frame for the model (right-handed and orthonormal)
modelZ = np.zeros(3)
modelX = eigenvectors0[0]
modelY = eigenvectors0[1]
vtk.vtkMath.Cross(modelX, modelY, modelZ)
modelZ = modelZ/np.linalg.norm(modelZ)
modelOrigin = modelPointsMean

#Create the plane to move the model, initialized to the principal-axes frame
planeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsPlaneNode", slicer.mrmlScene.GetUniqueNameByString("modelToWorldPlane"))
planeNode.SetAxes(modelX, modelY, modelZ)
planeNode.SetOrigin(modelOrigin)

displayNode = planeNode.GetDisplayNode()
displayNode.SetHandlesInteractive(True)

#Create the transformNode that the model will observe
planeToWorldTransformNode = slicer.vtkMRMLLinearTransformNode()
slicer.mrmlScene.AddNode(planeToWorldTransformNode)

#Save initial transform of the plane
worldToInitialModelTransformNode = slicer.vtkMRMLLinearTransformNode()
slicer.mrmlScene.AddNode(worldToInitialModelTransformNode)

planeToWorldMatrix = vtk.vtkMatrix4x4()
planeNode.GetObjectToWorldMatrix(planeToWorldMatrix)
planeToWorldTransformNode.SetMatrixTransformToParent(planeToWorldMatrix)

worldToInitialModelMatrix = vtk.vtkMatrix4x4()
vtk.vtkMatrix4x4.Invert(planeToWorldMatrix, worldToInitialModelMatrix)
worldToInitialModelTransformNode.SetMatrixTransformToParent(worldToInitialModelMatrix)

#Chain the transforms (model -> worldToInitialModel -> planeToWorld -> world)
#so the model follows the plane when it is moved
worldToInitialModelTransformNode.SetAndObserveTransformNodeID(planeToWorldTransformNode.GetID())
model.SetAndObserveTransformNodeID(worldToInitialModelTransformNode.GetID())

#Store the transform node IDs so the observer callback can find them
planeNode.SetAttribute('planeNodeToWorldTransformNodeID', planeToWorldTransformNode.GetID())
planeNode.SetAttribute('worldToInitialModelTransformNodeID', worldToInitialModelTransformNode.GetID())

#update model's position/orientation interactively
def onPlaneModified(sourceNode,event):
  planeToWorldMatrix = vtk.vtkMatrix4x4()
  sourceNode.GetObjectToWorldMatrix(planeToWorldMatrix)
  planeNodeToWorldTransformNode = slicer.mrmlScene.GetNodeByID(sourceNode.GetAttribute('planeNodeToWorldTransformNodeID'))
  planeNodeToWorldTransformNode.SetMatrixTransformToParent(planeToWorldMatrix)

planeNodesAndObservers = []
planeNodesAndObservers.append((planeNode, planeNode.AddObserver(slicer.vtkMRMLMarkupsNode.PointModifiedEvent, onPlaneModified)))

#If you want to stop interactions for plane i, do this:
i = 0 #change this number
planeNodesAndObservers[i][0].RemoveObserver(planeNodesAndObservers[i][1])

#If you want the model to world transform AFTER YOU MOVE IT for plane i, do this:
i = 0 #change this number
worldToInitialModelTransformNode = slicer.mrmlScene.GetNodeByID(planeNodesAndObservers[i][0].GetAttribute('worldToInitialModelTransformNodeID'))
modelToWorldMatrix = vtk.vtkMatrix4x4()
worldToInitialModelTransformNode.GetMatrixTransformToWorld(modelToWorldMatrix)

#If you just want to define a frame for the model (assigned to plane i), do this:
i = 0 #change this number
worldToInitialModelTransformNode = slicer.mrmlScene.GetNodeByID(planeNodesAndObservers[i][0].GetAttribute('worldToInitialModelTransformNodeID'))
initialModelToWorldMatrix = vtk.vtkMatrix4x4()
worldToInitialModelTransformNode.GetMatrixTransformToParent(initialModelToWorldMatrix)
initialModelToWorldMatrix.Invert()

@lassoan maybe this can be added to the script repository

Thank you @mau_igna_06, this example works very nicely. It would probably be too long to include entirely in the script repository, but you can upload it to a gist and we can link it from the script repository (as we do for other longer examples). Gists are version controlled, so you can update the gist and the same link will automatically point to the latest version (and users have access to previous versions of the gist, too).

There are also a few things to fix/improve, such as:

  • Node IDs are not persistent (when you save the scene and load it again, a different ID may be assigned to the node), therefore they must never be stored in node attributes. Instead, node references can be used, which maintain the link between nodes even if the node IDs change. Since the parent transform is already stored using a node reference, you don’t even need to add another node reference.
  • Simplify the sampling. If ratio=1 then just use the model output and skip the filter. It would also be preferable to use the VTK9 filter, as VTK8 will not be around for long.
  • Always leave a space after the # character
  • Instead of hardcoding any specific revision, put the method call in a try/except block (put the more recent API in the main path and the old API in the except path), as it is more pythonic and may be faster (not that speed would matter here, just as a general note)
  • Instead of updating the transform in the main path and then implementing it again in onPlaneModified function, just call onPlaneModified once.
  • I would remove the multiple planes support and the i variable to keep things simple.
  • Remove the THIS PART SHOULD ONLY BE COPYPASTED ONCE TILL MARKED comments. It is understood that the entire code snippet is to be executed once. It would just confuse users that they are not allowed to copy-paste some parts.
  • Comment out the …RemoveObserver… line so that if the user copy-pastes the entire code snippet the axes are kept interactive

Two notes for using principal axes of surface mesh points as object coordinate system:

  1. For solid objects, the Segment Statistics module gives more relevant axis directions. The principal X, Y, Z axes computed by Segment Statistics are the object’s true principal axes of inertia, based on its volume. PCA of surface mesh points can be quite similar for many shapes, but they are not the same. Even if you sample the surface uniformly, a small feature with a large surface area can throw off the axis directions computed by surface-mesh-point PCA.

  2. For models that have flat sides (such as CAD models), the Segment Statistics module’s oriented bounding box axis directions are more relevant than principal axes. The oriented bounding box axes will not be thrown off by small attachments or engravings on the object, and they will generally be aligned with the flat sides.