Creating a new coordinate system

You can use the markupsNode.SetNthControlPointPositionWorld(i, r, a, s) method.
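
For example, a minimal sketch (the point list node name 'F' and the coordinates are assumptions):

pointListNode = getNode('F')
r, a, s = 10.0, -20.0, 30.0
# Move the first control point (index 0) to the given world (RAS) position
pointListNode.SetNthControlPointPositionWorld(0, r, a, s)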

I think it is a common need to define a frame that follows the principal directions of a model, so I've created a script for this. It also allows interactive manipulation to give the model a different position/orientation.

[Attachment: frameOfTibiaInPCDirections]

Here is the code:

#Create a frame for the model in the principal components directions, 
#set up a plane with interaction handles to move it.
#The script can be reused to allow interactions with n models

import numpy as np

#Change this number for more accuracy (more points take more time to compute)
numberOfSampledPointsOfModel = 2000
model = getNode('mymodel')

maskPointsFilter = vtk.vtkMaskPoints()
maskPointsFilter.SetInputData(model.GetPolyData())

if model.GetPolyData().GetNumberOfPoints() > numberOfSampledPointsOfModel:
  ratio = round(model.GetPolyData().GetNumberOfPoints()/numberOfSampledPointsOfModel)
else:
  ratio = 1

#This works but the sampling could be biased spatially I think
#If you have vtk9 you should use  UNIFORM_SPATIAL_SURFACE option
maskPointsFilter.SetOnRatio(ratio)
maskPointsFilter.RandomModeOn()
maskPointsFilter.Update()

polydata = vtk.vtkPolyData()
polydata.ShallowCopy(maskPointsFilter.GetOutput())

#Calculate the SVD and mean
from vtk.util.numpy_support import vtk_to_numpy
modelPoints = vtk_to_numpy(polydata.GetPoints().GetData())

# Calculate the mean of the points, i.e. the 'center' of the cloud
modelPointsMean = modelPoints.mean(axis=0)

# Do an SVD on the mean-centered data.
uu0, dd0, eigenvectors0 = np.linalg.svd(modelPoints - modelPointsMean)

# Create a frame for the model
modelZ = np.zeros(3)
modelX = eigenvectors0[0]
modelY = eigenvectors0[1]
vtk.vtkMath.Cross(modelX, modelY, modelZ)
modelZ = modelZ/np.linalg.norm(modelZ)
modelOrigin = modelPointsMean

#Create the plane to move the model
planeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsPlaneNode", slicer.mrmlScene.GetUniqueNameByString("modelToWorldPlane"))
slicer.modules.markups.logic().AddNewDisplayNodeForMarkupsNode(planeNode)

displayNode = planeNode.GetDisplayNode()
displayNode.SetGlyphScale(2.5)
displayNode.SetOpacity(0)
displayNode.HandlesInteractiveOn()

planeNode.SetAxes(modelX,modelY,modelZ)
planeNode.SetOrigin(modelOrigin)

#Create the transformNode that the model will observe
planeToWorldTransformNode = slicer.vtkMRMLLinearTransformNode()
planeToWorldTransformNode.SetName(slicer.mrmlScene.GetUniqueNameByString("planeToWorld"))
slicer.mrmlScene.AddNode(planeToWorldTransformNode)

#Save initial transform of the plane
worldToInitialModelTransformNode = slicer.vtkMRMLLinearTransformNode()
worldToInitialModelTransformNode.SetName(slicer.mrmlScene.GetUniqueNameByString("worldToInitialModel"))
slicer.mrmlScene.AddNode(worldToInitialModelTransformNode)

SLICER_CHANGE_OF_API_REVISION = '29927'
planeToWorldMatrix = vtk.vtkMatrix4x4()
if int(slicer.app.revision) > int(SLICER_CHANGE_OF_API_REVISION):
  planeNode.GetObjectToWorldMatrix(planeToWorldMatrix)
else:
  planeNode.GetPlaneToWorldMatrix(planeToWorldMatrix)

planeToWorldTransformNode.SetMatrixTransformToParent(planeToWorldMatrix)
planeNode.SetAttribute('planeNodeToWorldTransformNodeID',planeToWorldTransformNode.GetID())

worldToInitialModelMatrix = vtk.vtkMatrix4x4()
worldToInitialModelMatrix.DeepCopy(planeToWorldMatrix)
worldToInitialModelMatrix.Invert()

worldToInitialModelTransformNode.SetMatrixTransformToParent(worldToInitialModelMatrix)
planeNode.SetAttribute('worldToInitialModelTransformNodeID',worldToInitialModelTransformNode.GetID())

model.SetAndObserveTransformNodeID(worldToInitialModelTransformNode.GetID())
worldToInitialModelTransformNode.SetAndObserveTransformNodeID(planeToWorldTransformNode.GetID())

###THIS PART SHOULD ONLY BE COPYPASTED ONCE TILL MARKED
#update model's position/orientation interactively
def onPlaneModified(sourceNode,event):
  SLICER_CHANGE_OF_API_REVISION = '29927'
  planeToWorldMatrix = vtk.vtkMatrix4x4()
  if int(slicer.app.revision) > int(SLICER_CHANGE_OF_API_REVISION):
    sourceNode.GetObjectToWorldMatrix(planeToWorldMatrix)
  else:
    sourceNode.GetPlaneToWorldMatrix(planeToWorldMatrix)
  
  planeNodeToWorldTransformNode = slicer.mrmlScene.GetNodeByID(sourceNode.GetAttribute('planeNodeToWorldTransformNodeID'))
  planeNodeToWorldTransformNode.SetMatrixTransformToParent(planeToWorldMatrix)

planeNodesAndObservers = []
###TILL HERE SHOULD ONLY BE COPYPASTED ONCE

planeNodesAndObservers.append(
  [planeNode,planeNode.AddObserver(slicer.vtkMRMLMarkupsNode.PointModifiedEvent,onPlaneModified)]
)




#If you want to stop interactions for plane i, do this:
i = 0#change this number
planeNodesAndObservers[i][0].RemoveObserver(planeNodesAndObservers[i][1])



#If you want the model to world transform AFTER YOU MOVE IT for plane i, do this:
i = 0#change this number
worldToInitialModelTransformNode = slicer.mrmlScene.GetNodeByID(planeNodesAndObservers[i][0].GetAttribute('worldToInitialModelTransformNodeID'))
modelToWorldMatrix = vtk.vtkMatrix4x4()
worldToInitialModelTransformNode.GetMatrixTransformToWorld(modelToWorldMatrix)
print(modelToWorldMatrix)



#If you just want to define a frame for the model (assigned to plane i), do this:
i = 0#change this number
worldToInitialModelTransformNode = slicer.mrmlScene.GetNodeByID(planeNodesAndObservers[i][0].GetAttribute('worldToInitialModelTransformNodeID'))
initialModelToWorldMatrix = vtk.vtkMatrix4x4()
worldToInitialModelTransformNode.GetMatrixTransformToParent(initialModelToWorldMatrix)
initialModelToWorldMatrix.Invert()
print(initialModelToWorldMatrix)

@lassoan maybe this can be added to the script repository

Thank you @mau_igna_06, this example works very nicely. It would probably be too long to include entirely in the script repository, but you can upload it to a gist and we can link it from the script repository (as we do for other longer examples). Gists are version controlled, so you can update the gist and the same link will automatically point to the latest version (and users have access to previous versions of the gist, too).

There are also a few things to fix/improve, such as:

  • Node IDs are not persistent (when you save the scene and load it again, a different ID may be assigned to the node), therefore they must never be stored in node attributes. Instead, node references can be used, which maintain the link between nodes even if the node IDs change. Since the parent transform is already stored using node references, you don't even need to add another node reference (see the sketch after this list for this and the try/except suggestion).
  • Simplify the sampling. If ratio=1 then just use the model output and skip the filter. It would also be more relevant to use the VTK9 filter, as VTK8 will not be around for long.
  • Always leave a space after the # character
  • Instead of hardcoding any specific revision, put the method call in a try/except block (put the more recent API in the main path and the old API in the except path), as it is more pythonic and may be faster (not that speed would matter here, just as a general note)
  • Instead of updating the transform in the main path and then implementing it again in onPlaneModified function, just call onPlaneModified once.
  • I would remove the multiple planes support and the i variable to keep things simple.
  • Remove the THIS PART SHOULD ONLY BE COPYPASTED ONCE TILL MARKED comments. It is understood that the entire code snippet is to be executed once. It would just confuse users that they are not allowed to copy-paste some parts.
  • Comment out the …RemoveObserver… line so that if the user copy-pastes the entire code snippet the axes are kept interactive
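
A minimal sketch of the node-reference and try/except suggestions above (the reference role name "planeToWorldTransform" is an illustrative assumption, not an existing convention):

# Store the related transform as a node reference instead of a node ID attribute;
# node references are preserved across scene save/load even if node IDs change
planeNode.SetNodeReferenceID("planeToWorldTransform", planeToWorldTransformNode.GetID())
referencedTransformNode = planeNode.GetNodeReference("planeToWorldTransform")

# Prefer try/except over a hardcoded revision check when the API has changed
planeToWorldMatrix = vtk.vtkMatrix4x4()
try:
  planeNode.GetObjectToWorldMatrix(planeToWorldMatrix)  # current API
except AttributeError:
  planeNode.GetPlaneToWorldMatrix(planeToWorldMatrix)  # older API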

Two notes for using principal axes of surface mesh points as object coordinate system:

  1. For solid objects the Segment Statistics module gives more relevant axis directions. The Principal X, Y, Z axes computed by Segment Statistics provide the object's true principal axes of inertia, based on its volume. PCA of surface mesh points can be quite similar for many shapes, but they are not the same. Even if you sample the surface uniformly, a small feature with a large surface area can throw off the axis directions computed by PCA of the surface mesh points.

  2. For models that have flat sides (such as CAD models) the Segment Statistics module's oriented bounding box axis directions are more relevant than the principal axes. The oriented bounding box axes are not thrown off by small attachments or engravings on the object, and they are generally aligned with the flat sides (see the sketch below).
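
For reference, a minimal sketch of querying the oriented bounding box with Segment Statistics from Python. It assumes a segmentation node is available in the variable segmentationNode, and the parameter/measurement key names follow the labelmap plugin naming used in the script repository, so verify them against your Slicer version:

import numpy as np
import SegmentStatistics

segStatLogic = SegmentStatistics.SegmentStatisticsLogic()
segStatLogic.getParameterNode().SetParameter("Segmentation", segmentationNode.GetID())
# Enable the oriented bounding box measurements of the labelmap plugin
for key in ["obb_origin_ras", "obb_diameter_mm", "obb_direction_ras_x", "obb_direction_ras_y", "obb_direction_ras_z"]:
  segStatLogic.getParameterNode().SetParameter("LabelmapSegmentStatisticsPlugin." + key + ".enabled", str(True))
segStatLogic.computeStatistics()
stats = segStatLogic.getStatistics()

segmentId = segmentationNode.GetSegmentation().GetNthSegmentID(0)
obbOrigin = np.array(stats[segmentId, "LabelmapSegmentStatisticsPlugin.obb_origin_ras"])
obbAxisX = np.array(stats[segmentId, "LabelmapSegmentStatisticsPlugin.obb_direction_ras_x"])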

All changes done except this one:

It would be also more relevant to use the VTK9 filter, as VTK8 will not be around for long.

I’ll update the gist when Slicer changes VTK version.

Thanks for the notes.

Here is the gist: Create a frame for the model in the principal components directions, set up a plane with interaction handles to move it. · GitHub

Do you know if the option UNIFORM_SPATIAL_VOLUME for the maskPointsFilter will make it return points inside the volume of the mesh? We could try that for this script later, maybe it would give more robust results…

I tried to use vtkPoissonDiskSampler to update the script that creates a frame for a model using the principal directions of the surface, but I couldn't find the filter in the latest preview release of Slicer.
maskFilter = vtk.vtkPoissonDiskSampler() gives this exception: AttributeError: module 'vtkmodules.all' has no attribute 'vtkPoissonDiskSampler'

The vtkPoissonDiskSampler class was added earlier this year and is not included in Slicer's VTK yet. This issue tracks the next VTK update.

However, uniform sampling will probably not make much difference, unless your surface has huge variations in surface area of cells and you don’t use many samples.
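
If you do want to try uniform sampling anyway, here is a minimal sketch with vtkMaskPoints (the numeric random mode constant is taken from the VTK 9 documentation and is an assumption to verify against your VTK build):

maskPointsFilter = vtk.vtkMaskPoints()
maskPointsFilter.SetInputData(model.GetPolyData())
maskPointsFilter.SetMaximumNumberOfPoints(numberOfSampledPointsOfModel)
maskPointsFilter.SetRandomModeType(4)  # 4 = UNIFORM_SPATIAL_SURFACE in VTK 9
maskPointsFilter.RandomModeOn()
maskPointsFilter.Update()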

For a significant change you could use volumetric sampling (to get true mechanical moments) or an oriented bounding box (so that small indentations or extrusions do not affect the axis directions in objects with flat sides).

Hi,
I have been trying to "define a coordinate system using a markups plane node" as suggested by @lassoan. Unfortunately I have no coding knowledge and have not used the Python Interactor before.
I was wondering if there is any way to generate the plane linked to the 3 defining points without using the Python Interactor?
Is there a module that would do this?
I mainly use 3D Slicer to plan screw positioning in vertebrae and the first step of the workflow is to manually align the bone model to the RAS coordinate system using a transform. It would be much easier to place 3 points defining the sagittal plane assuming I can then define my screw positions in that coordinate system.
Many thanks,
Guillaume

Try this:

PlaneNode = getNode('P')
PlaneNode.SetPlaneType(PlaneNode.PlaneTypePlaneFit)

And restrict yourself to using 3 points while creating the plane.
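
If there is no plane node named 'P' in the scene yet, it can also be created from the Python Interactor; a minimal sketch (the three coordinates are placeholders to replace with your own landmarks):

planeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsPlaneNode", "P")
planeNode.SetPlaneType(planeNode.PlaneTypePlaneFit)
# Add the three points that define the plane
planeNode.AddControlPointWorld(vtk.vtkVector3d(0.0, 0.0, 0.0))
planeNode.AddControlPointWorld(vtk.vtkVector3d(10.0, 0.0, 0.0))
planeNode.AddControlPointWorld(vtk.vtkVector3d(0.0, 10.0, 0.0))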


Thanks @mau_igna_06
I am afraid I am not sure I know how to.
Is there a basic tutorial on the steps to run a script like this?
I don’t want to waste your time as I have no experience at all in any scripting/coding so maybe I need to start somewhere else?

I tried to copy and paste these 2 lines in the python interactor using an empty Slicer scene, press enter and obtained a few error lines:

PlaneNode = getNode('P')
Traceback (most recent call last):
  File "", line 1, in
  File "/Applications/Slicer.app/Contents/bin/Python/slicer/util.py", line 1312, in getNode
    raise MRMLNodeNotFoundException("could not find nodes in the scene by name or id '%s'" % (pattern if (isinstance(pattern, str)) else ""))
slicer.util.MRMLNodeNotFoundException: could not find nodes in the scene by name or id 'P'
PlaneNode.SetPlaneType(PlaneNode.PlaneTypePlaneFit)
Traceback (most recent call last):
  File "", line 1, in
NameError: name 'PlaneNode' is not defined

I suspect I need a plane object in my scene maybe, which should be named P? Or is the P node the Fiducial list which is F by default in my 3D Slicer version? Do I need 3 points in the scene before I run the code or after? Will the plane appear as I place the 3 points? Is there any other prerequisite in the scene before the script is run? Like a 3D object or a CT study?
Very sorry if these questions are extremely basic. I completely understand if this scripting module is meant for people with basic programming skills and maybe I should stick to the modules already created.
Guillaume

I suspect I need a plane object in my scene maybe, which should be named P?

You have to do that, yes.

Do I need 3 points in the scene before I run the code or after?

You should have drawn a plane before running the code I think

Is there any other prerequisite in the scene before the script is run? Like a 3D object or a CT study?

No

planeToWorld = vtk.vtkMatrix4x4()
getNode('P').GetObjectToWorldMatrix(planeToWorld)

This part does not appear to work for me. The plane node is named "P", but I can't get the matrix.
Has something changed, or am I doing something wrong?

thank you

I’ve just tested this in the current stable release (5.0.3) and it works well. You can print matrix values:

>>> planeToWorld = vtk.vtkMatrix4x4()
>>> getNode('P').GetObjectToWorldMatrix(planeToWorld)
>>> print(planeToWorld)
vtkMatrix4x4 (000001D1F157F7B0)
  Debug: Off
  Modified Time: 4320154
  Reference Count: 1
  Registered Events: (none)
  Elements:
    1 0 0 87.172 
    0 1 0 -74.5663 
    0 0 1 -75.25 
    0 0 0 1 

Thank you so much, I still make a lot of small mistakes, so I truly appreciate you helping me. I don't know how I did it, but I messed up my print command.

Last question: what would be the best way to measure the model registration from point 1? Could I transform the model to the world coordinate system using the inverse of the matrix I now have, to align my model with the world coordinate system?

again, thank you.

I figured I might as well try: I took the inverse matrix and it worked. No more help needed for now, thank you!
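
For completeness, a minimal sketch of that workflow in code (the node names 'P' and 'MyModel' are assumptions):

planeToWorld = vtk.vtkMatrix4x4()
getNode('P').GetObjectToWorldMatrix(planeToWorld)

# Invert to get the world-to-plane matrix and apply it to the model
worldToPlane = vtk.vtkMatrix4x4()
vtk.vtkMatrix4x4.Invert(planeToWorld, worldToPlane)

alignTransformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "AlignModelToWorld")
alignTransformNode.SetMatrixTransformToParent(worldToPlane)
getNode('MyModel').SetAndObserveTransformNodeID(alignTransformNode.GetID())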


Apologies for not following up on this thread after the guidance provided. I only have limited opportunities to work on this project and it has been several months since I last tried.
The recent discussions motivated me to try again and I have made progress. The reason I could not figure out how to use markups plane nodes is that I was using an older version of Slicer (4.11), and I believe that functionality was not fully available then.
I am now able to generate the plane with 3 points and could obtain the transformation matrix as suggested.
My remaining problem is that the transformation seems to cause a 90 degree rotation around the A axis when I try to align my model to the world coordinate system. I am not sure I understand why; maybe it has to do with the way the plane coordinates are defined?
I am attaching a screenshot so you can visualise what I am talking about (green model before transformation, red model after transformation).

Does anyone know what I am doing wrong?

hey!
You can see if this helps:

Your point 1 to point 2 is your positive X-axis, which should correspond to the LR axis in Slicer.

Thanks @jegberink
This explains my problem well. I am not sure how to resolve it, though.
The way I want to use this coordinate system is by defining 3 points in the sagittal plane, so unfortunately I don't have the option to place points 1 and 2 in the left-to-right direction. I want to use this to plan dogs' neurosurgeries, and in the spine the easiest plane to define is the sagittal plane (the plane of symmetry). I could, however, place points 1 and 2 in either the inferior-to-superior or posterior-to-anterior direction.
Would there be a way to modify the code so I get a transform matrix in the proper orientation?
Otherwise I guess I would have to add a 90 degree rotation manually so I get the proper anatomical alignment.

I had the same problem; a 90 degree transform is the easiest solution. Create a linear transform and rotate it 90 degrees towards the axis you prefer.
Do keep in mind that point 1 will remain at the origin (0, 0, 0).
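
In code, that could look roughly like this (a minimal sketch; the rotation axis and sign are assumptions that depend on how the three points were placed):

# Create the 90-degree correction as its own linear transform node
rotation = vtk.vtkTransform()
rotation.RotateX(90)  # adjust the axis (RotateX/RotateY/RotateZ) and sign as needed

rotationTransformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "Rotate90")
rotationTransformNode.SetMatrixTransformToParent(rotation.GetMatrix())
# Then chain it with the plane-based alignment transform in the Transforms module
# (or with SetAndObserveTransformNodeID) so the model picks up both transforms.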

Good luck!


Thanks,
The method is working now.
I would like to find a way to simplify it in the future but need to explore other steps in my workflow to decide the best way forward. I suspect setting up a module will be the end goal.

I have a question. I registered two objects in the world coordinate system and obtained matrix 1, so I know how one object moves and rotates in the world coordinate system to register with the other object. If I create a local coordinate system using a markups plane node and want to know that movement expressed in the local coordinate system, how should I combine these two matrices? Is the approach shown in the figure right?
[Figure: mmexport1668071826179]
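
Assuming "matrix 1" is the object-to-object registration expressed in world (RAS) coordinates (stored in a vtkMatrix4x4 called registrationMatrix below) and the markups plane 'P' defines the local frame, the usual change of basis is worldToPlane * registration * planeToWorld; a minimal sketch of that, not a confirmed reading of the figure:

planeToWorld = vtk.vtkMatrix4x4()
getNode('P').GetObjectToWorldMatrix(planeToWorld)

worldToPlane = vtk.vtkMatrix4x4()
vtk.vtkMatrix4x4.Invert(planeToWorld, worldToPlane)

# registrationInLocalFrame = worldToPlane * registrationMatrix * planeToWorld
tmp = vtk.vtkMatrix4x4()
vtk.vtkMatrix4x4.Multiply4x4(worldToPlane, registrationMatrix, tmp)
registrationInLocalFrame = vtk.vtkMatrix4x4()
vtk.vtkMatrix4x4.Multiply4x4(tmp, planeToWorld, registrationInLocalFrame)
print(registrationInLocalFrame)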