Superimposition of CBCT scans of orthodontic patients using 3D Slicer, Diagnocat, and Blender

Good afternoon, dear colleagues. My name is Daniil; I am a dentist and orthodontist. For 4 years I have been doing superimposition of scans taken before and after orthodontic treatment. I used to take STL files from Diagnocat and overlay them, but I encountered inaccuracy in the segmentation and, therefore, inaccuracy in the overlay. I switched to Slicer and studied this forum for a long time; I want to thank @finetjul Julien Finet, @lassoan Andras Lasso, and @muratmaga Murat Maga. Thanks to your answers, things are starting to work out.


About working in Slicer: first I load the two CT scans, from before the start of treatment and from after it; then I select a transform and orient the "after" CT scan by hand; then I run the General Registration (Elastix) tool, it completes the job, and I confirm all the actions.
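For reference, this Elastix step can also be scripted from the Python console. A minimal sketch, assuming the SlicerElastix extension is installed and adapted from the pattern in its README; the node names 'CT_before' and 'CT_after' are placeholders:

from Elastix import ElastixLogic, RegistrationPresets_ParameterFilenames

# Placeholders: the two volumes already loaded into the scene
fixedVolume = slicer.util.getNode('CT_before')
movingVolume = slicer.util.getNode('CT_after')
outputVolume = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLScalarVolumeNode', 'CT_after_registered')

# Run Elastix with the first built-in registration preset
logic = ElastixLogic()
parameterFilenames = logic.getRegistrationPresets()[0][RegistrationPresets_ParameterFilenames]
logic.registerVolumes(fixedVolume, movingVolume,
                      parameterFilenames=parameterFilenames,
                      outputVolumeNode=outputVolume)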

I convert the post-treatment image back to DICOM and upload it to Diagnocat.

Then, in Blender, I orient the CT scan along the Frankfort horizontal and the median cranial plane, trim the teeth, and take measurements. The overlay turns out to be acceptable, but the teeth move in the process, as does the lower jaw, and the General Registration (Elastix) tool, as I understand it, pays attention to all structures, so questions arise:



1. Is it possible to make comparisons over specific areas, for example the sphenoethmoidal synchondrosis (it is fully formed by the age of 8), in order to analyze work with growing patients and with adults after the teeth have been moved?
2. How can I compare the lower jaw?
3. How can I isolate it from the total volume and select the comparison zones?

I hope for your detailed help, thank you.


You should be able to do everything you want in Slicer, without any other program.

  1. Use the DICOM module to import your DICOMs directly into Slicer.
  2. Use this simple script to align any volume with three points that define the Frankfort horizontal (S_2019/Lab08 at master · SlicerMorph/S_2019 · GitHub).
  3. If you want to restrict your registration to a specific region, you can either crop the volumes with an ROI and use only the cropped volumes for the registration, or define a mask with the Segment Editor and specify the masks during the registration process; see the sketch below for the ROI option.
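As an illustration of the ROI option in item 3, here is a minimal sketch using the Crop Volume module logic from the Python console; it assumes a volume and an ROI have already been placed in the scene, and the node names 'CT_before' and 'R' are placeholders:

# Placeholders: a loaded volume and an ROI drawn around the region of interest
volumeNode = slicer.util.getNode('CT_before')
roiNode = slicer.util.getNode('R')

# Configure and run Crop Volume; the cropped output is what you would register
cropParams = slicer.vtkMRMLCropVolumeParametersNode()
slicer.mrmlScene.AddNode(cropParams)
cropParams.SetInputVolumeNodeID(volumeNode.GetID())
cropParams.SetROINodeID(roiNode.GetID())
slicer.modules.cropvolume.logic().Apply(cropParams)
croppedVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())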

I'm not sure what you would like to compare, or how; I couldn't quite follow what you want to do.

Good afternoon, thank you very much for your answer @muratmaga Murat Maga. I'm very glad that I can do everything in Slicer; that is important to me.

I would like to make an overlay over a selected area, to look at how jaw growth has changed in children or how the position of the teeth has changed in an orthodontic patient after treatment. I imagine it like this: (1) load the two CT scans; (2) segment the selection zone on the "before" CT scan (so that the comparison is based on bony landmarks); (3) register the "after" CT scan to the "before" CT scan over that segmented zone; (4) save. Could you tell me whether I have the algorithm right, and whether it is possible? If so, it would be useful for me if you could describe in detail which tools to use; I have little experience with Slicer. Thanks.

If you have 6-7 landmarks, you don't even need to segment anything or define an ROI. Place the same set of landmarks on both volumes. Then use the Fiducial Registration Wizard from the SlicerIGT extension to create a rigid transform. Finally, put the volume that you defined the source landmarks on under this transform. The volumes will be aligned using those landmarks as constraints.
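The same kind of rigid landmark alignment can also be sketched with plain VTK from the Python console (this is not the Fiducial Registration Wizard's own API, just an equivalent illustration; the node names are placeholders, and the two lists are assumed to contain the same landmarks in the same order):

import vtk

# Placeholders: matching fiducial lists placed on the 'before' and 'after' volumes
fixedFids = slicer.util.getNode('F_before')
movingFids = slicer.util.getNode('F_after')

fixedPoints = vtk.vtkPoints()
movingPoints = vtk.vtkPoints()
pos = [0.0, 0.0, 0.0]
for i in range(fixedFids.GetNumberOfControlPoints()):
    fixedFids.GetNthControlPointPosition(i, pos)
    fixedPoints.InsertNextPoint(pos)
    movingFids.GetNthControlPointPosition(i, pos)
    movingPoints.InsertNextPoint(pos)

# Best-fit rigid (rotation + translation) transform from moving to fixed landmarks
landmarkTransform = vtk.vtkLandmarkTransform()
landmarkTransform.SetSourceLandmarks(movingPoints)
landmarkTransform.SetTargetLandmarks(fixedPoints)
landmarkTransform.SetModeToRigidBody()
landmarkTransform.Update()

# Wrap it in a transform node and put the 'after' volume and points under it
transformNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLLinearTransformNode', 'LandmarkRigid')
transformNode.SetMatrixTransformToParent(landmarkTransform.GetMatrix())
movingVolume = slicer.util.getNode('CT_after')
movingVolume.SetAndObserveTransformNodeID(transformNode.GetID())
movingFids.SetAndObserveTransformNodeID(transformNode.GetID())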


Good afternoon, @muratmaga, everything worked out, thank you very much. Did I understand correctly that an ROI can be used like points?

I am glad things worked out. ROIs come into play if you are going to do image registration; then the contents of the ROIs are used as the constraint for the registration. Otherwise, if you are going to stick to landmarks, there is no need for an ROI.


Hello, thank you for your answer. I have a problem with orienting the DICOM according to the Frankfort plane: I did all the steps that you mentioned on GitHub and copied and pasted the script, but nothing happened.
This is what is shown in the Python console:

Sorry, there has been an API change since this snippet was written. Try this; it seems to work for me using the sample data provided, in Slicer 5.6.1:

import numpy
scene = slicer.mrmlScene
F = getNodesByClass('vtkMRMLMarkupsFiducialNode')
F = F[0]

# Get the fiducial IDs of the porions and the zygomatico-orbitale suture from the fiducial list by name
po1_id = -1; po2_id = -1; zyo_id = -1;

for i in range(0, F.GetNumberOfControlPoints()):
	if F.GetNthControlPointLabel(i) == 'poR' :
		po1_id = i
	if F.GetNthControlPointLabel(i) == 'poL' :
		po2_id = i
	if F.GetNthControlPointLabel(i) == 'zyoL' :
		zyo_id = i


# Get the coordinates of landmarks
po1 = [0, 0, 0]
po2 = [0, 0, 0]
zyo = [0, 0, 0]

F.GetNthControlPointPosition(po1_id,po1)
F.GetNthControlPointPosition(po2_id,po2)
F.GetNthControlPointPosition(zyo_id,zyo)

# The vector between two porions that we will align to LR axis by calculating the yaw angle
po =[po1[0] - po2[0], po1[1] -po2[1], po1[2]-po2[2]]
vTransform = vtk.vtkTransform()
vTransform.RotateZ(-numpy.arctan2(po[1], po[0])*180/numpy.pi)
transform.SetMatrixTransformToParent(vTransform.GetMatrix())

# Apply the transform to the fiducials and the volume
transform = slicer.vtkMRMLLinearTransformNode()
scene.AddNode(transform) 
transform.SetMatrixTransformToParent(vTransform.GetMatrix())
V = getNodesByClass('vtkMRMLScalarVolumeNode')
V = V[0]
V.SetAndObserveTransformNodeID(transform.GetID())
F.SetAndObserveTransformNodeID(transform.GetID())

# Get the updated (transformed) coordinates from the list
po1 = [0, 0, 0]
po2 = [0, 0, 0]
zyo = [0, 0, 0]

F.GetNthControlPointPosition(po1_id,po1)
F.GetNthControlPointPosition(po2_id,po2)
F.GetNthControlPointPosition(zyo_id,zyo)

# Apply the transform to the coordinates
po1 = vTransform.GetMatrix().MultiplyPoint([po1[0], po1[1], po1[2], 0])
po2 = vTransform.GetMatrix().MultiplyPoint([po2[0], po2[1], po2[2], 0])
zyo = vTransform.GetMatrix().MultiplyPoint([zyo[0], zyo[1], zyo[2], 0])
po =[po1[0]-po2[0], po1[1]-po2[1], po1[2]-po2[2]]

# Calculate the rotation for the roll 
vTransform2 = vtk.vtkTransform()
vTransform2.RotateY(numpy.arctan2(po[2], po[0])*180/numpy.pi)

# Apply the transform to previous transform hierarchy
transform2 = slicer.vtkMRMLLinearTransformNode()
scene.AddNode(transform2) 
transform2.SetMatrixTransformToParent(vTransform2.GetMatrix())
transform.SetAndObserveTransformNodeID(transform2.GetID())

# Get the coordinates again
po1 = [0, 0, 0]
po2 = [0, 0, 0]
zyo = [0, 0, 0]

F.GetNthControlPointPosition(po1_id,po1)
F.GetNthControlPointPosition(po2_id,po2)
F.GetNthControlPointPosition(zyo_id,zyo)

# Apply transforms to points to get updated coordinates
po1 = vTransform.GetMatrix().MultiplyPoint([po1[0], po1[1], po1[2], 0])
po2 = vTransform.GetMatrix().MultiplyPoint([po2[0], po2[1], po2[2], 0])
zyo = vTransform.GetMatrix().MultiplyPoint([zyo[0], zyo[1], zyo[2], 0])
po1 = vTransform2.GetMatrix().MultiplyPoint([po1[0], po1[1], po1[2], 0])
po2 = vTransform2.GetMatrix().MultiplyPoint([po2[0], po2[1], po2[2], 0])
zyo = vTransform2.GetMatrix().MultiplyPoint([zyo[0], zyo[1], zyo[2], 0])

# The vector for pitch angle
po_zyo = [zyo[0]-(po1[0]+po2[0])/2, zyo[1]-(po1[1]+po2[1])/2, zyo[2]-(po1[2]+po2[2])/2]

# Calculate the transform for aligning pitch
vTransform3 = vtk.vtkTransform()
vTransform3.RotateX(-numpy.arctan2(po_zyo[2], po_zyo[1])*180/numpy.pi)

# Apply the transform to both fiducial list and the volume
transform3 = slicer.vtkMRMLLinearTransformNode()
scene.AddNode(transform3) 
transform3.SetMatrixTransformToParent(vTransform3.GetMatrix())
transform2.SetAndObserveTransformNodeID(transform3.GetID())


slicer.vtkSlicerTransformLogic().hardenTransform(V)

Good afternoon, @muratmaga. I want to ask: the matching succeeded, but now the question for me is how to save the new orientation of the "after" CT. How do I export this new orientation from Slicer?

The last line of the code,
slicer.vtkSlicerTransformLogic().hardenTransform(V)

applies the transformation to the image. If you save the volume and then reload only the saved image, you should see that the orientation is preserved.
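A minimal sketch of that save step from the Python console (the file path is a placeholder; the NRRD format stores the orientation matrix):

# V is the volume node from the script above; harden the transform, then save
slicer.vtkSlicerTransformLogic().hardenTransform(V)
slicer.util.saveNode(V, '/path/to/CT_after_aligned.nrrd')  # placeholder path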

Thank you so much for your reply. I still get this note in the Python console:

[screenshot of the Python console error]

I must have copied and pasted the code incompletely somehow.

import numpy
scene = slicer.mrmlScene
F = getNodesByClass('vtkMRMLMarkupsFiducialNode')
F = F[0]

# Get the fiducial IDs of the porions and the zygomatico-orbitale suture from the fiducial list by name
po1_id = -1; po2_id = -1; zyo_id = -1;

for i in range(0, F.GetNumberOfControlPoints()):
	if F.GetNthControlPointLabel(i) == 'poR' :
		po1_id = i
	if F.GetNthControlPointLabel(i) == 'poL' :
		po2_id = i
	if F.GetNthControlPointLabel(i) == 'zyoL' :
		zyo_id = i


# Get the coordinates of landmarks
po1 = [0, 0, 0]
po2 = [0, 0, 0]
zyo = [0, 0, 0]

F.GetNthControlPointPosition(po1_id,po1)
F.GetNthControlPointPosition(po2_id,po2)
F.GetNthControlPointPosition(zyo_id,zyo)

transform = slicer.vtkMRMLLinearTransformNode()
scene.AddNode(transform) 

# The vector between two porions that we will align to LR axis by calculating the yaw angle
po =[po1[0] - po2[0], po1[1] -po2[1], po1[2]-po2[2]]
vTransform = vtk.vtkTransform()
vTransform.RotateZ(-numpy.arctan2(po[1], po[0])*180/numpy.pi)
transform.SetMatrixTransformToParent(vTransform.GetMatrix())

# Apply the transform to the fiducials and the volume
V = getNodesByClass('vtkMRMLScalarVolumeNode')
V = V[0]
V.SetAndObserveTransformNodeID(transform.GetID())
F.SetAndObserveTransformNodeID(transform.GetID())

# Get the updated (transformed) coordinates from the list
po1 = [0, 0, 0]
po2 = [0, 0, 0]
zyo = [0, 0, 0]

F.GetNthControlPointPosition(po1_id,po1)
F.GetNthControlPointPosition(po2_id,po2)
F.GetNthControlPointPosition(zyo_id,zyo)

# Apply the transform to the coordinates
po1 = vTransform.GetMatrix().MultiplyPoint([po1[0], po1[1], po1[2], 0])
po2 = vTransform.GetMatrix().MultiplyPoint([po2[0], po2[1], po2[2], 0])
zyo = vTransform.GetMatrix().MultiplyPoint([zyo[0], zyo[1], zyo[2], 0])
po =[po1[0]-po2[0], po1[1]-po2[1], po1[2]-po2[2]]

# Calculate the rotation for the roll 
vTransform2 = vtk.vtkTransform()
vTransform2.RotateY(numpy.arctan2(po[2], po[0])*180/numpy.pi)

# Apply the transform to previous transform hierarchy
transform2 = slicer.vtkMRMLLinearTransformNode()
scene.AddNode(transform2) 
transform2.SetMatrixTransformToParent(vTransform2.GetMatrix())
transform.SetAndObserveTransformNodeID(transform2.GetID())

# Get the coordinates again
po1 = [0, 0, 0]
po2 = [0, 0, 0]
zyo = [0, 0, 0]

F.GetNthControlPointPosition(po1_id,po1)
F.GetNthControlPointPosition(po2_id,po2)
F.GetNthControlPointPosition(zyo_id,zyo)

# Apply transforms to points to get updated coordinates
po1 = vTransform.GetMatrix().MultiplyPoint([po1[0], po1[1], po1[2], 0])
po2 = vTransform.GetMatrix().MultiplyPoint([po2[0], po2[1], po2[2], 0])
zyo = vTransform.GetMatrix().MultiplyPoint([zyo[0], zyo[1], zyo[2], 0])
po1 = vTransform2.GetMatrix().MultiplyPoint([po1[0], po1[1], po1[2], 0])
po2 = vTransform2.GetMatrix().MultiplyPoint([po2[0], po2[1], po2[2], 0])
zyo = vTransform2.GetMatrix().MultiplyPoint([zyo[0], zyo[1], zyo[2], 0])

# The vector for pitch angle
po_zyo = [zyo[0]-(po1[0]+po2[0])/2, zyo[1]-(po1[1]+po2[1])/2, zyo[2]-(po1[2]+po2[2])/2]

# Calculate the transform for aligning pitch
vTransform3 = vtk.vtkTransform()
vTransform3.RotateX(-numpy.arctan2(po_zyo[2], po_zyo[1])*180/numpy.pi)

# Apply the transform to both fiducial list and the volume
transform3 = slicer.vtkMRMLLinearTransformNode()
scene.AddNode(transform3) 
transform3.SetMatrixTransformToParent(vTransform3.GetMatrix())
transform2.SetAndObserveTransformNodeID(transform3.GetID())


slicer.vtkSlicerTransformLogic().hardenTransform(V)

should work :slight_smile:

Thank you so much, it worked perfectly.

After the fiducial registration I go to the Transforms module, where I move the "after" CT and the "after" points under the transform, then apply Harden transform. Then I convert the NRRD file to DICOM. When I open the before and after images in Slicer, the orientation is saved, but when I upload the file to Diagnocat for segmentation, the original orientation is there. I tried different conversion options, but I still couldn't figure out what was what.

@muratmaga answered: the fault is probably in the other software that you are using to import those modified DICOMs. My guess is that it doesn't read the orientation matrix Slicer saved. Try resampling the data and exporting one more time: instead of hardening, keep the volume under the transform and then use Crop Volume to resample the data.
https://slicer.readthedocs.io/en/latest/user_guide/modules/cropvolumesequence.html#overview
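A minimal sketch of that resampling route, assuming the "after" volume has been left under the alignment transform rather than hardened (node names are placeholders; for a single volume the plain Crop Volume logic is used here):

# Placeholder: the 'after' volume, still under the alignment transform
volumeNode = slicer.util.getNode('CT_after')

cropLogic = slicer.modules.cropvolume.logic()
cropParams = slicer.vtkMRMLCropVolumeParametersNode()
slicer.mrmlScene.AddNode(cropParams)

# Create an ROI for the output grid and fit it around the whole input volume
roiNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLMarkupsROINode', 'ResampleROI')
cropParams.SetInputVolumeNodeID(volumeNode.GetID())
cropParams.SetROINodeID(roiNode.GetID())
cropLogic.FitROIToInputVolume(cropParams)

# Interpolated (non voxel-based) cropping resamples the voxels onto a new grid,
# so the new orientation is baked into the image geometry itself
cropParams.SetVoxelBased(False)
cropLogic.Apply(cropParams)
resampledVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())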

I sent him a private message @muratmaga

Good afternoon. Slicer works great for the comparison, but when I load the CT with the new orientation into Diagnocat, the orientation is not kept. I tried saving the new CT orientation after hardening the transform and after Crop Volume Sequence; what other options are there for exporting the CT scan so that the orientation changes to the new one?



Maybe I am not working correctly with Crop Volume Sequence: I placed the desired "after" CT in the input and created a new volume in the output.
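Since Diagnocat takes DICOM, another route worth trying is Slicer's scripted DICOM export, which writes a fresh series instead of relying on the other program reading the saved orientation matrix. A minimal sketch following the pattern from the Slicer script repository (the node name and export directory are placeholders):

import DICOMScalarVolumePlugin

# Placeholder: the resampled or hardened 'after' volume to export
volumeNode = slicer.util.getNode('CT_after_resampled')

shNode = slicer.vtkMRMLSubjectHierarchyNode.GetSubjectHierarchyNode(slicer.mrmlScene)
exporter = DICOMScalarVolumePlugin.DICOMScalarVolumePluginClass()
exportables = exporter.examineForExport(shNode.GetItemByDataNode(volumeNode))
for exportable in exportables:
    exportable.directory = '/path/to/export/folder'  # placeholder
exporter.export(exportables)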

Good afternoon. Today I tried General Registration (Elastix); when I then loaded the CT scan into Diagnocat, the orientation was changed, but the comparison itself did not satisfy me. Maybe you know another registration method with a similar operating algorithm?

Good afternoon. Today I tried segmenting the "after" CT in BlueSkyPlan. Everything is the same as with Diagnocat: the CT orientation remains the original one.