Reconstructing a skull fragment with SSM/PDM

Hello!

I am seeking advice on how to practically reconstruct a skull in order to calculate cranial capacity.
The skull is a fragment (missing the skull base and some parietal), estimated to be a 4-year-old individual.
I have segmented complete skulls of 4-year-olds from NMDID CT scans and aligned them with landmark-based alignment in Scalismo (run from VS Code).

What would be the next steps? Do I need to resample to a reference mesh to get point-to-point correspondence? Do I need to create a mean shape and warp it to fit the shape of the skull fragment? Is the SSM approach reliable for skull reconstruction/estimation?
How much of this can be done in Slicer? I am struggling with the coding side of Scalismo, but this is what I have so far:

//> using scala "3.3.7"
//> using repository "https://scalismo.org/repo/"
//> using dep "ch.unibas.cs.gravis::scalismo:0.92.0"
//> using dep "ch.unibas.cs.gravis::scalismo-ui:0.92.0"

import scalismo.geometry._
import scalismo.common._
import scalismo.mesh._
import scalismo.transformations._
import scalismo.io.MeshIO
import scalismo.ui.api._
import scalismo.registration._
import java.io.File

object SkullRigidAlignmentApp {

  def main(args: Array[String]): Unit = {

    // Initialize Scalismo and the UI
    scalismo.initialize()
    val ui = ScalismoUI()

    // Load skull meshes
    val skullFiles = Seq(
      new File("G:\\Kikopey_project\\skull-ssm\\data\\case-119553_skull.ply"),
      new File("G:\\Kikopey_project\\skull-ssm\\data\\case-137263_skull.ply"),
      new File("G:\\Kikopey_project\\skull-ssm\\data\\case-119851_skull.ply")
    )

    val skullMeshes: Seq[TriangleMesh[_3D]] =
      skullFiles.map(file => MeshIO.readMesh(file).get)

    // Display original skulls
    val skullGroup = ui.createGroup("OriginalSkulls")
    skullMeshes.zipWithIndex.foreach { case (mesh, idx) =>
      ui.show(skullGroup, mesh, s"Skull${idx + 1}")
    }

    // Pick the first skull as reference
    val referenceMesh = skullMeshes.head

    // Define landmarks (replace with actual PointIds from your meshes).
    // NOTE: reusing the same PointIds across meshes only works if the meshes
    // are already in point-to-point correspondence; for independently
    // segmented skulls, load matching landmarks clicked on each mesh instead.
    val landmarkIds = Seq(PointId(1000), PointId(5000), PointId(10000))
    val referenceLandmarks = landmarkIds.map(pid =>
      Landmark(s"ref-${pid.id}", referenceMesh.pointSet.point(pid))
    )

    // Align other skulls rigidly to the reference
    val alignedSkullsGroup = ui.createGroup("AlignedSkulls")
    skullMeshes.zipWithIndex.tail.foreach { case (mesh, idx) =>

      val targetLandmarks = landmarkIds.map(pid =>
        Landmark(s"tgt-${pid.id}", mesh.pointSet.point(pid))
      )

      // Compute the rigid transform that maps the target (moving) landmarks
      // onto the reference landmarks; the moving landmarks go first
      val bestTransform: RigidTransformation[_3D] =
        LandmarkRegistration.rigid3DLandmarkRegistration(targetLandmarks, referenceLandmarks, center = Point(0, 0, 0))

      // Apply transformation
      val alignedMesh = mesh.transform(bestTransform)

      // Show aligned mesh in UI
      val view = ui.show(alignedSkullsGroup, alignedMesh, s"AlignedSkull${idx + 1}")
      view.color = java.awt.Color.RED
    }

    println("Rigid alignment done. Check the UI for original and aligned skulls.")
  }
}

The skull fragment mesh looks like this:


Thank you!

I am not sure Statismo would be of much use here. You can take a few age-matched individuals from NMDID and segment the endocranial space from them. Deformably register those to your specimen, and then use that mapping to transfer the segmented endocranial space to your fragmented skull. Doing this with a few different individuals will give you a range of estimates; you can then decide to average them or report them as a range.
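
A minimal ANTsPy sketch of that pipeline, assuming the intact skull's CT and its endocranial labelmap are already saved as volumes (all file names here are hypothetical placeholders):

import ants

# Fixed: the fragmented specimen; moving: one intact, age-matched skull
fixed = ants.image_read("fragment_skull.nrrd")
moving = ants.image_read("case-119553_skull.nrrd")
endocast = ants.image_read("case-119553_endocast.nrrd")

# Deformable registration; start with the quick SyN preset, use "SyN" for final runs
reg = ants.registration(fixed=fixed, moving=moving,
                        type_of_transform="antsRegistrationSyNQuick[s]")

# Transfer the endocranial labelmap through the same warp
# (nearest-neighbor interpolation keeps the mask binary)
warped_endocast = ants.apply_transforms(fixed=fixed, moving=endocast,
                                        transformlist=reg["fwdtransforms"],
                                        interpolator="nearestNeighbor")
ants.image_write(warped_endocast, "endocast_in_fragment_space.nrrd")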

Thank you for the advice as always!

How would you perform the endocranial deformations and register the deformable/mean shape to the fossil fragment? I am less versed in Python, if that is what you recommend. Can this be done within Slicer?

This individual likely had craniosynostosis (via fusion of the sagittal and parietal sutures) with a congenital syndrome. I think a range for CC would be best.

You can use Python but you don't need to.

You can use ANTs registration to do that in Slicer. It is available through the SlicerANTs and SlicerANTsPy extensions (the former is registration only; the latter has more functionality, but the registration parameters you can control are not as comprehensive).

In either case you want to use antsRegistrationSyNQuick[s] for quick results and then just use SyN for the final set of results (it will take some time).

Thank you, I will look into this further.

I am assuming ANTs handles volumes (i.e., segmented CT scans) rather than 3D models of my endocasts? Can it batch-process multiple volumes at one time?

I am assuming you mean the SyN function in BRAINS General Registration:

Thank you for answering my questions, this has been difficult to conceptualise 🙂

Yes, ANTs is a volumetric registration module. I assumed you segmented your skulls in Slicer. If you don’t have the original volumes, obviously this won’t work.

The SyN option in BRAINS Registration doesn't work (it requires BRAINS to be built against ANTs, which is not done in the Slicer build). You need to use the ANTsRegistration module.

If you are using the ANTs extension, then you would choose QuickSyN for quick results. It looks like the ANTsPy extension has an issue blocking it from running in Slicer.

Thank you - I am currently away from my computer, but can the ANTs extension run a batch of multiple volumes at one time?

No, but ANTsPy can (it is in the groupwise registration tab).

Also, the issue turned out to be simple. ANTs and ANTsPy apparently can't co-exist (at the moment). If you want to use ANTsPy, uninstall the ANTs extension and things should work. Again, you want to start with antsRegistrationSyNQuick[s] until you know things are working.


Thank you.
I am using ANTsPy. Pair-wise QuickRigid registration was much faster than antsRegistrationSyNQuick[s] which is yet to finish (>1 hour). Is this normal? I am assuming SyN is doing both Affine and Rigid registration, and full SyN parameters will take even longer.

Just to clarify - would you recommend deforming the endocranial cavity volumes, or the skull volumes themselves?

I will use Group-Wise to batch-register my volumes, but how exactly do I go from this stage to transferring it to the fragmented skull? Thank you!

If anyone has further advice on this regarding a workflow I would be very grateful! @muratmaga @lassoan @pieper

QuickRigid and Affine are not deformable registrations. QuickRigid simply rotates and translates the object; Affine does the same with scaling and shearing allowed. You need deformable registration to warp one object onto the other, and that is computationally expensive and slow.

So, if you want to run things fast, downsample your fixed volume by 2 using Crop Volume (ANTs will automatically resample the moving one to match). It will go faster. Or find a computer with lots of cores.
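
If you are scripting this with ANTsPy rather than using Crop Volume, a rough equivalent is to double the voxel spacing of the fixed image before registering (a sketch; the file name is a placeholder):

import ants

fixed = ants.image_read("fragment_skull.nrrd")

# Double the voxel spacing (i.e., halve the resolution) of the fixed image;
# ANTs resamples the moving image onto this coarser grid automatically.
new_spacing = tuple(s * 2 for s in fixed.spacing)
fixed_small = ants.resample_image(fixed, new_spacing, use_voxels=False, interp_type=0)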


Thank you.
This has been my procedure and parameters so far:
(1) Group-Wise Tab:

Template: case-109181_skull.nrrd.
Output: Transformed Volume.
Transform Type: antsRegistrationSyNQuick[s] (~30 mins / volume).

(2) Then in the Template tab:

Initial Template: case-109181_skull.nrrd
Transform Type: antsRegistrationSyNQuick[s], iterations: 1 (I know >3 iterations would be ideal).
Select Input Images: the Group-Wise registered .nii.gz files
Run Template Building and create new volume.

(3) Pair-wise tab:
Fixed Image: fragmented skull fossil (photo in above conversation)
Moving Image: Template .nrrd
Transform Type: SyN
Create Transformed Volume.

Would you add to or change this workflow?
How do I go from the labelmap volume to a segmentation so I can compute segment statistics (endocranial capacity/volume, etc.)?
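
For that last step, one way in Slicer's Python console, following the usual script-repository pattern (the node name is a placeholder):

import slicer
import SegmentStatistics

# Hypothetical node: the warped endocranial labelmap from the registration step
labelmapNode = slicer.util.getNode("WarpedEndocast")

# Import the labelmap into a segmentation node
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
slicer.modules.segmentations.logic().ImportLabelmapToSegmentationNode(labelmapNode, segmentationNode)

# Compute segment statistics; volume_cm3 is equivalent to ml
segStatLogic = SegmentStatistics.SegmentStatisticsLogic()
segStatLogic.getParameterNode().SetParameter("Segmentation", segmentationNode.GetID())
segStatLogic.computeStatistics()
stats = segStatLogic.getStatistics()
for segmentId in stats["SegmentIDs"]:
    print(segmentId, stats[segmentId, "LabelmapSegmentStatisticsPlugin.volume_cm3"], "cm3")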

I recently created a module for reconstructing a fractured orbit using the mirrored contralateral side, built on functions from SlicerMorph.

I wonder if the two sides are similar enough to warrant a good registration, but it might not hurt to give it a quick try.

You can one-click install SlicerOrbitSurgerySim from the Extension Manager on 5.10. It will load SlicerMorph for you as well.

It has sample data for you to play with. See video tutorial: youtube.com/watch?si=wHra2VXSp__asPQt&v=t951sCvk_lc&feature=youtu.be



If you are going to use my suggested approach (find 3-4 age/sex-matched normal skulls, segment the endocranial space from them, and then deformably register them to your fractured one), then you don't need to use the template workflow. Only the group-wise one.

Settings would be:

Fixed is the fractured skull

Moving would be your intact skulls (3-4 of them)

Transform would initially be antsRegistrationSyNQuick[s], and the output would be all of them (transformed volume, forward and inverse transforms).
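
In script form, that group-wise loop might look like this (a sketch; the case IDs reuse ones mentioned above, and the endocast file names are assumptions), ending with a range of capacity estimates:

import ants
import numpy as np

fixed = ants.image_read("fragment_skull.nrrd")  # the fractured specimen
case_ids = ["case-109181", "case-119553", "case-137263"]

estimates = []
for cid in case_ids:
    skull = ants.image_read(f"{cid}_skull.nrrd")
    endocast = ants.image_read(f"{cid}_endocast.nrrd")
    reg = ants.registration(fixed=fixed, moving=skull,
                            type_of_transform="antsRegistrationSyNQuick[s]")
    warped = ants.apply_transforms(fixed=fixed, moving=endocast,
                                   transformlist=reg["fwdtransforms"],
                                   interpolator="nearestNeighbor")
    # voxel volume in ml: product of spacings (mm^3) divided by 1000
    voxel_ml = float(np.prod(warped.spacing)) / 1000.0
    estimates.append((warped.numpy() > 0).sum() * voxel_ml)

print(f"Endocranial capacity range: {min(estimates):.0f}-{max(estimates):.0f} ml")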

SyN is about 10 times slower than SyNQuick (it uses cross-correlation instead of mutual information) and definitely results in better alignment, but your goal is to get results quickly to see how things work out. So I would stick with SyNQuick until you know things work and you want the best outcome.


Great, I will try this.
A potential issue, however, is that the fragmented skull is just an .obj surface mesh. Converting this to a segmentation (Export models and labelmaps → Export to new labelmap) creates anomalies and artefacts in the volume (bottom: surface mesh; top: labelmap volume in 3D rendering). This might be OK, but I will have to test it out.


(Unless, of course, I manually clean this up in the Segment Editor myself.)
Or might I have better results with the Model To LabelMap module?
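
Before manual cleanup, it may be worth checking that a reference geometry is set when rasterizing the model; artifacts like these often come from exporting without one. A hedged Slicer Python sketch (file and node names are placeholders):

import slicer

# Hypothetical inputs: the .obj fragment and any volume defining the target voxel grid
modelNode = slicer.util.loadModel("fragment_skull.obj")
referenceVolume = slicer.util.getNode("case-109181_skull")

segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segLogic = slicer.modules.segmentations.logic()
segLogic.ImportModelToSegmentationNode(modelNode, segmentationNode)

# Define the rasterization grid before export; a mismatched or coarse grid
# is a common source of labelmap artifacts
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(referenceVolume)
segmentationNode.CreateBinaryLabelmapRepresentation()

labelmapNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode")
segLogic.ExportVisibleSegmentsToLabelmapNode(segmentationNode, labelmapNode, referenceVolume)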

Would you be able to fill it using something like SurfaceWrapSolidify?

How did you generate the .obj? You don't have the original scan? That's what you will need for this to work. Even if you were able to generate a decent labelmap by cleaning it up, registration will not work, as it expects an intensity image.

That’s a huge shame! I believe it was made via photogrammetry/Artec Spider work…

We will have to CT scan the crania when we are next in the field - but is there any possible method to use the .obj alone?

ANTs is an intensity-based registration framework, so it does need intensity images. You could convert all your volumes to labelmaps and do a labelmap registration…
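
If you do try the labelmap route in ANTsPy, the metric choice matters: mutual information and cross-correlation assume rich intensity content, so a mean-squares metric is a more plausible choice for binary images (a sketch; the file names and the metric choice are assumptions):

import ants

fixed_mask = ants.image_read("fragment_labelmap.nrrd")
moving_mask = ants.image_read("case-109181_labelmap.nrrd")

# Register the binary masks directly, using mean-squares metrics
# for both the affine and the deformable (SyN) stages
reg = ants.registration(fixed=fixed_mask, moving=moving_mask,
                        type_of_transform="SyN",
                        aff_metric="meansquares", syn_metric="meansquares")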

There might be deformable mesh registration frameworks that can do what I suggested on 3D models.