Osteotomy Planner v2.0

Hello,

I was testing out the Osteotomy Planner module, but it seems like version 2.0 is different from the one shown in this video.

I can easily create plane cuts, curve cuts, and perform local transforms, but I am not able to perform any bending on the bone segments. I hate to say it, but this is the one feature I was looking forward to the most :sweat_smile: :joy:

The module also does not have some of the basic visibility and model selection options shown in the original video, but I am guessing the module is currently being re-worked.

Is there a prior build I can use?
Do I need to downgrade my Slicer version? (currently on 4.13.0-2021-05-22)

Thank you!

Hello,

I wanted to come back to this in case anyone is interested in craniosynostosis planning. Hopefully someone will find this useful!

My original goal was to re-create the workflow presented by García-Mato et al., but the Kitware Osteotomy Planner was not working as expected. In any case, I later realized that I personally prefer using the Transforms module over the local transformations/interactions that the Kitware Osteotomy Planner uses.

Step-by-step:

  1. I used the Dynamic Modeler module to recreate the osteotomies indicated by the Surgeon. I mostly used plane cuts for the osteotomies (see the sketch after this list).

  2. For relative translations of the bone segments I used the Transforms module.
    [Figure: Advancement of the Orbital Segments, Superior View]

  3. For local rotations, I had to create my own axis of rotation (pivot axis) by creating a vertical line on the midsagittal plane, coincident with the anterior edge of the advanced orbital segments. To perform the actual rotation, I then used this script from the Slicer documentation.

[Figure: Local Rotation of the Left Orbital Segments, Superior View]

  4. I repeated steps 2-3 to recreate the motions and rotations indicated by the Surgeon. This required two additional pivot axes. We generated two different positions of the Bandeau, one more aggressive than the other.

[Figure: Comparison of the two Bandeau positions, Superior View]

  5. Experimental: Finally, we overlaid the Bandeau onto the soft-tissue segmentations to “see” the degree of improvement on the orbits. It would be nice to do something with displacement maps! (working on this now, will update later)
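As an aside, step 1 can also be scripted. Below is a minimal sketch of driving the Dynamic Modeler “Plane cut” tool from Python; the node names (“Skull”, “OsteotomyPlane”) are placeholders for whatever is in your scene:

```python
import slicer

# Minimal sketch: apply a Dynamic Modeler "Plane cut" to a model.
# "Skull" and "OsteotomyPlane" are placeholder node names.
inputModel = slicer.util.getNode("Skull")
cutPlane = slicer.util.getNode("OsteotomyPlane")
outputModel = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "SkullCutPositive")

toolNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLDynamicModelerNode")
toolNode.SetToolName("Plane cut")
toolNode.SetNodeReferenceID("PlaneCut.InputModel", inputModel.GetID())
toolNode.SetNodeReferenceID("PlaneCut.InputPlane", cutPlane.GetID())
toolNode.SetNodeReferenceID("PlaneCut.OutputPositiveModel", outputModel.GetID())

# Run the tool once; continuous update can be enabled instead.
slicer.modules.dynamicmodeler.logic().RunDynamicModelerTool(toolNode)
```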

Challenges moving forward:

  1. For every rotation I had to use the script from the documentation. Is there a more elegant way of specifying a local rotation in the Transforms module? Is the solution to convert the script into a Python module/function that I just call every time, instead of copying and pasting? (see the sketch after this list)

  2. Is there a way of “linking” or “creating dependencies” between the transformations? My goal is to update the initial advancement (for instance) and have Slicer make all of the updates downstream. Does that make sense?

  3. Is there a way of using a displacement map generated with Model to Model Distance to warp another model, say the soft tissue in step 5?
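On question (1), the copy-paste can be avoided by wrapping the doc snippet in a small function. A sketch of what I mean, assuming the pivot axis is a markups line node (the helper name is mine):

```python
import numpy as np
import vtk

def rotate_around_line(transformNode, lineNode, angleDeg):
    # Hypothetical helper, adapted from the rotation snippet in the
    # Slicer script repository: rotate by angleDeg (degrees) about the
    # axis defined by a markups line node.
    p0 = [0.0, 0.0, 0.0]
    p1 = [0.0, 0.0, 0.0]
    lineNode.GetNthControlPointPositionWorld(0, p0)
    lineNode.GetNthControlPointPositionWorld(1, p1)
    axis = np.array(p1) - np.array(p0)
    axis = axis / np.linalg.norm(axis)
    t = vtk.vtkTransform()
    t.Translate(p0)                                    # move pivot to the line origin
    t.RotateWXYZ(angleDeg, axis[0], axis[1], axis[2])  # rotate about the line direction
    t.Translate(-p0[0], -p0[1], -p0[2])                # move back
    transformNode.SetMatrixTransformToParent(t.GetMatrix())
```

With one linear transform node per pivot axis, each rotation then becomes a single call, e.g. `rotate_around_line(rotationNode, pivotLine, 15)`.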

Thank you!


Hi -

The answer to all three questions is yes, with some effort and programming. The first one is probably the easiest: if you can define a workflow and user interface that would simplify the interaction, you could make a scripted module that exposes exactly the controls needed. This module could also enforce dependencies or manage a transform hierarchy like you mention in your second question. It’s also pretty easy to define warping transforms based on landmarks. There is a lot of example code for all of these things.
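For the dependency idea, nested transform nodes already give you most of it; a minimal sketch with placeholder node names:

```python
import slicer

# Sketch of a transform hierarchy: edits to the parent propagate to
# everything observed below it. Node names are placeholders.
advancement = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "Advancement")
localRotation = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "LocalRotation")

# Chain the transforms: the local rotation is applied on top of the advancement.
localRotation.SetAndObserveTransformNodeID(advancement.GetID())

# The segment model observes the leaf transform, so changing "Advancement"
# later automatically re-positions the model downstream.
segmentModel = slicer.util.getNode("OrbitalSegment")  # placeholder name
segmentModel.SetAndObserveTransformNodeID(localRotation.GetID())
```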

If you aren’t able to learn or do the programming yourself, perhaps you can find a colleague or student who can help, or you can hire a programmer with the right skills to implement what you need.


Hi,

Sorry that I missed your previous messages. We are in the process of pushing updates (including bending) to the Osteotomy Planner, so they should be available in the next week.


@Sam_Horvath,

Thank you! I will keep an eye out for the updates. I have another case coming up!

@pieper,

Thank you for your response!

> The answer to all three questions is yes, with some effort and programming. The first one is probably the easiest: if you can define a workflow and user interface that would simplify the interaction, you could make a scripted module that exposes exactly the controls needed. This module could also enforce dependencies or manage a transform hierarchy like you mention in your second question.

I figured I would have to build something; I just wanted to know if someone had already looked into it, to save me some time :rofl:

> It’s also pretty easy to define warping transforms based on landmarks. There is a lot of example code for all of these things.

So I tried this the other day, but it did not work as expected. I believe (I could be wrong) that the warping transform is affected by the number of fiducials used. I felt like I was placing fiducials on arbitrary points of the model’s surface, instead of on landmarks, just so the warping transform wrapped around a feature better. Does that make sense? I am going to re-test this and create a new post.

@Fluvio_Lobo it sounds like there might be a lot of what you need in the code that Sam is going to push, so it’ll be worth testing and looking through that. Perhaps you can contribute any extra features to that code base. It’s true that warping transforms (e.g. thin plate spline) are complex and usually you need to carefully match landmarks. Using a lower dimensional transform might work better.

I agree. While developing the “Curved Planar Reformat” bending module (currently in the Sandbox extension) I spent quite a bit of time on this. First I used thin-plate spline (TPS) and b-spline transforms; the forward transform worked mostly OK, but the inverse computation was very unstable. Then I switched to a grid transform (same as a b-spline transform but using linear interpolation between grid points), and the inverse computation became stable for all reasonable inputs.

Inverse computation is very important because when you warp models or markups you need the forward (a.k.a. modeling) transform, but when you warp an image you need the inverse (a.k.a. resampling) transform. The inverse transform is also useful for synchronizing information between the original and warped spaces (you can specify landmark points, cutting curves, etc. in either space and transform them to the other).
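In a script this is mostly transparent, because the same transform node serves both directions; a small illustration with placeholder node names:

```python
import slicer

warpNode = slicer.util.getNode("WarpingTransform")  # placeholder name

# Models and markups use the forward (modeling) transform directly:
skinModel = slicer.util.getNode("SoftTissue")       # placeholder name
skinModel.SetAndObserveTransformNodeID(warpNode.GetID())

# Images are resampled with the inverse; Slicer takes care of this when
# the transform is hardened on a volume:
headCT = slicer.util.getNode("HeadCT")              # placeholder name
headCT.SetAndObserveTransformNodeID(warpNode.GetID())
slicer.vtkSlicerTransformLogic().hardenTransform(headCT)

# The inverse is also available explicitly if needed:
inverseTransform = warpNode.GetTransformFromParent()
```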

@lassoan, @pieper,

Below is my approach to using Fiducial Registration to create a warping transform and deform the skin of the model to match the motion of the osteotomies.

  1. I appended the model of the Skull to both the Original Orbital Bandeau and the Advanced/Modified Bandeau, creating a “Source” and “Target” model. My goal here is to ensure that the transformation requires a deformation of the model, rather than simply resulting in a translation.

[Figures: the “Source” and “Target” models]

  2. I added “From” and “To” fiducials to both the “Source” and “Target” models, with some of these coinciding on the Skull.


  3. I generated a “Warping” transform using the Fiducial Registration module and applied it to a model of the patient’s soft tissue (see the sketch after this list).
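For reference, an equivalent warping transform can be built at the VTK level from the two fiducial lists. This is not the Fiducial Registration module’s own API, just a sketch with placeholder node names (“From”, “To”, “SoftTissue”):

```python
import slicer
import vtk

def landmarks_to_points(markupsNode):
    # Hypothetical helper: copy markups control points into a vtkPoints.
    pts = vtk.vtkPoints()
    for i in range(markupsNode.GetNumberOfControlPoints()):
        p = [0.0, 0.0, 0.0]
        markupsNode.GetNthControlPointPositionWorld(i, p)
        pts.InsertNextPoint(p)
    return pts

# Thin-plate spline warp defined by corresponding "From"/"To" landmarks
tps = vtk.vtkThinPlateSplineTransform()
tps.SetBasisToR()  # 3D basis function
tps.SetSourceLandmarks(landmarks_to_points(slicer.util.getNode("From")))
tps.SetTargetLandmarks(landmarks_to_points(slicer.util.getNode("To")))

warpNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLTransformNode", "WarpingTransform")
warpNode.SetAndObserveTransformToParent(tps)

# Warp the soft-tissue model with the forward transform
slicer.util.getNode("SoftTissue").SetAndObserveTransformNodeID(warpNode.GetID())
```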


This seemed to work, but I believe it is influenced by areas with more fiducials. The right orbit of the skull model has 2-3 more fiducials and there seems to be more deformation on that side. A simple overlay of the two models shows this.

So, my questions are:

  1. Is this a valid approach?
  2. Is this an efficient approach, or is there a more generalized solution?
  3. Is there a way of using the surface vertices of the models as fiducials?

Thank you!

Hi @Fluvio_Lobo, thank you for sharing the information and the workflow. I find this very helpful. Right now the workflow feels a bit too complicated for me to use, but it gets the work done. Once you have mastered this workflow, please share it with us.

A few comments, in case they are of any help to you.
About the soft tissue prediction: are you applying the same amount of transformation to the soft tissue as to the bone segments? Usually the soft tissue response is less than the hard tissue movement, generally around 50%, but this depends on the area of the face. Would it be possible to adjust for this?

Is it not possible to implement a proportional transformation from the fiducial points to the skin?


Thank you once again for sharing.

Hey @manjula! Glad you found this useful. I am trying to make some adjustments and update this post. My goal is to get something a little more streamlined before the next procedure.

> A few comments, in case they are of any help to you.
> About the soft tissue prediction: are you applying the same amount of transformation to the soft tissue as to the bone segments? Usually the soft tissue response is less than the hard tissue movement, generally around 50%, but this depends on the area of the face. Would it be possible to adjust for this?

Part of the discussion on this step of the workflow is here. I don’t have anything new at the moment, but you are correct: a straight warping transformation won’t yield an accurate result. At this point I am mostly experimenting. I will likely end up using an FEA solution to really simulate the deformation of the skin.

> Is it not possible to implement a proportional transformation from the fiducial points to the skin?

I do not know enough yet, but I do not see why not. You can probably scale the transform matrix programmatically (see the sketch below).
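For a pure translation like the advancement, something along these lines might work; a sketch using slicer.util helpers, with placeholder node names:

```python
import slicer

# Sketch: apply only a fraction (e.g. 50%) of a linear advancement to the
# soft tissue by scaling the translation part of the 4x4 matrix.
# "Advancement" is a placeholder node name; this only makes sense for
# translations, not for general rotations or warping transforms.
fullTransform = slicer.util.getNode("Advancement")
m = slicer.util.arrayFromTransformMatrix(fullTransform)
m[0:3, 3] *= 0.5  # halve the translation component

softTissueTransform = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLLinearTransformNode", "SoftTissueAdvancement")
slicer.util.updateTransformMatrixFromArray(softTissueTransform, m)
```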

Hi @Fluvio_Lobo, can you give a little background on your goals? Will you be doing this frequently?

@pieper,

I currently support the CMF team at Orlando Health (Orlando, FL). We are hoping to bring planning and 3D printing in-house, which we are already in the process of doing.

CMF surgeons wanted to explore planning and printing solutions for craniosynostosis cases. Specifically, two procedures so far: Cranial Vault Remodeling (CVR) and Fronto-Orbital Advancement (FOA). There are more, but these are the ones I have worked on.

> Hi @Fluvio_Lobo, can you give a little background on your goals?

My main goal is to establish a workflow for planning and for 3D printing anatomical replicas resulting from that planning; for instance, a printed model of the Orbital Bandeau that was modeled during planning.

Here is a picture of what that looks like:

A secondary goal is to estimate the post-surgical appearance of the patient’s skin after the advancement. This is why I am playing around with warping transforms and will soon have to use FEA. I still want to explore a non-realistic warping method as a “guesstimate” :rofl:

I am working on a few tricks based on your feedback and other feedback I got here. I have not had a chance to test them and update this post, but I will do so before the end of the week.

> Will you be doing this frequently?

Depends on your definition of frequently.

The latest estimate is 8-12 craniosynostosis cases per year (2-3 for FOA). The CMF crew has another FOA and two CVRs this summer. I think 8-12 and 2-3 are underestimates.

These procedures do not have the volume of, say, orthognathic surgery, which is why this space is so underserved by commercial applications.


Thanks for the extra background @Fluvio_Lobo; yes, that answers my question. I was curious whether this was a one-time test or something you would be doing multiple times, and it does sound like some time investment in custom development could greatly improve the workflow and the accuracy of the analysis, and hopefully the surgery too.

I worked on a somewhat related topic for my PhD some years ago so I’m really interested in seeing what can be done with the current generation of hardware and software together with all the new interactivity in Slicer (VR, XR, etc).


> I was curious whether this was a one-time test or something you would be doing multiple times, and it does sound like some time investment in custom development could greatly improve the workflow and the accuracy of the analysis, and hopefully the surgery too.

I am trying to identify procedures that need workflow optimization and maybe a custom solution. Craniosynostosis seems to be one of those procedures.

> I worked on a somewhat related topic for my PhD some years ago so I’m really interested in seeing what can be done with the current generation of hardware and software together with all the new interactivity in Slicer (VR, XR, etc).

This is really cool work and waaaay ahead of its time! Those pictures made me think of N64 Goldeneye :rofl:

I still need to make some time this week to work more on the skin warping and, hopefully, the simulation. I am hoping to update this post with my findings, and I may tag you if that is ok!

Yes, definitely - very interested to see this develop!