Registration of animated model against 4D multi-volume sequence

We have a nice, smooth, 24 fps animated 3D polygonal heart, based on a single human patient but artist-optimised. We use it in an external realtime 3D application, but we need to adjust it for each real patient based on their CT scans.

We’ve seen some demonstrations from @lassoan of Slicer’s powerful registration capabilities, but how might this apply to an animated model and a multi-frame CT dataset? I expect the heart animation in our 3D model would be a set of static keyframes we interpolate between, but it’s unlikely those keyframes map 1:1 to timepoints in the CT scans.

Manually registering each keyframe completely separately would be possible but hugely time-consuming. Would Slicer allow us to perform gross manipulations (positioning and rotating the whole model, crude scaling) across all keyframes, and then fine-tune each keyframe/pose against a given timepoint in the CT scan?

Hmm, that’s a tough one. I don’t think there are any existing tools to do exactly that, but what I’d suggest is exploring a landmark-based thin plate spline deformation of the model to match the CT. The first step would probably be a time-base correction so that the animated model matches the patient’s cardiac cycle; then you’d make correspondences between important anatomical structures. You’d never expect the artist’s model to exactly fit the patient, but you could probably make it look realistic. There’s definitely some programming work involved.
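To make the landmark idea concrete, here is a minimal pure-numpy sketch of a 3D thin plate spline fitted from corresponding landmarks. The `fit_tps`/`apply_tps` helpers are illustrative only, not Slicer's actual API (Slicer uses VTK transform classes under the hood):

```python
import numpy as np

def fit_tps(src, dst):
    """Fit a 3D thin-plate-spline warp mapping src landmarks onto dst.
    src, dst: (n, 3) arrays of corresponding landmark positions."""
    n = src.shape[0]
    # Pairwise distances give the r (biharmonic) kernel used in 3D.
    K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y, z]
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 4, 3))
    b[:n] = dst                                    # landmarks map exactly
    params = np.linalg.solve(A, b)
    return src, params[:n], params[n:]             # landmarks, weights, affine

def apply_tps(tps, pts):
    """Warp an (m, 3) point set (e.g. one keyframe's mesh vertices)."""
    src, w, a = tps
    U = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
```

Fitting is done once from the landmark pairs; the resulting warp can then be applied to any point set, e.g. each keyframe's vertex array in turn.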

Thanks Steve.

We’ve seen the landmark approach (I think) used with a single volume and a
single 3D model, and it looked probably good enough for cases where the
basic topology of the hearts was equivalent: clearly not for cases with
different structures, since it deforms the 3D heart’s mesh without cutting it.

What I was unsure about is where we have a lot of frames in our 3D heart:
at the extreme, basically 24 different models. Avoiding duplication of
work would be key here, e.g. applying changes to all versions at once
while keeping their differences. I wondered if the deformations are done
as some sort of 3D spatial grid deformation, e.g. mapping a linear
3-space to a warped one. Then the same grid deformation could be applied
to every animation frame.

Of course there might be per-frame tweaks but being able to do a first pass
in such a fashion would be a huge boon!
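The sort of grid deformation described above could be sketched as a displacement field on a regular lattice, sampled with trilinear interpolation and applied identically to every frame. This is plain numpy and purely illustrative; `warp_with_grid` is a hypothetical helper, not an existing Slicer function:

```python
import numpy as np

def warp_with_grid(pts, grid_disp, origin, spacing):
    """Displace (m, 3) points by trilinearly sampling a displacement grid.
    grid_disp: (nx, ny, nz, 3) displacement vectors on a regular lattice."""
    g = (pts - origin) / spacing                  # continuous grid coordinates
    i0 = np.floor(g).astype(int)
    shape = np.array(grid_disp.shape[:3])
    i0 = np.clip(i0, 0, shape - 2)                # clamp to valid cells
    f = g - i0                                    # fractional position in cell
    d = np.zeros_like(pts)
    for dx in (0, 1):                             # sum over the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, f[:, 0], 1 - f[:, 0])
                     * np.where(dy, f[:, 1], 1 - f[:, 1])
                     * np.where(dz, f[:, 2], 1 - f[:, 2]))
                d += w[:, None] * grid_disp[i0[:, 0] + dx,
                                            i0[:, 1] + dy,
                                            i0[:, 2] + dz]
    return pts + d
```

The same `grid_disp` would be reused for all 24 keyframes; per-frame tweaks could then be smaller corrections layered on top.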

Hi John -

Yes, the transforms are hierarchical, so you could apply a bulk linear or nonlinear transform to get the artist’s heart model in roughly the right spot, then compose that with a per-frame deformation to adapt to differences in the motion, and then animate the result. It would definitely be possible to get an approximation with that method.
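As a rough sketch of that composition (again plain numpy rather than Slicer's transform nodes; `compose` and the example values are hypothetical):

```python
import numpy as np

def compose(bulk_affine, frame_warp, pts):
    """Apply a bulk 4x4 affine first, then a per-frame warp function.
    frame_warp: callable mapping (m, 3) points to (m, 3) points."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    coarse = (homo @ bulk_affine.T)[:, :3]
    return frame_warp(coarse)

# One shared affine positions/scales the whole animation; each keyframe
# then gets its own (possibly identity) fine correction.
bulk = np.diag([1.1, 1.1, 1.1, 1.0])    # hypothetical crude scale
bulk[:3, 3] = [5.0, 0.0, -2.0]          # plus a translation
keyframes = [np.random.rand(100, 3) for _ in range(24)]
identity = lambda p: p                  # per-frame warps start as identity
registered = [compose(bulk, identity, kf) for kf in keyframes]
```

In Slicer itself this layering would typically be done by nesting transform nodes rather than by hand, but the principle is the same: one shared coarse transform, plus a cheap per-frame correction.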

Hope that helps,