We have a smooth, 24 fps animated 3D polygonal heart model, based on a single human patient but artist-optimised. We use it in an external real-time 3D application, but need to adapt it to each real patient based on their CT scans.
We’ve seen some of @lassoan’s demonstrations of Slicer’s powerful registration capabilities, but how might this apply to an animated model and a multi-frame CT dataset? I expect the heart animation in our 3D model would be a set of static keyframes we interpolate between, but it’s unlikely those keyframes map 1:1 to timepoints in the CT scans.
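To make the timepoint-mismatch concrete, here is a minimal sketch of what I mean by sampling the animation at an arbitrary CT timepoint: linearly blending vertex positions between the two bracketing keyframes. All names here (`sample_animation`, `keyframes`, `key_times`) are hypothetical, not from any Slicer API:

```python
import numpy as np

def sample_animation(keyframes, key_times, t):
    """Interpolate keyframe vertex arrays to an arbitrary time t.

    keyframes: list of (N, 3) vertex arrays, one per keyframe
               (all keyframes share the same vertex ordering)
    key_times: sorted 1-D array of keyframe timestamps (seconds)
    t:         the CT timepoint we want a pose for
    """
    # Clamp t to the animation's time range
    t = np.clip(t, key_times[0], key_times[-1])
    # Find the keyframe interval [i, i+1] containing t
    i = np.searchsorted(key_times, t, side="right") - 1
    i = min(i, len(key_times) - 2)
    # Linear blend factor between keyframe i and keyframe i+1
    w = (t - key_times[i]) / (key_times[i + 1] - key_times[i])
    return (1.0 - w) * keyframes[i] + w * keyframes[i + 1]
```

This is only a linear blend; the point is that each CT timepoint would get an interpolated pose rather than an exact keyframe.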
Manually registering each keyframe separately would be possible but hugely time-consuming. Would Slicer allow us to perform gross manipulations (position and rotate the whole model, crude scaling) across all keyframes, and then fine-tune each keyframe/pose against a given timepoint in the CT scan?
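In case it helps clarify the workflow we're hoping for, here is a rough sketch of the transform composition we have in mind: one gross affine shared by every keyframe, composed with a small per-keyframe correction. This is plain NumPy pseudo-structure, not Slicer code, and all function names are hypothetical:

```python
import numpy as np

def make_affine(scale=1.0, translation=(0.0, 0.0, 0.0)):
    """Build a simple 4x4 affine (uniform scale + translation)."""
    m = np.eye(4)
    m[:3, :3] *= scale
    m[:3, 3] = translation
    return m

def apply_affine(matrix, vertices):
    """Apply a 4x4 affine to an (N, 3) vertex array."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homo @ matrix.T)[:, :3]

def register_animation(keyframes, gross, fine_per_frame):
    """One gross transform shared by all keyframes, then a
    per-keyframe fine correction: v' = fine_k @ gross @ v."""
    return [apply_affine(fine @ gross, kf)
            for kf, fine in zip(keyframes, fine_per_frame)]
```

The idea is that the gross transform would be set once interactively, and only the per-keyframe `fine` transforms would need individual adjustment against the matching CT timepoint.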