Computing airway motion from 4DCT

Hi, I’m attempting to compute the motion of a pediatric airway from 4DCT images. Eventually, I’d like to use the displacement fields to control the motion/location of the segmented airway surface in a moving-mesh CFD simulation. I’ve been attempting to perform a B-spline registration with the Sequence Registration module, but with limited success: the displacement vectors are not capturing the airway motion. I believe the issue is that the motion I’m trying to capture is very small with respect to the entire image: about 95% of the image stays in place and only the airway walls move. I’ve been experimenting with the registration parameters but haven’t found a combination that works yet. My questions are:

  1. Is the Sequence Registration module the right approach for what I want to do?
  2. What parameters should I be adjusting to make sure the registration captures motion in a small area? So far I’ve set the grid spacing equal to a single voxel and attempted to use every point in the optimization/cost function instead of a random selection (this is obviously brute force, and it unfortunately exceeded the memory of my laptop).
  3. Is it possible to provide a mask before performing the registration, or to crop the full image? I know you can use a mask with Elastix, but it doesn’t look like it’s an option inside Slicer.

Operating system: macOS Catalina
Slicer version: 4.10.2

Yes, the SequenceRegistration extension should be able to do this nicely, providing a 3D displacement field that changes in time.

The default generic registration preset should work. Setting the grid spacing to a single voxel would make the registration very unstable and tremendously slow, so don’t do that.

As with all intensity-based image registration tasks, you need to crop the volume to the minimum necessary region, plus a small margin. The computed displacement field can then be applied to the entire image.
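If it helps to see the cropping step in isolation: the core operation is just array slicing with a margin around a region of interest. Here is a minimal numpy sketch (the bounding-box indices are made up for illustration; inside Slicer you would normally do this with the Crop Volume module instead):

```python
import numpy as np

def crop_with_margin(volume, bounds, margin):
    """Crop a 3D array to bounds=((i0,i1),(j0,j1),(k0,k1)), padded by `margin` voxels."""
    slices = []
    for (lo, hi), size in zip(bounds, volume.shape):
        slices.append(slice(max(lo - margin, 0), min(hi + margin, size)))
    return volume[tuple(slices)]

# Example: a 100^3 image where only a small region around the airway moves.
image = np.zeros((100, 100, 100))
cropped = crop_with_margin(image, ((40, 60), (35, 55), (20, 80)), margin=5)
print(cropped.shape)  # (30, 30, 70)
```

The same bounds can later be used to place the cropped result (or the displacement field computed from it) back into the coordinate frame of the full image.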

After cropping, masking is very rarely necessary. It is only useful if you want to exclude a small but very prominent object from the registration, such as a fiducial marker, implant, or surgical tool. Trying to exclude larger regions using masking (instead of cropping) leads to slowdowns and problems in random image sampling. The masking option is available in the SlicerElastix module, so you can experiment with it there (between two selected frames). Masking is not yet exposed in the Sequence Registration module GUI, since it is so rarely needed, but if you confirm that you need it (registration does not work without it, and works well when you use it) then we can add it.

Thanks for the responses. After cropping the volume, the displacement fields and mapped images look much more reasonable.

I’d now like to compute/interpolate the displacement values at specific control points on a surface mesh (to start, having the displacement at every surface mesh node should work). After reading around in the support section, it seems this is pretty straightforward (Registration Deformation Export). I’m assuming I can import the full sequence of displacement fields, compute an array of displacements at each mesh point, and then export it as VTK? I’d also like to do this in the Python environment. Can you point me toward the relevant functions or documentation?

Also, just confirming: each displacement vector is computed with respect to the first/reference image and not the previous image?

Yes, it is straightforward. You can apply the computed transforms to any node: a model (surface mesh), markups (point list, curve), image, etc.
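To make the "probe the field at mesh points" step concrete outside of Slicer: once a displacement field is available as an array on a regular grid, sampling it at mesh vertices is a trilinear interpolation. A sketch using scipy (the grid axes, field values, and vertex coordinates are made up for illustration):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Displacement field on a regular grid: shape (nx, ny, nz, 3), one vector per voxel.
x = np.linspace(0.0, 10.0, 11)
y = np.linspace(0.0, 10.0, 11)
z = np.linspace(0.0, 10.0, 11)
field = np.zeros((11, 11, 11, 3))
field[..., 0] = 1.0  # constant 1 mm displacement along x, just for the demo

interp = RegularGridInterpolator((x, y, z), field)

# Mesh vertices (N, 3), in the same physical coordinate system as the grid.
vertices = np.array([[0.5, 0.5, 0.5], [2.3, 4.1, 7.7]])
disp_at_vertices = interp(vertices)  # shape (N, 3)
print(disp_at_vertices)
```

For a sequence, you would repeat this per timestep to build a (timesteps, N, 3) array, and attach each timestep's vectors to the mesh as a per-point data array before writing the VTK file.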

All features of Slicer are available in Python. You just need to run the scripts in the Python environment provided by Slicer.

Yes, the Sequence Registration module computes transformations to/from a chosen reference image.
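One practical consequence for the CFD side: since every field is relative to the same reference frame, a mesh point at reference position X sits at X + u_t(X) at timestep t. If your solver wants frame-to-frame increments instead, they follow by simple subtraction on the reference grid. A small numpy illustration (the displacement values here are invented; `u` stands for displacements already sampled at N mesh points over T timesteps):

```python
import numpy as np

# Displacements of N=2 mesh points over T=3 timesteps, all relative to the reference frame.
u = np.array([
    [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]],  # t = 0 (reference)
    [[1.0, 0.0, 0.0], [0.5, 0.0, 0.0]],  # t = 1
    [[1.5, 0.5, 0.0], [1.0, 0.5, 0.0]],  # t = 2
])

# Per-step increments: how far each point moves between consecutive frames.
increments = np.diff(u, axis=0)  # shape (T-1, N, 3)
print(increments[0])  # displacement increment between t=0 and t=1
```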

Everything should be fairly straightforward. For an introduction to using Slicer features from Python, you can check out Slicer programming tutorials.

I’m running into issues extracting the displacement vector at the individual mesh locations, and I think it has to do with the fact that I’m trying to extract data from a sequence transform.

I’ve converted the transform to a vector volume using the convert function in the Transforms module, using the original, unregistered MultiVolume sequence as the reference volume. I’ve saved the displacement volume as .nrrd and re-loaded it as a volume sequence. I then use the Probe Volume with Model module to extract the displacement vector at each mesh point, but this only gives me a single scalar value at each point, not the vector. When I visualize the 3D slices, they are black and white, and I can step through 3 displacement images, which I assume are the three vector components. (If I were looking at a sequence of displacement vectors from the full sequence registration, there would be many more time steps.) When I load the displacement image as a single volume, the slices display as multi-color images with 3 vector components at each point, but the volume does not show up as an option to extract from in the Probe Volume with Model module. Thoughts?

I’d like to extract the displacement values from each timestep of the sequence transform. It seems that the convert and probe tools are only set up for a single volume. I can loop through this process if that’s the case (once I can extract the vector displacement values at each point).
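For reference, the component-axis confusion can be sanity-checked with plain numpy before going back into Slicer. Displacement-field NRRDs often store the vector component axis first (shape (3, nx, ny, nz), which is why the file steps through as three scalar images), while probing code typically expects component-last. A sketch with a made-up array standing in for the loaded file:

```python
import numpy as np

# Stand-in for a displacement volume read from .nrrd: component axis first.
data = np.zeros((3, 64, 64, 32))
data[0] = 1.0  # x-component everywhere, just for illustration

# Move the vector components to the last axis: (nx, ny, nz, 3)
disp = np.moveaxis(data, 0, -1)
print(disp.shape)  # (64, 64, 32, 3)
print(disp[10, 10, 10])  # [1. 0. 0.]
```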

Yes, you need to write a short Python script that iterates through the sequence. See for example how the Crop Volume Sequence module does it.
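As a sketch of that loop: the per-timestep logic can be kept independent of Slicer by passing in a callable that maps a point to its transformed position. Inside Slicer that callable would come from something like `transformNode.GetTransformToParent().TransformPoint`, with the sequence browser advanced between iterations; those Slicer-specific parts are only indicated in comments, since the exact node setup depends on your scene.

```python
import numpy as np

def displacements_at_points(transform_point, points):
    """Displacement vectors at `points`, given a callable mapping a point to its transformed position."""
    points = np.asarray(points, dtype=float)
    moved = np.array([transform_point(p) for p in points])
    return moved - points

# Inside Slicer, the loop would look roughly like:
#   for itemIndex in range(browserNode.GetNumberOfItems()):
#       browserNode.SetSelectedItemNumber(itemIndex)
#       tfm = transformNode.GetTransformToParent()
#       u = displacements_at_points(tfm.TransformPoint, mesh_points)
#       ...store or export u for this timestep...

# Demo with a toy transform: shift everything by (1, 0, 0).
shift = lambda p: (p[0] + 1.0, p[1], p[2])
result = displacements_at_points(shift, [[0, 0, 0], [2, 3, 4]])
print(result)  # each point moved by (1, 0, 0)
```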