I have two chest CT scans captured at different time points (e.g., inspiration and expiration) from which I segment the same anatomical structure (e.g., airways). Given these two instances of the same object (with slight differences, so no obvious one-to-one correspondence), is it possible in Slicer to interpolate between the two shapes (e.g., to get instances of the deforming airway between inspiration and expiration)?
I looked around on Discourse and elsewhere online and didn’t find an obvious way to do this (I apologize if I missed something). Any pointers (or, if this is a naive question, an explanation of why this is not a reasonable thing to do) would be greatly appreciated.
You can register the two time points using the SlicerElastix or SlicerANTs extensions. You get a full displacement field as a result, which tells you exactly where each voxel goes between the two images. You can compute statistics of the displacement values, warp one image and/or segmentation to match the other image, etc.
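If you want to script this, SlicerElastix has a Python API. A minimal sketch (node variables are placeholders; the preset index and exact keyword arguments may differ between extension versions, see the SlicerElastix README):

```python
from Elastix import ElastixLogic, RegistrationPresets_ParameterFilenames
import slicer

# Placeholder nodes - load/select your two time points first
fixedVolumeNode = slicer.util.getNode("ct_expiration")
movingVolumeNode = slicer.util.getNode("ct_inspiration")
outputVolumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode")
outputTransformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLTransformNode")

logic = ElastixLogic()
# Pick a registration preset (index 0 is just an example)
parameterFilenames = logic.getRegistrationPresets()[0][RegistrationPresets_ParameterFilenames]
logic.registerVolumes(fixedVolumeNode, movingVolumeNode,
                      parameterFilenames=parameterFilenames,
                      outputVolumeNode=outputVolumeNode,
                      outputTransformNode=outputTransformNode)
```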
What information would you like to get from these images?
Thanks for the pointer. I used Elastix to get the displacement values (and referred to this post and this post for additional tips). I also tried the SegmentRegistration module directly on the segmentations, but the Elastix output on cropped versions of the original images looks better.
I am able to apply the displacement vectors, but am getting stuck downstream. Here is my current workflow (a rough code sketch follows the list):
(a) I get the KJI coordinates of my segmented object from timestep1 (airway at expiration)
(b) I convert to IJK and then to RAS using the IJK-to-RAS matrix from timestep1’s labelVolumeNode
(c) I displace each point by the vector from its corresponding voxel in the displacement field
(d) I convert the transformed RAS coordinates to IJK using the RAS-to-IJK matrix from timestep2’s volume (airway at inspiration)
(e) I convert to KJI and write the matrix as an image (with some modification to handle Q1 below)
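In code, steps (a)-(e) look roughly like this (a sketch, not my exact script; `displacement_at` is a placeholder for however I sample the displacement field, and `scale` is a factor I apply to the displacement, more on that below):

```python
import numpy as np
import vtk
import slicer

# Placeholder node names
labelNode1 = slicer.util.getNode("airway_expiration")  # timestep1 labelmap
volumeNode2 = slicer.util.getNode("ct_inspiration")    # timestep2 volume
scale = 1.0  # factor applied to the displacement

# (a) KJI coordinates of segmented voxels (numpy volume arrays are indexed K,J,I)
kji = np.argwhere(slicer.util.arrayFromVolume(labelNode1) > 0)

# (b) KJI -> IJK, then IJK -> RAS with timestep1's IJK-to-RAS matrix
ijkToRas = vtk.vtkMatrix4x4()
labelNode1.GetIJKToRASMatrix(ijkToRas)
m = slicer.util.arrayFromVTKMatrix(ijkToRas)
ras = (m @ np.c_[kji[:, ::-1], np.ones(len(kji))].T).T[:, :3]

# (c) displace each point by the (scaled) vector from the displacement field;
# displacement_at() is a placeholder for sampling the field at RAS positions
ras = ras + scale * displacement_at(ras)

# (d) RAS -> IJK with timestep2's RAS-to-IJK matrix
rasToIjk = vtk.vtkMatrix4x4()
volumeNode2.GetRASToIJKMatrix(rasToIjk)
m2 = slicer.util.arrayFromVTKMatrix(rasToIjk)
ijk2 = (m2 @ np.c_[ras, np.ones(len(ras))].T).T[:, :3]

# (e) IJK -> KJI, rounded to integer voxel indices before writing the image
kji2 = np.round(ijk2[:, ::-1]).astype(int)
```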
Q1: Some of the KJI coordinates I get from step (e) are outside the bounds of timestep2’s image. Why would this be happening? When I convert from RAS to IJK using timestep1’s RAS-to-IJK matrix, some are also out of bounds (including negative KJI indices).
Q2: I am following the code from the script repository page to convert from RAS to IJK, but the conversion to int causes my image to get chopped up between slices. I can recover the segmentation by filling holes, but is this expected?
I want to create intermediate instances of the deforming airway between the inspiratory and expiratory segmentations.
I hope that all makes sense. Thanks for your help, I really appreciate it.
That is not a problem but a feature. To transform images you need the resampling transform (fixed to moving); to transform everything else (models, point sets, etc.) you need the modeling transform, which is the inverse of the resampling transform (moving to fixed). You can only compute the inverse of a transform at positions where it changes continuously, so you must be able to extrapolate the transform outside the fixed image’s domain. In fact, Slicer uses VTK transforms, which can extrapolate all transforms to the entire 3D domain.
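For example, both directions are available on a transform node in Python and can be evaluated at any point in 3D space (node name is a placeholder):

```python
import slicer

transformNode = slicer.util.getNode("Transform")  # placeholder name

# Modeling direction (moving -> fixed): used for models, point sets, etc.
toParent = transformNode.GetTransformToParent()
# Resampling direction (fixed -> moving): used for images
fromParent = transformNode.GetTransformFromParent()

# VTK transforms extrapolate beyond the fixed image's domain,
# so any point can be mapped in either direction:
print(toParent.TransformPoint([0.0, 0.0, 0.0]))
print(fromParent.TransformPoint([0.0, 0.0, 0.0]))
```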
I don’t understand what you mean by “chopped up between slices”. You don’t need to apply a transform to a node by iterating through all the points of a model or all the voxels of an image yourself (that would be error-prone and extremely slow). Instead, you can set a transform node as the parent of a node and Slicer takes care of everything.
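For example (node names are placeholders):

```python
import slicer

segmentationNode = slicer.util.getNode("Segmentation")  # placeholder names
transformNode = slicer.util.getNode("Transform")

# Parent the node under the transform; Slicer picks the correct
# (modeling or resampling) direction automatically
segmentationNode.SetAndObserveTransformNodeID(transformNode.GetID())

# Optionally make the result permanent:
slicer.vtkSlicerTransformLogic().hardenTransform(segmentationNode)
```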
The easiest solution is to use linear interpolation. For the bulk rigid transform component you can use SLERP for the orientation and linear interpolation for the position. You can represent all the remaining warping as a grid transform (displacement field) and interpolate it by scaling between 0.0 (starting point) and 1.0 (endpoint); a sketch follows.
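A minimal sketch of that interpolation with NumPy/SciPy, assuming `R` and `p` hold the rotation matrix and translation of the rigid component and `dfield` is the displacement field as a (K, J, I, 3) array:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

t = 0.5  # interpolation parameter in [0, 1]

# Rigid component: SLERP the orientation, lerp the position.
# R (3x3 rotation) and p (translation) are assumed to be given.
key_rots = Rotation.from_matrix(np.stack([np.eye(3), R]))
R_t = Slerp([0.0, 1.0], key_rots)(t).as_matrix()
p_t = t * p

# Warping component: scale the displacement vectors voxel-wise
dfield_t = t * dfield
```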
I attached an image to help explain what I meant by “chopped up”. I am guessing this might be partly because there is no one-to-one correspondence between the two shapes, since the inspiratory segmentation (which is larger) deforms into the expiratory segmentation?
How can I go about doing this for the Elastix transform? I read some other posts that mention the “Split” functionality in the Transforms module, but from my understanding this can only be applied to composite transforms (after hardening?).
I don’t want to just set the transform node as a parent, because I want to be able to interpolate through the transformation. In step (c) of my previous post, I applied a scaling factor (from 0 to 1) to the displacement field to simulate this interpolation. I am doing this all in Python and it wasn’t prohibitively slow (roughly 1-2 seconds).
I see, the image is “chopped up” because you are trying to transform an image by mapping each voxel with the modeling (moving-to-fixed) transform. The modeling transform is the correct way to transform the points of a model and most other types of objects, but not images. Images must be transformed using the resampling (fixed-to-moving) transform, which provides, for each output voxel position, the corresponding voxel position in the input image. Normally you don’t even need to think about this, because when you apply a transform to a node, Slicer computes and uses the modeling or resampling transform as appropriate.
You can do this using Elastix in two registration passes:
1. Compute a rigid transform and get the result as a matrix (disable “force grid output transform” in the advanced settings).
2. Harden the rigid transform on the moving image (see the sketch below).
3. Compute a deformable transform and get the result as a grid transform (enable “force grid output transform”).
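Step 2 can be done by right-clicking the transform in the Data module, or from Python. A sketch with placeholder node names:

```python
import slicer

# Output of the rigid pass (step 1) and the moving image
rigidTransformNode = slicer.util.getNode("rigid_transform")
movingVolumeNode = slicer.util.getNode("ct_inspiration")

movingVolumeNode.SetAndObserveTransformNodeID(rigidTransformNode.GetID())
slicer.vtkSlicerTransformLogic().hardenTransform(movingVolumeNode)
# Now run the deformable pass (step 3) on the hardened moving image
```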
It wasn’t slow because you only transformed a small fraction of the voxels, without proper interpolation. Transforming an image properly (computing an interpolated value for each voxel) takes 10-100x more operations, but if you use one of the image resampling modules the computation time will still be about the same. Note that although you scale the two transform components separately, you can use the composite transform to transform the input image all at once.
Thank you very much for your responses and explanations for each of those items. I will work through everything and write back in a couple days (hopefully with good results).
Thanks. I have been able to get the rigid transform and the deformable transform.
Why is that wrong? I thought I could use the images to extract the deformation field but then apply it only to the airways. I am not interested in getting the image states between start (exhalation) and end (inhalation), just the segmented airway states along that deformation spectrum (the figure from this paper I found might help show what I mean):
When I use this workflow with the inverse transform, I get the following result (blue = exhalation, green = 50%, tan = inhalation) and the deformed segmentation is less chopped up (screenshot below). Is this conceptually wrong?
It is wrong because the output can have many gaps in it.
To transform an image, you need to iterate through all the voxels of the output image and fill each one with the voxel value sampled at the position given by the resampling transform.
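That is what the resampling filters do internally. For example, with SimpleITK (file names are placeholders; nearest-neighbor interpolation because it is a label image):

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed.nii.gz")            # placeholder paths
moving = sitk.ReadImage("moving_label.nii.gz")
tfm = sitk.ReadTransform("fixed_to_moving.tfm")   # resampling transform

# Resample iterates over the OUTPUT (fixed) grid: each output voxel position
# is mapped through the transform and the input image is interpolated there,
# so no gaps can appear.
out = sitk.Resample(moving, fixed, tfm, sitk.sitkNearestNeighbor, 0)
```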
I understand why that is wrong. Thanks for bearing with me. (I am putting a few links to resources I found most useful in understanding this for anyone who might come across this discussion: link1, link2, link3).
Based on those links and some of the notebooks in the SimpleITK GitHub, I have an implementation that reads in the fixed and moving images, reads in the displacement field (saved from running Elastix in Slicer), multiplies it by my desired interpolation scale (0.0 to 1.0), converts it to a DisplacementFieldTransform, and then sets up a ResampleImageFilter with that transform. I then execute the resampler on my moving image to get the interpolated result.
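For concreteness, this is the shape of my implementation (a sketch with placeholder file names; I use nearest-neighbor interpolation since I am resampling the airway label):

```python
import SimpleITK as sitk

scale = 0.5  # desired interpolation factor in [0, 1]

fixed = sitk.ReadImage("ct_expiration.nii.gz")             # placeholder paths
moving = sitk.ReadImage("airway_inspiration_label.nii.gz")

# Displacement field saved from the Elastix run in Slicer; must be
# vector-of-float64 for DisplacementFieldTransform
dfield = sitk.ReadImage("displacement_field.nii.gz", sitk.sitkVectorFloat64)

# Scale each vector component, then wrap the field as a transform
scaled = sitk.Compose([scale * sitk.VectorIndexSelectionCast(dfield, i)
                       for i in range(3)])
tfm = sitk.DisplacementFieldTransform(scaled)

resampler = sitk.ResampleImageFilter()
resampler.SetReferenceImage(fixed)
resampler.SetTransform(tfm)
resampler.SetInterpolator(sitk.sitkNearestNeighbor)  # label image
out = resampler.Execute(moving)
sitk.WriteImage(out, "airway_interpolated_50pct.nii.gz")
```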
From my current understanding, this new implementation addresses the errors I was making before. Does this sound reasonable? I am using the displacement field from a direct nonrigid registration instead of breaking it down into a rigid component and a warping component as we discussed a couple of posts back. Do I need to break those out and treat them independently?
I am learning a lot working through this. Thanks for all of your time and help.