How to load dense DVFs (from VoxelMorph) into Slicer

Hello,

I have my whole pipeline implemented in Slicer (sequence registration with Elastix and application to surface models). I have ~20 patients and ~2k volumes per patient (liver displacement due to the respiratory cycle), which yields a total of ~40,000 samples. I tried Elastix registration for the liver, but the inference time is ~7 minutes per volume, which makes it infeasible.

I was looking into VoxelMorph, since its inference time is much faster. I get a tensor of shape (1, 3, vol size), where the 3 channels are the displacement vectors for the x, y, z coordinates. How can I load a torch tensor into Slicer as a TransformNode?

Thanks in advance.

Why not sample some key points in the sequence and make linear transform approximations for those?

Hope it helps

It would be great to use voxelmorph with Slicer. It should just be a matter of assigning the numpy array into a vtkMRMLGridTransformNode. If the grid geometry matches one of the input datasets you can just copy over the IJKToRAS. You can follow this example.

If this works well it would be great to have a SlicerVoxelMorph extension. It should be possible to run VoxelMorph using Slicer’s python environment.
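
For the array layout, here is an untested sketch of getting the VoxelMorph tensor into the shape that slicer.util.arrayFromGridTransform exposes (dvf_tensor is a placeholder name for your network output):

import numpy as np

# reshape the VoxelMorph output (1, 3, D, H, W) into the (D, H, W, 3)
# layout used by slicer.util.arrayFromGridTransform
dvf = dvf_tensor.detach().cpu().numpy()[0]  # drop the batch dim -> (3, D, H, W)
dvf = np.moveaxis(dvf, 0, -1)               # components last -> (D, H, W, 3)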

So, vtkMRMLGridTransformNode is the output that I want to have (the same output type as the one from Elastix)?

# Create the transform node that will hold the displacement field
gridTransform = slicer.vtkMRMLGridTransformNode()
gridTransform.SetName(slicer.mrmlScene.GenerateUniqueName(volumeNode.GetName() + ' acquisition transform'))
slicer.mrmlScene.AddNode(gridTransform)  # the node must be added to the scene
# Attach the low-level VTK grid transform to the node
gridTransform.SetAndObserveTransformToParent(transform)

To get the GridTransformNode, I need a vtkOrientedGridTransform:

# Low-level VTK transform: needs the grid orientation and the displacement image
transform = slicer.vtkOrientedGridTransform()
transform.SetGridDirectionMatrix(directionMatrix)  # vtkMatrix4x4 with the grid directions
transform.SetDisplacementGridData(gridImage)       # vtkImageData with 3 components per voxel

and to get the vtkOrientedGridTransform I need a directionMatrix and a gridImage.
The directionMatrix is obtained from the volume, same as the gridImage.
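
For the directionMatrix, I assume it would be something like this (untested; volumeNode stands for whichever volume supplies the geometry):

import vtk

# direction matrix describing the grid orientation in RAS
directionMatrix = vtk.vtkMatrix4x4()
volumeNode.GetIJKToRASDirectionMatrix(directionMatrix)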

So, let’s say I have a fixed volume and a moving volume. I want to register fixed → moving through VoxelMorph, which yields a displacement field.

Which volume should I get the directionMatrix and gridImage from, the fixed volume or the moving volume?

What about this part of the code?

displacements = slicer.util.arrayFromGridTransform(gridTransform)
for sliceIndex in range(slices):
    for row in range(2):
        for column in range(2):
            displacements[sliceIndex][row][column] = targetCorners[sliceIndex][row][column] - sourceCorners[sliceIndex][row][column]

Thanks.

I’ll try this later today. Sorry if my questions are easy to answer, but I don’t have much experience in this domain.

Yes, the easiest thing to do would be to examine the results of Elastix and replicate them. Some things will depend on the conventions used by the registration code, such as whether the vectors in the array indicate where the moving voxels go (modeling transform) or where the fixed voxels come from (resampling transform). They are effectively inverses of each other, but the inverse is not always well defined. See this post and search for similar threads for more info. For a modeling transform you would get the geometry from the moving volume, and for a resampling transform it would come from the fixed volume.

Effectively you just need to create the transform node and grid transform and then use slicer.util.arrayFromGridTransform(gridTransform) to get a numpy array. Unlike the DICOM example, you should be able to just do a single assignment like gridArray[:] = voxelmorphArray, although there may be some shuffling required if the data layout is not the same (unlikely).

The DICOM example is a case where the transform has different dimensions than the volume, but since yours is a dense vector field you can just copy the geometry from the input volumes. In the end, what matters is that the transformation is well defined at every point in space.
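
Putting it together, an untested sketch, assuming the displacement grid has already been allocated with the right geometry and voxelmorphArray is your (D, H, W, 3) numpy array (both names are placeholders):

import slicer

# bulk-copy the dense displacement field into the grid transform
gridArray = slicer.util.arrayFromGridTransform(gridTransform)
gridArray[:] = voxelmorphArray
# tell Slicer the underlying VTK object changed so views and transforms update
slicer.util.arrayFromGridTransformModified(gridTransform)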

Why not sample the sequences?

  • Compute a grid transform every n samples of the total, let’s say 10 or 5% of them, and compute a linear interpolation of the vectors between each time point (see the sketch below).

Doing something like in the example, maybe.
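
An untested sketch of the interpolation idea (interpolate_dvf and the key-frame arrays are hypothetical):

import numpy as np

def interpolate_dvf(dvf_a, dvf_b, t):
    # linear interpolation between two dense displacement fields of
    # shape (D, H, W, 3); t is the fraction in [0, 1] between the two
    # registered key time points
    return (1.0 - t) * dvf_a + t * dvf_b

# e.g. with key frames every 10 volumes, the field for the 3rd volume
# after key frame k would be approximated as:
# dvf = interpolate_dvf(dvf_key[k], dvf_key[k + 1], t=3 / 10)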

You can also look at ANTs registration library. It is available in Slicer via the SlicerANTs extension.

The 7 minute registration time is too long. You can typically get rigid registration in 10-20 seconds and deformable registration under 1 minute. Most likely you need to tune your optimization parameters to achieve registration in a minute.

However, spending your time on parameter optimization may not be worth the effort, because you could reduce the overall computation time by running the registrations on many computers. You can rent hundreds of virtual machines from a cloud provider to get all your computations done in 1-2 days for a few hundred dollars.

I agree with @pieper, though, that if VoxelMorph looks promising - at least an order of magnitude faster than Elastix and ANTs while providing similar quality results - then it would be nice to make it available in Slicer.

Can you recommend any good 4D intra-patient parameters for MRI liver (abdominal)? If I could get satisfactory results in under 1 min per MRI volume, it would be possible to do it in the cloud as you suggested. Right now it takes around 7 min/volume on a server with a better CPU.

I used the following parameters in the Slicer Elastix plugin: https://elastix.lumc.nl/modelzoo/par0057/

Getting good quality results is the first and most important step. You can now experiment with all parameters that make the registration faster.

Check how long the rigid registration takes. If that’s negligible then you can focus on just the deformable step.

To make the registration faster, you can make every iteration faster by tuning the metric computation (e.g., the number of samples), or run fewer iterations (take larger steps and/or stop earlier, for example by reducing the maximum number of iterations).

You may also find that you get equivalent results by reducing your image resolution by a factor of 2x (or even 4x). That could make the registration faster, too.
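
For example, in the elastix parameter file these are the kinds of settings to experiment with (illustrative values only, not tuned for your data):

(NumberOfResolutions 3)           // fewer pyramid levels
(MaximumNumberOfIterations 250)   // stop each resolution earlier
(NumberOfSpatialSamples 2048)     // fewer samples per metric evaluation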

Hi @mau_igna_06,

I am sorry, but I don’t quite understand what you are suggesting.

So my problem is ~50k volumes across 20 patients, but I have only one segmentation per patient. I need to get the segmentation for all volumes by applying the transforms to the segmented mask.
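
That is, for each volume, something like this (untested; the node names are placeholders):

import slicer

# apply the computed grid transform to the per-patient segmentation
segmentationNode = slicer.util.getNode('LiverSegmentation')  # placeholder name
segmentationNode.SetAndObserveTransformNodeID(gridTransform.GetID())
segmentationNode.HardenTransform()  # bake the deformation into the mask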

> Effectively you just need to create the transform node and grid transform and then use slicer.util.arrayFromGridTransform(gridTransform) to get a numpy array. Unlike the DICOM example, you should be able to just do a single assignment like gridArray[:] = voxelmorphArray, although there may be some shuffling required if the data layout is not the same (unlikely).

@pieper thanks so far! However:

>>> slicer.util.arrayFromGridTransform(gridTransform)
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/Applications/Slicer.app/Contents/bin/Python/slicer/util.py", line 1604, in arrayFromGridTransform
    nshape = tuple(reversed(displacementGrid.GetDimensions()))
AttributeError: 'NoneType' object has no attribute 'GetDimensions'

I think the gridTransform does not have a displacement grid set (if I follow the example above).
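
If I understand correctly, something like this is needed first to allocate the grid (untested; fixedVolume is a placeholder for the volume supplying the geometry, which for a resampling-style field would be the fixed volume, per the advice above):

import vtk
import slicer

# allocate a vector image with the same geometry as the reference volume
gridImage = vtk.vtkImageData()
gridImage.SetDimensions(fixedVolume.GetImageData().GetDimensions())
gridImage.SetOrigin(fixedVolume.GetOrigin())
gridImage.SetSpacing(fixedVolume.GetSpacing())
gridImage.AllocateScalars(vtk.VTK_DOUBLE, 3)  # x, y, z displacement per voxel

directionMatrix = vtk.vtkMatrix4x4()
fixedVolume.GetIJKToRASDirectionMatrix(directionMatrix)

transform = slicer.vtkOrientedGridTransform()
transform.SetGridDirectionMatrix(directionMatrix)
transform.SetDisplacementGridData(gridImage)

gridTransform = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLGridTransformNode')
gridTransform.SetAndObserveTransformToParent(transform)

# this should now return a writable (D, H, W, 3) array instead of failing
displacements = slicer.util.arrayFromGridTransform(gridTransform)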

Just a minor update.

We can update the displacement field with the following approach:
1/ Register the volumes with a simple rigid transform, which takes just a couple of seconds.
2/ Update the displacement field as follows.

>>> import numpy as np
>>> b = slicer.util.arrayFromGridTransform(getNode("Transform_1"))
>>> a = np.load('voxelmorph.npz')['dvf']  # use the actual array key in your .npz
>>> b[:] = a
>>> slicer.util.arrayFromGridTransformModified(getNode("Transform_1"))

It does work for me; however, the registered volume looks terrible. I don’t know why right now, but as soon as I figure it out, I’ll accept your solution on the forum.


Let’s say you have n DICOM volumes (from the same patient) to process. You iterate through them: get a transform from n_0 to n_1; then, applying the transform you got previously to n_1, compute the transform from n_1 to n_2 and apply it to n_2, and so on, n times… The time points in between you approximate with linear transforms, varying the rotation and translation slightly. Something like the sketch below.
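
An untested sketch of the chaining part (registerPair is a hypothetical helper that registers a pair of volumes and returns a transform node):

# chain the pairwise transforms so every volume maps back to the first time point
previousTransform = None
for i in range(1, len(volumeNodes)):
    t = registerPair(volumeNodes[i - 1], volumeNodes[i])  # hypothetical helper
    if previousTransform is not None:
        # nest under the transform accumulated from the earlier pairs
        t.SetAndObserveTransformNodeID(previousTransform.GetID())
    previousTransform = t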

Mauro