Dilation of a specific label, while pushing the other labels?

Dear all

This is my first post, so I would first like to thank you for this nice community; this forum has already been a great resource for me.

I am working with full brain segmentation, with tissue labels (GM, WM, CSF, deep gray matter, ventricles, etc.), and I would like to transform a given label map into a new one containing bigger ventricles (simulating an old subject from a young one). The difficulty is that I would like to preserve the other labels, i.e. push the neighboring labels aside (ideally with a bit of contraction).

If I simply dilate the extracted CSF mask and insert it back into the original label map, this will erase the neighboring labels.
So I was thinking of trying the SimpleITK DisplacementFieldTransform: I could build a displacement field initialized with the normal vectors of the CSF surface and then gently smoothed.
Any help on how to get the normal vectors and how to smooth a displacement field? (Sorry if this is not the right forum to ask.)
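
To make the idea concrete, this is roughly the direction I was imagining with SimpleITK (an untested sketch; the file paths and the falloff/magnitude values are only placeholders, and I am not sure it is the right way to go):

```python
import SimpleITK as sitk

# Untested sketch: build a smooth displacement field that pushes tissue
# away from the CSF/ventricle boundary, then resample the label map.
labels = sitk.ReadImage("labels.nii.gz")      # placeholder path
csf = sitk.ReadImage("csf_mask.nii.gz")       # placeholder: binary CSF/ventricle mask

# Signed distance map (negative inside, positive outside, in mm);
# its gradient points outward, i.e. along the surface normal.
dist = sitk.SignedMaurerDistanceMap(csf, insideIsPositive=False,
                                    squaredDistance=False, useImageSpacing=True)
grad = sitk.Gradient(dist, useImageSpacing=True)

falloff_mm = 5.0   # how quickly the push dies out away from the boundary (placeholder)
push_mm = 3.0      # how far to push at the boundary (placeholder)
weight = sitk.Exp((dist * dist) * (-1.0 / (2.0 * falloff_mm ** 2)))

# Resampling uses the inverse mapping (output voxel -> input location), so to
# *grow* the CSF the vectors must point back toward it: hence the minus sign.
smoother = sitk.SmoothingRecursiveGaussianImageFilter()
smoother.SetSigma(2.0)   # gentle smoothing of each component (placeholder)
components = []
for i in range(3):
    c = sitk.VectorIndexSelectionCast(grad, i) * weight * (-push_mm)
    components.append(smoother.Execute(c))
field = sitk.Cast(sitk.Compose(*components), sitk.sitkVectorFloat64)

transform = sitk.DisplacementFieldTransform(field)
warped = sitk.Resample(labels, labels, transform, sitk.sitkNearestNeighbor, 0)
sitk.WriteImage(warped, "labels_big_ventricles.nii.gz")
```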

Many thanks

Romain

Ideally, the simulation that deformed the MR would export a nonlinear transform that you could apply to the segmentation. Another alternative would be to segment using SynthSeg, which is nicely robust in the face of variations like large ventricles. Otherwise you could try a nonlinear registration tool like BRAINSFit or Elastix to calculate a transform to apply to the segmentation.


The final objective is to deform a given (high-resolution) template to augment the labels, in order to build a synthetic training set and train a deep model, exactly as proposed by SynthSeg (yes, SynthSeg is nicely robust and I am a big fan of the method; here I want to explore this approach but starting from only one template).
I agree a nonlinear deformation would be an alternative, although I would prefer an intrinsic method (i.e. without the need for a reference), and one specific to a given label boundary (the ventricles, for instance).

Just to make the question more precise:

I can use the skimage marching_cubes method to get the normal vectors of the mask surface, but how do I put the list of normal vectors into a 3D grid (say, at the original image dimensions)? And how do I smooth the vector field after that?
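
To show what I mean, this is the kind of thing I am attempting (a rough, untested sketch, assuming `csf_mask` is a binary numpy volume):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

# csf_mask: binary numpy array of shape (D, H, W) -- placeholder input
verts, faces, normals, _ = measure.marching_cubes(csf_mask.astype(float), level=0.5)

# Scatter the per-vertex normals onto a dense vector field on the original grid
# (one 3-vector per voxel, zero away from the surface; if several vertices fall
# in the same voxel, the last one simply wins).
field = np.zeros(csf_mask.shape + (3,), dtype=np.float32)
idx = np.clip(np.round(verts).astype(int), 0, np.array(csf_mask.shape) - 1)
field[idx[:, 0], idx[:, 1], idx[:, 2]] = normals

# Smooth each component; note that smoothing shrinks the magnitudes,
# so the field may need to be rescaled afterwards.
for c in range(3):
    field[..., c] = gaussian_filter(field[..., c], sigma=3.0)
```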

You might find this approach helpful: Dynamic 3D Object Manipulation for surgery simulation - #2 by lassoan. The video shows how you can create a smooth nonlinear warping transformation via paired sets of markups points. You could experiment with

  1. generating points on the ventricle surface and at the distance at which you want no motion to occur
  2. duplicating those points, and then displacing the ventricle surface points by the amount you want the ventricles to dilate and in the direction normal to the ventricle surface, while leaving the other points unmoved

The resulting transformation should (to a first approximation) show no motion at your fixed points or outside them (provided you have enough fixed points in enough directions), your specified displacement at the ventricle surfaces, and some sort of smoothed interpolation of partial displacement further out. This may be good enough for your purposes, or you can try to refine it by adding more points along with the displacements you want to force at those locations.


Thank you @mikebind, this looks very nice, and it fits perfectly with what I was looking for.
The only problem is that I need to do it automatically (to possibly incorporate it into torchio). Are you aware of any Python script that implements this kind of transform?

Unfortunately, I am not aware of an existing python script which would set up and generate the transform you need automatically. I looked briefly at the Fiducial Registration Wizard module code (available here: https://github.com/SlicerIGT/SlicerIGT/blob/master/FiducialRegistrationWizard/Logic/vtkSlicerFiducialRegistrationWizardLogic.cxx ), but it is in C++. C++ modules are callable from python within Slicer, and if you got the inputs sorted out, you could definitely use this module to automatically generate the transform you need, and then could use a different resampling module (also callable using python) to automatically generate the warped version of the image. What is not provided for in these steps is the generation of the sets of input points that will be used to generate the transform. You would likely need to write that code yourself (in python would be fine) as your use case seems pretty specialized. If you want to go that route, the approach I might try first is to

  1. set up a regular mesh of point coordinates which cover your image volume (a subset of these will be your non-moving points)
  2. find a set of points on your ventricle surfaces, and gather the surface normal vector for each of your points
  3. from your regular mesh of points found in step 1, remove any points which are within some defined distance from the ventricles
  4. make a copy of the set of remaining points in step 3, and make a copy of the set of points from step 2.
  5. combine one copy of the remaining mesh points with one copy of the ventricle surface points; this will be your unwarped (fixed) fiducial point set
  6. For the other copy of the ventricle surface points, displace each point in the direction of the local surface normal some defined amount (however much you want to grow your ventricles) to form a warped ventricle set of points
  7. combine the other copy of the remaining mesh points with this new warped set of ventricle surface points; this is your warped (moving) fiducial point set

The fixed and moving fiducial point sets are the key inputs you’ll need to give to the Fiducial Registration Wizard, which would then be able to calculate the warping transform you are looking for. I don’t see any reason you couldn’t get this procedure to be fully automated, but it will likely take some work setting it up and then experimenting with what is an acceptable density of points for your mesh, what is a good distance gap around the ventricles to allow deformation to occur over (the “defined distance” in step 3), what is a good density and selection method for the number and location of ventricle surface points to use, etc.
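
If it helps, a rough (untested) Python sketch of those steps might look like the following; the file name, ventricle label value, grid step, gap, and growth amount are all placeholders you would need to tune, and the coordinates are converted to mm with the image affine:

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import distance_transform_edt
from skimage import measure

img = nib.load("labels.nii.gz")                     # placeholder path
labels = img.get_fdata()
ventricles = labels == 4                            # placeholder ventricle label value

# 1. regular mesh of candidate fixed points (voxel coordinates)
step = 10                                           # grid spacing in voxels (placeholder)
gi, gj, gk = np.mgrid[0:labels.shape[0]:step,
                      0:labels.shape[1]:step,
                      0:labels.shape[2]:step]
grid_vox = np.column_stack([gi.ravel(), gj.ravel(), gk.ravel()])

# 2. ventricle surface points and their outward normals
verts, faces, normals, _ = measure.marching_cubes(ventricles.astype(float), level=0.5)

# 3. drop grid points closer than `gap` voxels to the ventricles
dist_to_vent = distance_transform_edt(~ventricles)
gap = 15                                            # placeholder
keep = dist_to_vent[grid_vox[:, 0], grid_vox[:, 1], grid_vox[:, 2]] > gap
grid_vox = grid_vox[keep]

# convert voxel coordinates to mm with the image affine
grid_mm = nib.affines.apply_affine(img.affine, grid_vox)
surf_mm = nib.affines.apply_affine(img.affine, verts)

# rotate the normal directions into mm space (exact only for rigid/isotropic
# affines, which is usually good enough here) and renormalize
normals_mm = normals @ img.affine[:3, :3].T
normals_mm /= np.linalg.norm(normals_mm, axis=1, keepdims=True)

# 4-7. fixed set = mesh + surface; moving set = mesh + displaced surface
growth_mm = 3.0                                     # how much to dilate (placeholder)
fixed_points = np.vstack([grid_mm, surf_mm])
moving_points = np.vstack([grid_mm, surf_mm + growth_mm * normals_mm])
```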

Hi,
Thanks for the detailed description. I agree this will need some development and tuning, but I think it is worth a try, so I am motivated.
Your description of the steps is very clear, and I should be able to go with it. The only question I have is about the point format. I guess it should be a Python list of (3D) coordinates, but what is the unit? Should I express those coordinates in pixel space (the integer grid of the original data) or in mm space (i.e. taking into account the affine of the 3D data)?

Finally, the part that is most unclear to me is how to finish the job once the two sets of points have been constructed: (1) build the nonlinear transform, and (2) apply it to the image. I am not familiar with C++ modules; how do I call them from Python? If someone can point me to the corresponding Python code that Slicer uses to implement this, I should be able to copy and paste the code I need, if it is not too big.
I would probably also need other examples of Python scripts using 3D Slicer tools, so that I understand which imports (libraries) I need to access those C++ bindings (maybe not, if the Slicer Python code is enough to build a similar function standalone…).

Many thanks


The points should be in mm space.
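
For example, if your points start out as voxel indices, one way (among others) to convert them to mm is with the image affine via nibabel:

```python
import nibabel as nib

img = nib.load("labels.nii.gz")     # placeholder path
# points_vox: (N, 3) array of voxel indices -> physical coordinates in mm
points_mm = nib.affines.apply_affine(img.affine, points_vox)
```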

I looked into the Fiducial Registration Wizard code a bit more, and I think the key function being used is provided by vtkThinPlateSplineTransform (VTK: vtkThinPlateSplineTransform Class Reference). I do pretty much all of my work within the Slicer environment, so I am not very familiar with this, but I believe you can use VTK from python outside of Slicer without any problems (Getting Started - VTK documentation). It looks like you would create a vtkThinPlateSplineTransform, set the source and target landmarks, set the basis to R (because you’re in 3D), and then Update() to calculate the transform. I’m sure there are other VTK functions which would allow you to apply that transform to a base image to generate a warped image. This might be the best approach for you if you are looking to set up a pipeline which can run automatically outside of Slicer. However, if it turns out you want to use Slicer functionality, you might also like to be aware that it is possible to access Slicer code without running the full GUI in a few different ways, and looking at this discussion might help orient and get you started: Using Slicer and Slicer Modules from Command-Line
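
A minimal sketch of that setup (assuming `fixed_points_mm` and `moving_points_mm` are matching lists of 3D coordinates in mm, built as described earlier) might look like:

```python
import vtk

# fixed_points_mm / moving_points_mm: matching (x, y, z) coordinates in mm (assumed)
source_pts = vtk.vtkPoints()
target_pts = vtk.vtkPoints()
for src, tgt in zip(fixed_points_mm, moving_points_mm):
    source_pts.InsertNextPoint(*map(float, src))
    target_pts.InsertNextPoint(*map(float, tgt))

tps = vtk.vtkThinPlateSplineTransform()
tps.SetSourceLandmarks(source_pts)
tps.SetTargetLandmarks(target_pts)
tps.SetBasisToR()      # R basis, appropriate for 3D landmarks
tps.Update()

# Sanity check: a fixed point should map (almost) onto itself.
print(tps.TransformPoint(tuple(map(float, fixed_points_mm[0]))))

# To warp an image, a resampler such as vtkImageReslice needs the transform
# that maps output coordinates back to input coordinates, i.e. the inverse
# of the warp you want to apply:
#   reslice.SetResliceTransform(tps.GetInverse())
```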