Increasing ALPACA Non-Rigid Alignment

Hello,

I am currently testing ALPACA to transfer muscle attachment landmarks from one scapula to another for patient-specific modeling.

The rigid alignment seems to work pretty well, but the deformable alignment is not working very well.

I have tried changing the alpha value, but without success.

The meshes only have about 4,000 vertices, unfortunately, so I was not able to test higher point densities.

Here are some screenshots. I am trying to deform the original mesh with the landmark (orangey red) to the yellow scapula. The red mesh is the rigidly transformed source model.
[screenshot]

In green is the warped source model - especially the medial border is not warping enough to align well with the yellow mesh:

[screenshot]

Thank you very much,
Eva

I also noticed I get this warning/error, but not sure if that has anything to do with it:

[VTK] Generic Warning: In D:\D\S\S-0\Libs\MRML\Core\vtkMRMLSubjectHierarchyNode.cxx, line 3663

[VTK] vtkMRMLSubjectHierarchyNode::GetSubjectHierarchyNode: Invalid scene given

[Qt] void __cdecl qSlicerSubjectHierarchyPluginLogic::onNodeAboutToBeRemoved(class vtkObject *,class vtkObject *) : Failed to access subject hierarchy node

[VTK] GetReferencingNodes: null node or referenced node

[Qt] class QList<class qSlicerFileWriter *> __cdecl qSlicerCoreIOManagerPrivate::writers(const class QString &,const class QMap<class QString,class QVariant> &,class vtkMRMLScene *) const warning: Unable to find node with ID "" in the given scene.

How many points are you left with after the point cloud conversion? (ALPACA should report the number of points.) If it is too few, that might be the problem.
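For context, the point cloud conversion is essentially a downsampling of the mesh vertices. A rough sketch of the idea with a voxel-grid reduction (the function name and voxel size are illustrative, not the module's actual code):

```python
# Sketch of voxel-grid downsampling, the kind of reduction applied when
# converting a mesh to a point cloud. Voxel size and data are illustrative.
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average each group.
    _, inverse = np.unique(voxel_ids, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

rng = np.random.default_rng(0)
mesh_vertices = rng.random((4000, 3)) * 100.0  # stand-in for a 4,000-vertex scapula
cloud = voxel_downsample(mesh_vertices, voxel_size=5.0)
print(len(cloud))  # fewer points than the input mesh
```

If the reported count drops far below the vertex count of the mesh, the registration has very little geometry to work with.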

Try changing both alpha and beta values together. Increasing CPD iterations will probably be useful as well.

Finally, the current ALPACA module will be deprecated soon, and we will switch to an ITK-based implementation (instead of the Open3D one we are currently using). Make sure you start using it; it is already available with SlicerMorph (it is called ALPACA-preview), but it is hidden.

Go to Application Settings -> Developer -> Enable developer mode.
Then open the Module Finder and check Testing.

[screenshot]

Hi Murat,

Thank you so much.
The vertices ended up being 3969 and 4693.

Increasing the iterations seemed to make the biggest difference.

Just to confirm, increasing both alpha and beta will enable more transformation, or did I get this wrong? I read the ALPACA documentation but am still not entirely sure what these values are. It sounded like a lower alpha enables more deformation, but I wasn't sure whether this refers to the deformation between the deformed target and the source, or to how much deformation is allowed in the transformation of the target.

When I compare alpha and beta set at 2 vs 4, 4 is generally more aligned except at the coracoid.

Here is the output with 1000 CPD iterations and alpha and beta set at 4, in the testing mode:

What is odd to me is that the transformed source mesh is smaller than the target although the source itself is larger than the target. But maybe this is due to not using landmarks to align the

If I decrease alpha to 2 and keep beta at 4, then the size matches better but the spine is less aligned.

I also tried changing the CPD tolerance but it made only a slight difference and wasn’t clear if better or worse (some parts better aligned, some worse).

Is there any way I can make it deform even more to match the target better? Or do I need higher-resolution meshes?

Thank you so much,
Eva

In our tests, 4000-6000 pts seemed to work well. So I don’t think lack of points is concerning at this point.

Both alpha and beta control the deformable registration. Alpha controls the fluidity (lower values) vs. rigidity (higher values) of the deformation, while beta (motion coherence) makes sure the resultant vector field is smooth. Higher beta values will ensure a smoother vector field (such that deformation vectors from nearby points will be parallel), whereas lower values will allow more disparate vectors. So try reducing them both slowly and experiment. A couple of suggestions to make your experiments easier to interpret:
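The effect of beta can be illustrated with a toy smoothing of a displacement field. This mirrors CPD's use of a Gaussian kernel whose width is beta, but it is only a sketch, not ALPACA's actual code:

```python
# Illustration (not ALPACA's code) of how the Gaussian kernel width
# ("beta" in CPD) controls motion coherence: larger beta averages each
# point's displacement with more of its neighbors, giving a smoother field.
import numpy as np

def smooth_displacements(points, displacements, beta):
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * beta ** 2))   # Gaussian affinity between points
    G /= G.sum(axis=1, keepdims=True)     # normalize to a weighted average
    return G @ displacements

rng = np.random.default_rng(1)
pts = rng.random((200, 3))
# A coherent rightward motion corrupted by per-point noise.
disp = np.tile([1.0, 0.0, 0.0], (200, 1)) + 0.3 * rng.standard_normal((200, 3))

rough = smooth_displacements(pts, disp, beta=0.05)   # small beta: noisy field
smooth = smooth_displacements(pts, disp, beta=1.0)   # large beta: near-parallel vectors
print(rough.std(axis=0).mean(), smooth.std(axis=0).mean())
```

With the larger beta, the spread of the displacement vectors collapses toward zero, which is the "nearby vectors become parallel" behavior described above.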

  1. Annotate about 4-6 landmarks on both your source and target models in regions you care about. If you input the optional target LMs in interactive mode, ALPACA will calculate an RMSE (root mean square error) between landmarks. As you change parameters, you want to reduce this value relative to the previous iteration.

  2. Try "skip scaling" if your scapulae are of similar sizes. Scaling is done via the bounding box, so if the samples are not in similar orientations, the scaling may not be ideal. You can also try rotating the samples into similar orientations and then keeping scaling on.

  3. If you are on Windows, use the Bayesian CPD, which is very slightly less accurate than the default CPD we included but about 10 times faster; when running parameter sweeps like this, you usually want results quickly. You can find Bayesian CPD here: GitHub - ohirose/bcpd: Bayesian Coherent Point Drift (BCPD/BCPD++/GBCPD/GBCPD++). Just clone (or download) the repository, check the acceleration option in the advanced settings, and point to the location of bcpd.exe in the cloned repository.
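To make suggestion 1 concrete, the RMSE between matched landmark sets is a single number you can minimize across parameter sweeps (the coordinates below are made up for illustration):

```python
# Compare manually placed target landmarks against transferred ones.
# All coordinates here are invented purely to demonstrate the formula.
import numpy as np

manual = np.array([[0.0, 0.0, 0.0],
                   [10.0, 0.0, 0.0],
                   [0.0, 15.0, 0.0],
                   [5.0, 5.0, 8.0]])
transferred = manual + np.array([[0.5, -0.2, 0.1],
                                 [-0.3, 0.4, 0.0],
                                 [0.2, 0.1, -0.6],
                                 [0.0, -0.5, 0.3]])

# Root mean square of the per-landmark Euclidean errors.
rmse = np.sqrt(np.mean(np.sum((manual - transferred) ** 2, axis=1)))
print(rmse)
```

Lower is better; track this value as you vary alpha, beta, and the iteration count.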

Dear Murat,

Thank you so much for your help.
I have tested various parameters and using the landmarks for checking RMSE is very useful.

I do use Windows and the Bayesian CPD was very useful.

Regarding scaling, the scapulae in this case are different sizes but oriented similarly, just translated. I also tested with different orientations, and keeping the scaling on seemed to work OK in all cases, scaling and aligning fairly well in the rigid alignment.
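As a side note, the bounding-box scaling described above is orientation-sensitive, which matches what I saw: rotating a shape changes its axis-aligned bounding box and therefore the estimated scale factor. A toy sketch (illustrative only, not ALPACA's code):

```python
# Demonstrate why axis-aligned bounding-box scaling depends on orientation.
import numpy as np

def bbox_diagonal(points):
    extents = points.max(axis=0) - points.min(axis=0)
    return np.linalg.norm(extents)

rng = np.random.default_rng(2)
source = rng.random((500, 3)) * [40.0, 10.0, 3.0]  # elongated, scapula-like extents
target = source * 1.3                               # same shape, 30% larger

scale = bbox_diagonal(target) / bbox_diagonal(source)
print(scale)  # ~1.3 when orientations match

# Rotate the target 45 degrees about z: the axis-aligned box inflates,
# so the estimated scale factor is no longer ~1.3.
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
rotated = target @ R.T
print(bbox_diagonal(rotated) / bbox_diagonal(source))
```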

After several other tests I could get better alignment by tweaking the rigid registration parameters, and lower alpha and beta sometimes give better alignment, but sometimes higher values are better (depending on the case).

However, regardless of the alpha and beta values, the rigidly transformed source mesh and the warped source mesh look like they are mostly translated relative to each other, and there is very little actual warping/bending going on.

(target yellow transparent, rigidly deformed in red, warped in purple)

Another case with different meshes: the rigidly transformed model fits the target mesh better than the warped mesh does.

(target yellow transparent, rigidly deformed in red, warped in green).

Do you know what might be causing this, and how I can get a better fit?
I am already at alpha and beta 0.1 for case 1 (in case 2, higher alpha and beta values seemed better for some reason).
I am not sure why the warping is not doing much. Are the meshes too different?

If there is no solution, would it be possible to add the LDDMM framework to the ALPACA extension?

A colleague uses the Deformetrica (http://www.deformetrica.org/) implementation of LDDMM with great success but unfortunately it does not support Windows, and ideally I would like to have my entire workflow in Slicer.
I think it should be possible to add to Slicer since Deformetrica is written in Python, but I am not sure how time intensive it would be.

Ideally of course I could get the current Alpaca method to work better for the morphing.

Thank you so much,
Eva

ALPACA works on point clouds, whereas LDDMM methods like Deformetrica work on meshes. I haven't checked them lately, but it tended to be rather slow. So they are not entirely comparable. I am not sure how involved porting Deformetrica to Slicer would be; they seem to have a pip package, but integration still takes time.

If you can share a few samples where things are not working as you expected, we can take a look and see if we can improve things with alpaca.

@agporto @smrolfe

Thank you so much Murat!

Ah, thanks for clarifying that difference.

Should I email you the meshes?

Thanks again,
Eva

Email is not the best way. Upload the files somewhere in the cloud and provide the link here. If you cannot share publicly, you can DM the link (or email it).

Can you tell me how many landmarks you are transferring? In the screenshot I can only see one. The deformed mesh is an approximation intended to help visualize the performance of the warped transform, using a TPS warping based on the source and transferred landmarks. If you are using a very small number of landmarks this will not provide a good approximation and it is best to rely on the RMSE between landmarks.

Hi Eva,
So cool that you are trying ALPACA on your dataset. Since watching your talk at the FunkyMUG, I thought there was so much potential for an integration between your work and Slicer/SlicerMorph. I want to emphasize Sara’s answer, because I think it is an important aspect that it is not immediately clear from the output. Basically, in order to save the user’s time (since deformations of entire meshes can take quite some time), ALPACA takes a visualization shortcut. It uses the landmark points to calculate a tps transform and applies that transform to the entire mesh. This has one important consequence for its usage. If you want the visual aspect of the deformation to be an accurate description, you need to sample landmarks throughout the structure. For example, you can use the pseudolandmark generator to generate an evenly spaced set of semilandmarks to use during the ALPACA transfer. However, the fact that the visualization doesn’t work with few landmarks does not mean that the deformation was not accurate. I expect given the similarities between your structures that the deformation is quite accurate (what RMSE values are you getting?). You are just not seeing it because of the lack of landmarks being used to estimate the transform. If it is really important for you to get a complete deformation of the mesh, it is somewhat straightforward to modify ALPACA to do it. I am happy to chat about how this can be accomplished using the machinery of the module. However, I don’t think this is something we would want to use as an ALPACA default, since most users just want to quickly get the landmark positions (which was what the module was built for).


Hi all,

Thank you so much for clarifying!

So essentially the TPS transform created from the landmarks is for visualization purposes only, but the landmark transfer should be assessed via the RMSE?

I definitely misunderstood this method, I thought the whole mesh deformation was part of the process to transfer the landmarks.

@agporto That’s so nice to hear that you saw my FunkyMUG talk. I switched fields now to work on clinical shoulder biomechanics but still very much interested in optimizing and automating workflows for FE model generation. I just got started with Slicer and am blown away by all of the useful plugins I have found so far.
My current goal is to transfer muscle attachment points from a musculoskeletal model to other scapulae. Since the muscle attachment points are somewhat subjective to place, I plan to first use clearer anatomical landmarks to calculate the RMSE and find the best parameters for SlicerMorph; once I identify these, I will use them to transfer the muscle attachment points.

If it is really important for you to get a complete deformation of the mesh, it is somewhat straightforward to modify ALPACA to do it. I am happy to chat about how this can be accomplished using the machinery of the module.
Thank you very much! Do you think this could improve the accuracy of the landmark transfer? (see below).
It could also be very useful for another, more complex case I will have in the future: in our hospital CT scans, the distal humerus is missing, so ideally I would warp the generic model's humerus mesh to the partial patient-specific humerus (which would require whole-mesh warping).

I have now tested the scapular morphing with more landmarks:

Some of the landmarks transferred quite well (the ones I placed are in green, the ones that are transferred from the source are in pink).

However, some do not transfer well:

[screenshot]

[screenshot]
This was with alpha and beta at .2 and CPD iterations at 1000.

Alpha and beta at 2 with CPD iterations at 1000 were a bit better (RMSE 0.007362 vs. 0.008228) and gave closer matches, but still failed to identify certain landmarks such as the scapular notch. (Note: units are in meters; for reference, scapular blade height is about 15 cm.)
[screenshot]

and also showing a noticeable amount of variation near the scapular spine (an area that is important for muscle attachment points):

So I guess I should play around with increasing the alpha and beta values, and I will also test the different rigid transform options again tomorrow to see if I can get a better fit.

A quick note:
[screenshot]
For this landmark, I think part of it is actually subjectivity in manually finding analogous points (in the source mesh, there are two angular points on the distal scapula, so there could be user disagreement about where to place the point). If I had placed the source landmark here instead, that would probably give a better fit. Since I am sharing the files, I want to note that this landmark may be better left out of future analyses because of the ambiguity in the manual reference landmark placement.

[screenshot]

I also uploaded the files, they are available here:
https://drive.google.com/drive/folders/1btfP33YO2Zd7aWBHRFySmuhhYCNGK0FZ?usp=share_link

I am not sure if I can make it to the office hours tomorrow but I will try to.

Thanks again for your help,
Eva

Yes, ALPACA is for transferring landmarks, and the deformation is calculated using reduced point clouds since that is very fast and offers enough accuracy. We will clarify the documentation to explain that the visualization is only an approximation and limited to the points selected.

We have been talking about having a separate mesh registration framework built on point clouds, but I am not sure what the timing of it would be. If @agporto and you are interested in working together, we can schedule a meeting.

Thank you very much Murat!

We had been talking about having a separate mesh registration framework built on point clouds, but not sure what the timing of it would be. If @agporto and you are interested in working together, we can schedule a meeting.
That would be fantastic. I will hopefully be recruiting some students in a few months, so maybe a student could also work with us on this.
Regarding a meeting, should I email or message you?

Thanks again,
Eva