ALPACA module alignment help - shapes too different?


I’m working with some sclerites that lack obvious landmarks suitable for conventional landmarking, so I want to try the ALPACA module. I have surface meshes (.ply), with segmentations performed in a different program. I’ve used PseudoLMGenerator on my template specimen, with the ellipse geometry and the midline/sagittal plane. I’m not too concerned with the number of landmarks for now - I just want it working before making those decisions. I will note that, as the structure is bilaterally symmetrical, I am following a protocol from other sclerites where I perform a GPA on all points and then use only half for downstream analyses to avoid midline artifacts. In the image, we are looking at the right side of the structure.

I exported the SymmetricPseudoLandmarks file as a fiducial CSV to be used in ALPACA. For a single alignment, I chose a target mesh that is not too different from the template (though some targets differ considerably).


I see from the Porto et al. preprint that the two meshes do not have to be oriented in the same way. The tutorial notes that ‘Select Subsampling Voxel Size’ is important, but I don’t have that option in my interface.

Anyway, I perform the subsampling and alignment, and the result is below. The two structures do not seem aligned to me. I also selected another target, with similar results (third image).


Am I missing some step, or need to change some options? Here are my questions to try to make this work:

1 - Do I need to manually reorient each sclerite so that anterior is anterior, dorsal is dorsal, etc. for this to work? And if so, how do I do that?

2 - If that is not the case, what options do you recommend toggling in the ALPACA advanced parameter settings?

3 - Something else I’m not thinking of?

Thank you!

@bobkallal In general, if geometries are too different, most automated alignment methods will not perform well. We don’t see the full picture and it’s hard to tell the shapes, but snapshots 1 and 2 look like a reasonable alignment from that angle.

It is correct that starting positions shouldn’t make a difference in ALPACA, but we also didn’t test it with such different shapes. If you are worried that their orientation is affecting the alignment, you can manually transform one of the meshes to a closer orientation, either with the Transforms module or by identifying 3-4 corresponding points on both meshes and using the SlicerIGT Fiducial Registration Wizard (rigid option). After hardening the transform, you can give ALPACA another try with this new set.
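For context, the landmark-based rigid alignment described here (pick 3-4 corresponding points, solve for the best rotation and translation) is what the Kabsch algorithm computes. A minimal numpy sketch with made-up coordinates - the Fiducial Registration Wizard does this for you interactively, so this is just to show what is being solved:

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (rotation R, translation t) mapping
    source points onto target points (Kabsch algorithm)."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# 4 corresponding points "picked" on each mesh (coordinates are made up)
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
tgt = src @ Rz.T + np.array([5., 2., 1.])                    # rotated + shifted

R, t = rigid_align(src, tgt)
aligned = src @ R.T + t
print(np.allclose(aligned, tgt))  # True: exact fit for noise-free points
```

With noisy real landmarks the fit is least-squares rather than exact, but the recovered transform is exactly what you would then harden on the mesh.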

@agporto can you comment on this?

@bobkallal First, just to answer your questions:

  1. There is no need to perform manual alignment for ALPACA. It should handle differences in orientation.

  2. I would increase the point density (using advanced settings). Right now, you are sampling each mesh using around 1,500 points. I would aim for somewhere between 4,000 and 5,000 points.

As @muratmaga mentioned, I think the first alignment doesn’t look too bad. Increasing the point density will probably get you there. The last example, on the other hand, just seems to be a consequence of the two shapes being too different. Visually, I can’t even tell how they should be aligned.
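To illustrate why this parameter matters, here is a toy numpy sketch of voxel-grid subsampling (not ALPACA’s actual code): each occupied voxel contributes one point, so shrinking the voxel size raises the number of points in the subsampled cloud.

```python
import numpy as np

def voxel_subsample(points, voxel_size):
    """Keep one point per occupied voxel; smaller voxels -> denser cloud."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

rng = np.random.default_rng(0)
cloud = rng.random((20000, 3))          # stand-in for a mesh's vertices

coarse = voxel_subsample(cloud, 0.10)   # bigger voxel -> fewer points
fine = voxel_subsample(cloud, 0.05)     # halving the voxel raises the count
print(len(coarse) < len(fine))          # True
```

The flip side, noted later in this thread, is that no voxel size can produce more points than the mesh has vertices to begin with.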

When you have shapes that are too dissimilar, one option would be to use more than one template. Since you are using pseudolandmarks, I am not sure whether that would be possible. But if it is, it might be worth considering.

Thank you both.

@muratmaga I will check out the Transforms module again. I was poking around in this module over the weekend and I think for what I want - please correct me if I’m wrong - I want the ‘create new transform’ function under active transform? Specifically, if I want my surface oriented as in the first image, such that the rightmost cylindrical part is anterior, the top is dorsal/superior, etc., is that the way to go about it?
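For what it’s worth, if you can name the directions that are currently anterior and dorsal in the mesh, the reorientation is just a rotation matrix built from those axes. A numpy sketch under that assumption (the two direction vectors are hypothetical; in the Transforms module you would achieve the same with ‘Create new transform’ and the rotation sliders, or by editing the 4x4 matrix directly):

```python
import numpy as np

def reorient_matrix(anterior, dorsal):
    """Rotation taking the mesh's anterior axis to +x and dorsal to +z."""
    a = np.asarray(anterior, float)
    a = a / np.linalg.norm(a)
    d = np.asarray(dorsal, float)
    d = d - (d @ a) * a                    # force dorsal orthogonal to anterior
    d = d / np.linalg.norm(d)
    lateral = np.cross(d, a)               # completes a right-handed frame
    M = np.eye(4)
    M[:3, :3] = np.stack([a, lateral, d])  # rows become the new x, y, z axes
    return M

# Hypothetical case: anterior currently points along +y, dorsal along +z
M = reorient_matrix([0, 1, 0], [0, 0, 1])
print(np.allclose(M[:3, :3] @ [0., 1., 0.], [1., 0., 0.]))  # True
```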

@agporto It can still ‘figure out’ differences when the shapes are this different and start in different initial alignments? I see the point density adjustment is 1.5 by default and goes up to 3. Am I looking at the wrong thing? I must be, as when I max it out at 3, the two point clouds don’t even overlap, though they do seem to end up in more or less a closer orientation. How do you advise getting to 5,000 points?

Thank you both again!

@bobkallal Something is odd. You increased the point density, but the output box under ‘Run subsampling’ is still showing the same number of points. That should not happen. I suggest restarting Slicer. If the behavior continues, would you be willing to share the meshes with me? I can investigate further to see what might be causing it. But please try restarting Slicer first.

And just to be clear: when advanced parameters are changed, you have to rerun the entire pipeline (including ‘Run subsampling’).

Thank you. When I run at 1 for point density adjustment, I get the image below. Increasing from 1 to 3 only gains 200-300 points. How do you recommend I get to 5,000 points? If you’d like me to share the meshes, or if you have another idea, please let me know!

@bobkallal Very odd. Increasing the point density to 3 should increase the number of points to much more than 5,000. If you are willing to share two example meshes, you can contact me at [edited]. I can investigate further and see if I can work out what might be happening.

FYI, you don’t need to disclose your email address publicly here but people can send you private messages via the forum (by clicking on your name then on the “Message” button).


@bobkallal One thing that occurred to me is that your meshes might only have 1,800 vertices in total, in which case you won’t be able to sample more than that.
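In Slicer’s Python console you can check this with something like getNode('MyMesh').GetPolyData().GetNumberOfPoints() (node name hypothetical). Outside Slicer, the count can also be read straight from the .ply header; a small stand-alone sketch:

```python
import os
import tempfile

def ply_vertex_count(path):
    """Return N from the 'element vertex N' line of a .ply header."""
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":   # header ended without a vertex element
                break
    return None

# Tiny illustrative ASCII header written to a temporary file
header = b"ply\nformat ascii 1.0\nelement vertex 1800\nproperty float x\nend_header\n"
with tempfile.NamedTemporaryFile(suffix=".ply", delete=False) as tmp:
    tmp.write(header)
print(ply_vertex_count(tmp.name))  # 1800
os.unlink(tmp.name)
```

The header format is the same for ASCII and binary .ply files, so this works for both.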

That is good to know. Thanks, Andras!

Thank you. That might also be the case as these are very small structures that have been simplified. Still, I will send you the information and see what you think. I really appreciate any insight you may have!

Hi everyone - thank you all again, and especially Arthur. It seems like the surfaces are pretty different and ALPACA might not be the best tool, so I’m looking at auto3Dgm now. I definitely used it in the summer workshop session. I have a folder with the 5 example .ply surfaces and an output directory designated, but nothing happens when I try to load the data. I know the Mosek license is required, and it should still be in the same place (right?). What am I missing? Thank you for your continued help and patience.

@bobkallal You do need to acknowledge that none of these methods work well if the structures are very different.

As for auto3Dgm, nothing has changed since the summer workshop. Probably the best way to troubleshoot is to see if the summer example works, and if it does, see where the issue arises with your data.

In the summer workshop, we followed the instructions provided by the auto3Dgm team: How to use - Slicer Auto3dgm. There is also a video tutorial.

Thank you, Murat. I acknowledge that different shapes are a challenge indeed.

As you suggested, I was trying the 5 example .ply surfaces following the linked tutorial before trying my own data (screenshot below). Those 5 example surfaces from the summer session are the ones not loading. I’ll review the tutorial video to see if it sheds any light on why they aren’t loading.

Can you open the Python terminal, type
import mosek
and see if it returns an error?
Also, can you post the error log (Ctrl+0) after you have completed these steps? Perhaps there is an error/warning that might be helpful. I don’t think the auto3Dgm team actively monitors the forum, so I will get in touch with them offline.

FYI, I ran auto3Dgm successfully with the latest stable release once I installed the Mosek license. This was on Windows.

If you have trouble matching very different shapes, you might try the SegmentRegistration extension, which computes a warping transform between the distance maps of the two shapes. The module requires segmentations as input, so you need to right-click the model node in the Data module to convert it to a segmentation node. The resulting transform provides a full spatial mapping between the two shapes, so you can use it to transfer landmarks from one model to the other. It is pairwise registration only, but it could be used to register all shapes to a template, or perhaps combined with other groupwise methods.
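A toy illustration of the distance-map idea (a scipy sketch, not the extension’s actual implementation): a binary segmentation is converted to a signed distance map, which varies smoothly everywhere and is therefore much friendlier for computing a warping transform than a hard-edged binary mask.

```python
import numpy as np
from scipy import ndimage

def signed_distance_map(mask):
    """Signed Euclidean distance: negative inside the shape, positive outside."""
    outside = ndimage.distance_transform_edt(~mask)  # distance for outside voxels
    inside = ndimage.distance_transform_edt(mask)    # distance for inside voxels
    return outside - inside

# Toy binary "segmentation": a filled square on a 2D grid
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
sdm = signed_distance_map(mask)
print(sdm[16, 16] < 0, sdm[0, 0] > 0)  # True True
```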

@muratmaga Do you know the groupwise shape registration and analysis tools in the SlicerSALT extension? Could they be useful here?

Hi - I think I found out what it was, and of course it was something wrong on my end. When I tried ALPACA last week, I had to update Slicer from version 4.11.0 (which I used over the summer) to 4.11.2, because 4.11.0 didn’t have the PseudoLMGenerator module. It seems to be working now. Thank you.

Thank you, I will look into the SegmentRegistration extension if auto3Dgm has trouble. I only have .ply files, but if they can be converted then I’ll give it a go.

I wonder, though, given the disparity of the structures, is it even wise to continue to try to do morphometric analyses of them? Would I be inherently inviting issues and criticisms?

This is something you, as the expert, have to convince yourself of (and potentially your reviewers). Keep in mind that most automated analyses are designed in the context of a single species. For example, in clinical imaging, a corpus callosum is a corpus callosum, however deformed it might be due to genetic or environmental factors. As long as algorithms align those structures in reasonable orientations, there is a biological basis for the comparison.

Things get complicated in a multi-species context, particularly when someone uses a structure like the ‘skull’, which is actually derived from many independent bones and growth centers. Continuing this example, if you align a point cloud of pseudolandmarks derived from a mouse skull to a cat skull (e.g., using ALPACA, or with SegmentRegistration), you will get a result. But because there are no inherent constraints on this warping, some points that were originally on the nasal bone in the template may end up on the frontal in the cat skull (because mice have elongated nasals compared to cats). While this is a naive example, it highlights the risks of fully automated shape analyses in a multi-species, evolutionary context. We developed, and continue to use, ALPACA for population-level problems in model organisms or within single species. I haven’t used SlicerSALT for a while, but it used to have the same kind of issue.

At this point, if you as an expert cannot identify corresponding structures in your samples (however few there might be) visually, or from developmental biology or the literature, I wouldn’t expect automated procedures to do a very good job. That is particularly true for shape analysis, as you would have no means of evaluating whether the results make sense.
