SlicerMorph GPA will not exclude landmarks

Hi all,

I have run an auto3DGM analysis and used Markups to identify landmarks in areas I am not interested in, thinking I could easily exclude those landmarks from downstream analysis in SlicerMorph’s GPA module. I created a comma-delimited (not space-delimited) list of those landmarks. The Python console indicates that the landmarks were excluded, and the GPA analysis.log also indicates they were excluded, but the results in the GPA module still contain the supposedly excluded landmarks. What am I doing wrong here?

For the record, I’m using the latest (non-stable) build from GitHub, as required by auto3DGM according to its installation instructions.

Thanks for any help you can provide.

How do you know that the excluded landmarks are actually included in the GPA? How many landmarks did your auto3DGM analysis have? GPA reports 371 included landmarks for the analysis. Usually people specify round numbers like 400-500 points in auto3Dgm.

Hi, there were 512 original points, and all the output files have 371 landmarks. The ‘mean shape’ point cloud displays landmarks that were removed, like the highlighted landmark, 135.

If you started with 512 and the GPA reports 371 LMs, then 141 LMs were excluded in the analysis.

I am not seeing any landmark numbers higher than 371 in the screenshot, which tells me that after the exclusion the landmarks were renumbered consecutively. Thus landmark 135 in the GPA output is not the same landmark as the excluded landmark 135.
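To illustrate the renumbering described above, here is a minimal sketch (not SlicerMorph's actual code) of how consecutive relabeling shifts indices after exclusion, assuming 1-based landmark indices:

```python
# Hypothetical sketch: after exclusion, remaining landmarks are renumbered
# consecutively, so an original index and an output index can collide.

def remap_after_exclusion(n_landmarks, excluded):
    """Map each retained original index (1-based) to its new consecutive index."""
    excluded = set(excluded)
    mapping = {}
    new_idx = 1
    for orig in range(1, n_landmarks + 1):
        if orig not in excluded:
            mapping[orig] = new_idx
            new_idx += 1
    return mapping

mapping = remap_after_exclusion(512, excluded=[10, 20, 135])
# Original landmark 136 becomes output landmark 133, and some later landmark
# now carries the label 135, even though original 135 was excluded.
```

So the "135" you see in the GPA output is simply whichever retained landmark ended up 135th in the new ordering.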

Do you want to retain the original LM indices?

Oh, interesting, you’re right there are no landmarks labelled higher than 371! The thing that really stumps me, though, is that all the landmarks that I removed are on the surface you see (inside the braincase), so there should be NO visible landmarks in this view. I have confirmed multiple times that there are landmarks where I have specifically removed them.

What could be making it redistribute landmarks? Is it somehow related to the auto3DGM landmarking method?

Hard to tell without looking at the data. Can you provide 3-4 landmark files, a model, and the list of landmarks you want excluded?

One issue I see is that the landmark index in the exclusion list should correspond to the index in the control points table. This index starts at 1, so 0 is invalid. I will add error catching to alert the user of this. If you can provide some sample landmark files, we can check whether something else is going on too.
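As a rough illustration of the error catching mentioned above, a validation step might look something like this (a hypothetical sketch, not the GPA module's actual code):

```python
# Hypothetical sketch: validate a comma-delimited exclusion list against the
# 1-based indexing used by the control points table.

def parse_exclusion_list(text, n_landmarks):
    """Parse e.g. '1,5,12' and reject out-of-range or 0-based indices."""
    indices = [int(tok) for tok in text.split(",") if tok.strip()]
    for idx in indices:
        if not 1 <= idx <= n_landmarks:
            raise ValueError(
                f"Index {idx} is out of range: indices are 1-based, "
                f"so valid values are 1..{n_landmarks}"
            )
    return indices

parse_exclusion_list("1,5,12", 512)   # accepted
# parse_exclusion_list("0,5", 512)    # would raise ValueError: 0 is invalid
```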

Interesting, I thought it would go by the name of the landmark. I will have to fix that on the next run. That may well be the issue! I’m currently waiting for another auto3DGM run to finish, which is taking quite a while. For now, here is a link to a couple of the .fcsv files, a .ply, and a list of landmarks to be removed.

Link to requested files

Thank you for your help!

That’s not possible, because there is no standard landmark naming convention. What if they are not numeric? Everything is based on the order of the landmarks.

I ran the samples provided, and I can confirm that GPA is definitely excluding the landmarks you have specified. But while doing that it also reorders them, so landmark names in the GPA output no longer match the input. If this is critical for you, we might consider making that change. If so, please submit a feature request at Issues · SlicerMorph/SlicerMorph · GitHub

Meanwhile, the scale of your dataset seems totally off. The coordinates are tiny. You may run into precision issues with such small coordinates. Make sure you are using the output from the auto3Dgm under OSS (original subject space) folder.
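A quick way to sanity-check the scale is to scan the coordinate columns of a landmark file. This is a minimal sketch assuming the Slicer .fcsv layout, where `#`-prefixed lines are header comments and columns 2-4 hold x, y, z (the file path is just a placeholder):

```python
# Hypothetical sketch: report the largest absolute x/y/z value in a .fcsv
# file, to flag suspiciously tiny (pre-scaled) coordinates.
import csv

def max_abs_coordinate(fcsv_path):
    max_abs = 0.0
    with open(fcsv_path) as f:
        # Skip '#' header lines; parse the rest as CSV rows.
        for row in csv.reader(line for line in f if not line.startswith("#")):
            x, y, z = map(float, row[1:4])
            max_abs = max(max_abs, abs(x), abs(y), abs(z))
    return max_abs

# If max_abs_coordinate("sample.fcsv") comes back tiny (say, well below 1),
# the landmarks are likely from an aligned/scaled folder rather than the
# OSS (original subject space) output.
```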

Okay, I was wondering about how tiny those were! I had always used the aligned landmarks and meshes in the R implementation, but I will switch to the OSS landmark output folder.

Thank you so much for helping me solve these issues! I was beyond frustrated last week and these are very simple solutions (to my ignorance).



If you are going to use aligned meshes and landmarks, you really shouldn’t do a GPA. Those are considered already superimposed (AFAIK).
