I have been merging .fcsv files from online sources and my own data. But when I try to analyze the files together in GPA, I get an error like: "Error: load file failed: There are 459 landmarks instead of 431." How does GPA decide which number of landmarks to use, and how can I generate the correct number in the PseudoLMGenerator?
It is not clear to me what you are trying to do. For GPA you do not need to merge files at all. Simply provide all the .fcsv (or .json) files in one folder, or select them via the dialog box. However, for the GPA module to work, all included files need to have exactly the same number of landmarks.
The error you included indicates that your files have different numbers of landmarks (the first had 459, and the second 431).
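If you want to find which file is the odd one out before running GPA, a quick check like the one below can help. This is a minimal plain-Python sketch (not a SlicerMorph API); it assumes your landmark files are Slicer markups .fcsv files, where lines starting with '#' are header lines and every other line is one landmark. The folder path is a placeholder you would adjust.

```python
# Count landmark rows in every .fcsv file in a folder, so you can spot
# files whose landmark count differs from the rest.
import glob
import os

folder = "path/to/your/landmark/folder"  # placeholder: set to your own folder

for path in sorted(glob.glob(os.path.join(folder, "*.fcsv"))):
    with open(path) as f:
        # Lines beginning with '#' are header lines in Slicer markups .fcsv files;
        # every remaining non-empty line is one landmark.
        n_landmarks = sum(1 for line in f if line.strip() and not line.startswith("#"))
    print(f"{os.path.basename(path)}: {n_landmarks} landmarks")
```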
There is no “correct” number of LMs that PseudoLMGenerator generates. It will generate hundreds to thousands of landmarks based on the geometry of the sample you chose as reference and the sparsity you requested. You can read the tutorial here: Tutorials/PseudoLMGenerator at main · SlicerMorph/Tutorials · GitHub
Are you trying to use the PseudoLMGenerator with more than one 3D model? If you do that you will get different sets of LMs, because each model has a different geometry. PseudoLMGenerator should be used only on the reference specimen; to landmark the rest of your models, you can use the ALPACA module, which will transfer those LMs to the other models based on their geometry.
Please explain what you are trying to do a bit more clearly.
Thank you very much for this quick and informative response. Indeed I was inadvertently trying to merge .fcsv files generated from two different models. I see now that this is a mistake and will go back and use only one model.
I am still curious how Slicer decides what the correct number of landmarks is when doing GPA with a mixture of different row counts. Is it the number that is first encountered in the fcsvs? I see now that it is important to save that one reference model for any analysis, so that it can be applied to different targets in different sessions.
Yes, we obtain the number of landmarks from the first file read and assume every other file has an identical number of landmarks (as they should, since this is a requirement of Procrustes analysis). If they don't, the error you encountered is generated.
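Conceptually, the check works like the sketch below. This is illustrative only, not the actual GPA module code; it assumes the same .fcsv layout as above and a placeholder folder path.

```python
# Illustrative sketch of the rule described above: the expected landmark
# count is taken from the first file read, and every subsequent file is
# checked against it.
import glob
import os

def count_landmarks(path):
    """Count non-header rows in a Slicer markups .fcsv file."""
    with open(path) as f:
        return sum(1 for line in f if line.strip() and not line.startswith("#"))

files = sorted(glob.glob(os.path.join("path/to/your/landmark/folder", "*.fcsv")))
expected = count_landmarks(files[0])  # the first file defines the expected count
for path in files[1:]:
    n = count_landmarks(path)
    if n != expected:
        # Mirrors the error message you saw when the counts disagree.
        raise ValueError(f"load file failed: There are {n} landmarks instead of {expected}")
```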