Averaging models (again and again)

Dear all,
There have been several posts about averaging models and some answers, including one marked as a solution. But I am still unable to actually do this myself, and forgive me, but I don’t feel it was made very explicit how to do the averaging.
My guess is that people have been successful at averaging several models, or creating an average model, using SlicerMorph and markups. Unfortunately, I don’t understand how. Could there be a demonstration or clear instructions on how to achieve this?
Many thanks

If you run a GPA on your dataset, to get an average model all you have to do is go to the Visualization tab, select the 3D model visualization, specify which reference model and its corresponding set of landmarks to use, and hit Apply.

Then the blue model that appears in the right viewer is your “average model”, based on the landmarks and samples you included in the analysis.

If you want to export that, go to the Data module, right-click on the node that says PCA Warped Volume (blue model), and then choose Export to file. There are a bunch of options there; you only need the PLY model.

However, if you want to preserve the scale of the original data, before proceeding with GPA, you need to enable the Boas coordinates option. Otherwise everything will be in unit dimensions.
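For intuition, here is a minimal NumPy sketch of what a GPA computes (illustrative only, not SlicerMorph’s implementation): each landmark configuration is centered, scaled to unit centroid size, and rotated onto the current estimate of the mean shape, which is then re-estimated. With the Boas coordinates option, the scaling step would be skipped, preserving the original scale.

```python
import numpy as np

def gpa_mean_shape(shapes, n_iter=10):
    """Procrustes mean of landmark configurations.

    shapes: array of shape (n_samples, n_landmarks, dim).
    Each configuration is centered and scaled to unit centroid size,
    then rotated onto the current mean; the mean is re-estimated
    each iteration. (With Boas coordinates, skip the scaling step.)
    """
    shapes = np.asarray(shapes, dtype=float)
    # center each configuration at the origin
    shapes = shapes - shapes.mean(axis=1, keepdims=True)
    # scale each configuration to unit centroid size (Frobenius norm)
    shapes = shapes / np.linalg.norm(shapes, axis=(1, 2), keepdims=True)
    mean = shapes[0]
    for _ in range(n_iter):
        aligned = []
        for s in shapes:
            # orthogonal Procrustes: best rotation of s onto the mean
            u, _, vt = np.linalg.svd(s.T @ mean)
            aligned.append(s @ (u @ vt))
        mean = np.mean(aligned, axis=0)
        mean = mean / np.linalg.norm(mean)  # keep unit centroid size
    return mean
```

The resulting mean shape is what the reference model gets warped to in the Visualization tab.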

Thanks Murat!
This is where I am confused. I’m obviously confusing “reference model” with “averaged model”. I have a series of brains I want to average, so I don’t have a single reference brain. Does this make sense?

PS: a small personal note, if I am allowed: we got some excellent results with nnUnet for automated segmentation :wink:

Thanks again

If you are going to do this with landmarks, you do need a reference model. That’s because the reference model gets warped to the calculated mean shape and becomes the average model. The denser your landmarks are, the closer the warped model will be to the true mean shape.
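For illustration, here is a rough NumPy sketch of the kind of landmark-driven warp involved: a 3-D thin-plate spline (kernel U(r) = r) that takes a set of source landmarks exactly onto target landmarks and interpolates smoothly in between. The function name and details are my own for this sketch, not SlicerMorph’s code.

```python
import numpy as np

def tps_warp(src_pts, dst_pts, query):
    """3-D thin-plate-spline warp taking src landmarks to dst landmarks,
    evaluated at the given query points."""
    src_pts = np.asarray(src_pts, dtype=float)
    dst_pts = np.asarray(dst_pts, dtype=float)
    query = np.asarray(query, dtype=float)
    n = len(src_pts)
    # radial kernel matrix between landmarks: U(r) = r in 3-D
    K = np.linalg.norm(src_pts[:, None] - src_pts[None], axis=-1)
    # affine part: constant + linear terms
    P = np.hstack([np.ones((n, 1)), src_pts])
    # assemble and solve the interpolation system [[K P],[P^T 0]]
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 4, 3))
    b[:n] = dst_pts
    coef = np.linalg.solve(A, b)
    w, a = coef[:n], coef[n:]
    # evaluate the warp at the query points
    dq = np.linalg.norm(query[:, None] - src_pts[None], axis=-1)
    return dq @ w + np.hstack([np.ones((len(query), 1)), query]) @ a
```

With more (denser) landmark pairs, the warp is constrained at more locations, which is why denser landmarks bring the warped reference closer to the mean shape.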

If you do want to generate a new “average brain” from a set of segmented images, there are many pipelines (e.g., ANTs/Scripts/antsMultivariateTemplateConstruction2.sh at master · ANTsX/ANTs · GitHub). We will soon have that in SlicerANTs, but it is probably a couple of months out.

You can read more about template reconstruction with ANTs here: GitHub - ntustison/TemplateBuildingExample

Dear Murat, thank you again for taking the time to explain, and my apologies for needing further clarification:
If I have a series of “control” brains, I could use any of them as the reference and it would be warped according to the calculated mean shape, essentially giving the same result each time I use a different “control” reference?
Great news about the ANTs implementation, looking forward to it!

This is true in principle. However, in practice the reference will introduce bias (that’s a fact of all template-based methods). This bias will be a function of (1) how different the reference is from the true mean, and (2) how sparse the landmarks are.

(1) is usually not too much of a concern in within-population analyses, provided that for (2) you are using at least a couple dozen points.

In the GPA output, the sample closest to the mean (in landmark space) is provided. Our suggestion is to use that as the reference model.
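The “sample closest to the mean” idea can be sketched in a few lines of NumPy (illustrative only, not the GPA module’s code): given the aligned landmark configurations, pick the sample with the smallest Euclidean distance to the mean shape.

```python
import numpy as np

def closest_to_mean(aligned_shapes):
    """Index of the sample whose (already aligned) landmarks are
    closest to the mean shape, by Euclidean distance in landmark
    space; also returns all distances."""
    shapes = np.asarray(aligned_shapes, dtype=float)
    mean = shapes.mean(axis=0)
    dists = np.linalg.norm(shapes - mean, axis=(1, 2))
    return int(np.argmin(dists)), dists
```

Using that sample as the reference minimizes factor (1) above, the difference between the reference and the true mean.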
