I am using SPHARM-PDM to generate shapes of subcortical structures. Using the output files “SubjectName_pp_surf_SPHARM_procalign.vtk”, I produced a mean vtk file with the “Shape Variation Analyzer” module.
My next step is to find the signed shape difference of each of my subjects from the mean shape. Once I have the distances, I would like to get a csv/txt file that holds the signed distance values for my 1002 vertices. With that csv/txt file, I will carry out my statistical analysis separately (I have this statistical analysis tool). Could you point me in the right direction, please?
“Old” way: use a helper tool called MeshMath (I am sure there are other options as well). It is distributed with SlicerSALT (in the bin folder), but you need to call it from a terminal.
1. Compute the difference as a vector field: MeshMath meanInput.vtk outputdiffvector.kwm.txt -subtract subjectInput.vtk
2a. Compute the signed magnitude of the difference vectors: MeshMath meanInput.vtk outputSignedMagnitude.kwm.txt -magdir outputdiffvector.kwm.txt
2b. Alternatively, map the difference vectors onto the normals of the mean surface (this removes the component that runs along the surface, keeping only the part orthogonal to it): MeshMath meanInput.vtk outputNormSignedMagnitude.kwm.txt -magNormDir outputdiffvector.kwm.txt
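For many subjects, the two steps above can be driven by a small script. A minimal sketch in Python: the MeshMath location, the filename patterns, and the subject IDs are all assumptions here, so adapt them to your own data layout.

```python
"""Batch the MeshMath subtract + magNormDir steps over many subjects.

Sketch only: the MeshMath binary location, filename patterns, and
subject IDs below are assumptions -- adapt them to your own layout.
"""
import shutil
import subprocess

MESHMATH = "MeshMath"      # assumed to be on PATH (SlicerSALT bin folder)
MEAN = "meanInput.vtk"     # the mean surface from Shape Variation Analyzer

def commands_for(subject_id):
    """Return the MeshMath invocations (steps 1 and 2b) for one subject."""
    subj = f"{subject_id}_pp_surf_SPHARM_procalign.vtk"
    diff = f"{subject_id}_outputdiffvector.kwm.txt"
    norm = f"{subject_id}_outputNormSignedMagnitude.kwm.txt"
    return [
        [MESHMATH, MEAN, diff, "-subtract", subj],    # step 1
        [MESHMATH, MEAN, norm, "-magNormDir", diff],  # step 2b
    ]

# Only run the commands when MeshMath is actually available.
if __name__ == "__main__" and shutil.which(MESHMATH):
    for sid in ["subj001", "subj002"]:  # replace with your subject IDs
        for cmd in commands_for(sid):
            subprocess.run(cmd, check=True)
```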
New way: use the Model to Model Distance SlicerSALT module: source model => mean surface, target model => subject surface, distance => corresponding point to point, save target name in distance field => checked. The vtk output is the source model with the difference added as a scalar field.
This will give you a signed magnitude (please double check), but it does not provide an option to map the magnitude onto the surface normal.
I am using the “old method”, as I can automate it with a command-line script for my 400 subjects. I have a couple of follow-up questions.
i) MeshMath gives an error with my SPHARM vtk files (e.g. “subjectID_pp_surf_SPHARM_procalign.vtk”) when I run the command “MeshMath input_mean.vtk subjectID_outputdiffvector.kwm.txt -subtract subjectID_pp_surf_SPHARM_procalign.vtk”.
The error message is:
“Incomplete file record definition
NDims required and not defined.
MetaObject: Read: MET_Read Failed
MetaMesh: M_Read: Error parsing file”
However, it works just fine if I first convert the files from vtk to meta using the “VTK2Meta” tool that comes with SPHARM-PDM, and then run the command on those meta files. Please let me know if this is okay.
ii) When I run what you suggested in 2b (as I am interested in the distance values along the surface normal), it generates two text files: “subjectID_outputNormSignedMagnitude.kwm_centered.txt” and “subjectID_outputNormSignedMagnitude.kwm.txt”. Could you let me know which is which, and which one I should be using in my case, please?
Re 1) I forgot about that. You are correct: the files need to be in meta format (there is no loss of information in going from vtk to meta for these meshes).
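The vtk-to-meta conversion can be batched the same way as the MeshMath calls. A sketch, assuming VTK2Meta is on your PATH and takes an input and an output filename (double-check its usage against your SPHARM-PDM install):

```python
"""Build VTK2Meta conversion commands for all SPHARM vtk surfaces.

Sketch only: the VTK2Meta location and the filename pattern are
assumptions -- adjust to your SPHARM-PDM install and data layout.
"""
import pathlib
import shutil
import subprocess

VTK2META = "VTK2Meta"   # assumed to be on PATH

def conversion_cmds(folder="."):
    """One VTK2Meta invocation per SPHARM vtk surface found in `folder`."""
    cmds = []
    for vtk in sorted(pathlib.Path(folder).glob("*_pp_surf_SPHARM_procalign.vtk")):
        cmds.append([VTK2META, str(vtk), str(vtk.with_suffix(".meta"))])
    return cmds

# Only run the conversions when the binary is actually available.
if __name__ == "__main__" and shutil.which(VTK2META):
    for cmd in conversion_cmds():
        subprocess.run(cmd, check=True)
```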
Re 2) “centered” subtracts the mean across the surface, so rather than the absolute value (of the signed magnitude along the surface normal) it returns values relative to that mean. The centered values are generally not needed (I cannot remember why we added that option), so you should use the outputNormSignedMagnitude.kwm.txt files.
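To get from the per-subject outputNormSignedMagnitude.kwm.txt files to the single csv asked about above, a small parser will do. A sketch, assuming the usual KWMeshVisu attribute layout (a short header of KEY=VALUE lines such as NUMBER_OF_POINTS=1002, followed by one value per vertex); verify this against one of your files first.

```python
"""Collect per-vertex scalars from .kwm.txt files into one CSV.

Sketch only: assumes the KWMeshVisu attribute layout (KEY=VALUE
header lines, then one value per vertex) -- check one file by hand.
"""
import csv

def read_kwm(path):
    """Read one .kwm.txt attribute file -> list of per-vertex floats."""
    values, n = [], None
    for line in open(path):
        line = line.strip()
        if "=" in line:                      # header: NUMBER_OF_POINTS=..., etc.
            if line.startswith("NUMBER_OF_POINTS"):
                n = int(line.split("=")[1])
        elif line:
            values.append(float(line))
    if n is not None and len(values) != n:
        raise ValueError(f"{path}: expected {n} values, got {len(values)}")
    return values

def kwm_to_csv(kwm_paths, out_csv):
    """Write one column per subject file, one row per vertex."""
    cols = {p: read_kwm(p) for p in kwm_paths}
    with open(out_csv, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["vertex"] + list(cols))
        for i, row in enumerate(zip(*cols.values())):
            w.writerow([i] + list(row))
```

The resulting table (1002 rows, one column per subject) can then go straight into your own statistical analysis tool.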
Thank you, Martin, for taking the time to answer all the questions. I wanted to ask two more questions.
For my subjects, I am applying Gaussian filtering of 0.75 along all 3 axes in the advanced post-processed segmentation tab of SlicerSALT, and at the same time I am supplying a Reg Temp and a Flip Temp as input (i.e. I run SPHARM-PDM on my template at the very beginning). Do I need to apply the same Gaussian filtering of 0.75 along all 3 axes to the template when running SPHARM-PDM on it, and use that filtered version as my Reg Temp and Flip Temp (instead of the unfiltered version)?
Does Gaussian filtering of 0.75 along all 3 axes sound reasonable, i.e. unlikely to cause errors in my distance calculation and subsequent statistical analysis?
If possible, do not use the Gaussian smoothing, as it can remove parts of the structure. On the other hand, if you have medium-sized holes in your data, Gaussian smoothing can fill these in. We usually process our shape data without Gaussian smoothing, unless the segmentations consistently have holes or handles that need filling via an automated approach such as Gaussian smoothing.
Thank you! I would also like to ask about something else. Is there a way to find the node-to-node distance (within one subject)? Basically, I would like to know the distance from one node to its neighbors, so that I can make a better choice about the variance of the Gaussian smoothing I am going to use to accommodate all my subjects in a generalized fashion.
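For reference, the typical spacing between connected nodes can be read straight off the mesh. A minimal sketch, assuming the SPHARM output is a legacy-ASCII VTK polydata file (for other encodings, the readers in the `vtk` Python package would be the safer route):

```python
"""Estimate the average distance between connected mesh nodes.

Sketch only: a minimal legacy-ASCII VTK reader (POINTS + POLYGONS
sections); assumes the SPHARM output uses that format.
"""
import math

def mean_edge_length(vtk_path):
    tokens = open(vtk_path).read().split()
    # Read the POINTS section: vertex coordinates, 3 floats per point.
    i = tokens.index("POINTS")
    npts = int(tokens[i + 1])
    coords = [float(t) for t in tokens[i + 3 : i + 3 + 3 * npts]]
    pts = [coords[3 * k : 3 * k + 3] for k in range(npts)]
    # Read the POLYGONS section and collect unique edges.
    j = tokens.index("POLYGONS")
    ncells = int(tokens[j + 1])
    edges = set()
    k = j + 3
    for _ in range(ncells):
        n = int(tokens[k])
        ids = [int(t) for t in tokens[k + 1 : k + 1 + n]]
        k += n + 1
        for a, b in zip(ids, ids[1:] + ids[:1]):   # wrap around the polygon
            edges.add((min(a, b), max(a, b)))
    lengths = [math.dist(pts[a], pts[b]) for a, b in edges]
    return sum(lengths) / len(lengths)
```

Running this on one representative subject gives a mesh-scale estimate you can compare against your smoothing variance.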
I apologize for going over the same issue again. I have actually obtained meaningful results for my subjects without using any smoothing, but because I want to include as many of my subjects as possible, I need to use this smoothing.
Thank you for your support throughout the process! Much appreciated!