How to save and analyze a transform node sequence?

I’m recording the motion of a tracked tool using the Sequences module. I am able to record the transform node as a sequence and play it back. How can I export this sequence as a series of timestamped matrices (e.g. .mha, or even just a plain text file) so that I can open and analyze it externally (with Python or MATLAB, for example)? This is a very basic question, but I can’t seem to find the right documentation anywhere.

Thanks!

You can compute many metrics using the PerkTutor extension, and you can also run any custom analysis in Python, either in the interactive console or by using Slicer as a Jupyter notebook kernel (via the SlicerJupyter extension).

As far as I remember, you can also export a linear transform sequence as an .mha file (@Sunderlandkyl @Ungi do you know where?), and of course you can iterate through the sequence and write it to a file in any format you prefer, as in the sketch below.
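For example, a minimal sketch run from the Slicer Python console, assuming the recorded sequence node is named “Sequence” (the node name and output path are placeholders, so adjust them to your scene):

```python
import csv
import vtk
import slicer

# Placeholder name: replace 'Sequence' with your sequence node's name.
sequenceNode = slicer.util.getNode('Sequence')

with open('/tmp/transforms.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['timestamp'] + ['m%d%d' % (r, c) for r in range(4) for c in range(4)])
    for i in range(sequenceNode.GetNumberOfDataNodes()):
        transformNode = sequenceNode.GetNthDataNode(i)  # vtkMRMLLinearTransformNode
        matrix = vtk.vtkMatrix4x4()
        transformNode.GetMatrixTransformToParent(matrix)
        # For a time-indexed sequence, the index value is the timestamp (as a string).
        row = [sequenceNode.GetNthIndexValue(i)]
        row += [matrix.GetElement(r, c) for r in range(4) for c in range(4)]
        writer.writerow(row)
```

The resulting CSV (one timestamp plus 16 matrix elements per row) opens directly in Python or MATLAB.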

You should just be able to save your transform sequence as an .mha file using the regular save dialog (Ctrl+S).

Under the file format for the sequence node it should read “Linear transform sequence (.seq.mha)”.
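If you export as .seq.mha, the header is plain text: each frame stores its timestamp and its flattened 4×4 matrix as metadata fields, so you can parse it outside Slicer with a few lines of Python. A rough sketch, assuming a transform named ToolToReference (the `Seq_Frame..._...Transform` field name follows your transform’s name, so adjust the pattern accordingly):

```python
import re
import numpy as np

timestamps = {}
matrices = {}
# errors='ignore' lets the text reader skip past any binary payload; we stop at the header's end.
with open('transforms.seq.mha', 'r', errors='ignore') as f:
    for line in f:
        line = line.strip()
        if line.startswith('ElementDataFile'):
            break  # last header field; anything after it is (dummy) pixel data
        m = re.match(r'Seq_Frame(\d+)_Timestamp\s*=\s*(\S+)', line)
        if m:
            timestamps[int(m.group(1))] = float(m.group(2))
            continue
        m = re.match(r'Seq_Frame(\d+)_ToolToReferenceTransform\s*=\s*(.+)', line)
        if m:
            values = [float(v) for v in m.group(2).split()]
            matrices[int(m.group(1))] = np.array(values).reshape(4, 4)

frames = sorted(matrices)
print(len(frames), 'frames; first timestamp:', timestamps.get(frames[0]))
```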


Yes, the Perk Evaluator module of the Perk Tutor extension is designed for exactly these purposes. There are already many metrics included as part of the module, and you can download more metrics directly from the Perk Evaluator module using the “Advanced > Options” tab. See here for a list of available metrics: https://github.com/PerkTutor/PythonMetrics.

But most importantly, it allows you to write your own metrics and do your own analysis in Python. Please see here for details: https://github.com/PerkTutor/PerkTutor/wiki/Tutorials:-Perk-Evaluator and https://github.com/PerkTutor/PerkTutor/wiki/User-Configurable-Metrics. Let us know if you have any questions or run into any difficulties.
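As a tiny illustration of the “do your own analysis in Python” part (plain Python on exported matrices, not the Perk Evaluator metric API): a minimal path-length metric, i.e. the total distance travelled by the tool origin:

```python
import numpy as np

def path_length(matrices):
    # `matrices` is an iterable of 4x4 numpy arrays ordered by time; the tool
    # position is the translation column of each matrix.
    positions = np.array([m[:3, 3] for m in matrices])
    return float(np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1)))
```

With the `matrices` dict from the parsing sketch above, `path_length(matrices[i] for i in sorted(matrices))` gives the total path length in the transform’s units (typically mm).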


Thanks all, Perk Tutor looks like exactly what I need.

The work/publications on the Perk Tutor page look very relevant to my work. I’m curious whether you are aware of anyone who has studied the effectiveness of different visualizations of images and tools (e.g. 2D vs. 3D views, reslicing vs. static) for targeting. I am in the midst of planning a study where I vary the views/angles shown to users and time how long it takes them to line up a tool trajectory with a target (a cylinder model), so it would be good to know if similar work has already been done (I did not find any when searching Google Scholar, etc.).