Let's say I want to apply a series of effects in the Segment Editor to multiple datasets. Is there a way to record a 'macro', or build a script by interactively doing the steps in the editor one time and then using that as a script? If not, this would be a good feature to introduce us (aka non-programmer types) to Python scripting in Slicer.
That sounds like a neat idea - I'm not sure exactly how it would work in practice, since each dataset may require different parameters or interactive input at various steps.
Do you know of any other software that provides a feature like this?
set up processing (make sure "Save changes" is enabled for the output node in the Sequence Browser module) and set the processing parameters
apply the processing to the current frame (typically by a mouse click)
go to the next frame (press Ctrl + Shift + right arrow)
repeat the last two steps (mouse click, then Ctrl + Shift + right arrow) until all frames are processed
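The manual loop above can also be automated from Slicer's Python interactor. A minimal sketch, assuming the browser node exposes the usual vtkMRMLSequenceBrowserNode API (GetNumberOfItems, SetSelectedItemNumber); `process_frame` is a hypothetical callback standing in for whatever processing you would otherwise trigger with a mouse click:

```python
def apply_to_all_frames(browser_node, process_frame):
    """Step through every item of a sequence browser and run process_frame().

    browser_node: object providing the vtkMRMLSequenceBrowserNode API
                  (GetNumberOfItems, SetSelectedItemNumber).
    process_frame: callable performing the per-frame processing.
    Returns the number of frames processed.
    """
    number_of_items = browser_node.GetNumberOfItems()
    for item_index in range(number_of_items):
        # Script equivalent of pressing Ctrl + Shift + right arrow
        browser_node.SetSelectedItemNumber(item_index)
        process_frame()
    return number_of_items
```

In Slicer you would pass the actual browser node (e.g. `slicer.util.getNode('SequenceBrowser')`; the node name here is an assumption) and a function that applies your processing to the currently selected frame.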
For up to maybe 10 frames and short processing times, this manual workflow may be acceptable, but it is clearly not ideal.
Add custom modules for processing that is commonly needed for sequences. One example is the Sequence Registration module.
Although this requires only a very small amount of software development and allows a simple, optimized user interface, it would not be feasible to use this approach for every operation available in Slicer.
What could be done?
Add a sequence processor for operations implemented in CLI modules
To automate processing, we could add a Sequence Processing module, in which you would select a CLI module parameter node and a sequence browser node. The module would then use the sequence browser node to iterate through the sequence and process each node using the chosen CLI module. This would probably work well, but it could only be used for CLI modules.
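A sketch of how such a module might drive a CLI per frame. In Slicer the actual invocation would be something like `slicer.cli.runSync(cliModule, None, parameters)`; here `run_cli` is an injected callable so the iteration logic stays self-contained, and the parameter names ("InputVolume", "OutputVolume") are assumptions that depend on the chosen CLI:

```python
def process_sequence_with_cli(browser_node, proxy_node, output_node,
                              run_cli, base_parameters):
    """For each frame: select it, then run the CLI on the frame's proxy node.

    run_cli(parameters): stand-in for slicer.cli.runSync(cliModule, None, parameters)
    base_parameters: dict of fixed CLI parameters; input/output volumes are
                     filled in per frame (the parameter names are assumptions).
    Returns the list of per-frame CLI results.
    """
    results = []
    for item_index in range(browser_node.GetNumberOfItems()):
        browser_node.SetSelectedItemNumber(item_index)
        parameters = dict(base_parameters)
        parameters["InputVolume"] = proxy_node    # proxy holds the current frame
        parameters["OutputVolume"] = output_node  # "Save changes" writes it back
        results.append(run_cli(parameters))
    return results
```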
Record a macro for one frame and apply it to all frames, as recommended by @muratmaga. This method could complement method 1 (which is only applicable to CLI modules). There is already macro recording and replay capability in Slicer (enable QtTesting in the application settings, and the Record macro / Play macro options will appear in the Edit menu). It would not be too difficult to create a module that applies a recorded macro to each frame in a sequence. Since the macro recorder has so far been used mainly by developers, for generating test scripts, it may need some work to make it robust and polished enough for regular users. I have also been thinking about slightly changing the macro recorder so that it can create Python scripts as well, which would be easier to edit for developers or users with minimal programming knowledge.
If anybody would like to explore these options I would be happy to help.
@pieper
Yes, I agree that at some point segmentation usually requires user input or intervention and is unlikely to be fully automated. One of my use cases is parameter sweeps on large volumes (1024x1024x1024 or bigger), which are typically simple but time-consuming tasks (e.g., running connected components with a number of different threshold values).
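A toy sketch of such a parameter sweep in plain Python, using a tiny 4-connected components counter on a 2D grid. A real sweep on 1024^3 volumes would of course use Slicer's (or SimpleITK's) optimized filters instead; the function names here are illustrative only:

```python
from collections import deque

def connected_components(binary):
    """Count 4-connected foreground components in a 2D grid of booleans."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                count += 1  # found a new component; flood-fill it
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

def threshold_sweep(image, thresholds):
    """Run connected components at each threshold; return {threshold: count}."""
    return {t: connected_components([[v >= t for v in row] for row in image])
            for t in thresholds}
```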
But more importantly, making the move from interactive to scripted tasks is a big time investment on the user's end, especially if one is not already familiar with Python. I thought guiding them with some sort of editable template customized to their needs would be a good way to entice them to make the investment.
Maybe we can come up with some simple example use cases and see what a solution would look like. I realize writing code can be a big step for some people, but it can be really efficient and flexible for things that would be really hard to express in a GUI (just as the opposite can sometimes be true).
I’m thinking we could have a much more entry-level python scripting tutorial with some handy recipes for automating common operations. Perhaps using the new Jupyter infrastructure.
This is already done in other software, e.g. ImageJ, and it would be very useful to have in Slicer for automating simple tasks and for answering a lot of future Python scripting questions.
I am not sure what you mean by examples: in Slicer or in ImageJ?
I used ImageJ years ago and I don't work with it anymore, but here is a YouTube video that explains the feature in Fiji (a variant of ImageJ).
A Slicer example: a user performed some operations manually and wanted to have them in a Python script, either to apply the same steps in the future or to put them in a loop to process many images. If Slicer provided macro recording, the user could get the Python code directly from the recorded macro. That would be helpful, especially for beginner Slicer users.
It would be certainly useful if we could generate Python code directly from observing events.
There are things that would be very easy to automate, such as CLI execution (basically what is shown in the ImageJ YouTube video), but most interactive features would be hard to capture at a meaningful level.
Recording MRML node changes would probably capture most of the interactions, but that would require some infrastructure development in MRML to allow observing node property changes. We have already started some work along these lines: node property macros are used in many places for node property read/write/print/copy, but there are no commonly used macros for get/set of node properties yet.
An idea is to provide a logical variable for each interactive user event to print the equivalent script line(s), e.g. in the Python interactor. When the user clicks a "record macro" or "stop recording" button, this variable changes its value.
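A minimal sketch of that idea: a recorder object whose `recording` flag is flipped by the "record macro" / "stop recording" buttons, and which each interactive event handler would call to emit its script equivalent. The logged `slicer.util.loadVolume` line is just an example of what a real handler might record:

```python
class ScriptRecorder:
    """Collects Python script lines for interactive events while recording."""

    def __init__(self):
        self.recording = False  # the logical variable toggled by the buttons
        self.lines = []

    def start(self):
        self.recording = True

    def stop(self):
        self.recording = False

    def log(self, line):
        # Each interactive event handler calls this with its script equivalent;
        # lines are kept only while recording is on.
        if self.recording:
            self.lines.append(line)

    def script(self):
        return "\n".join(self.lines)
```

Usage would look like: start recording, interact with the application (handlers call `log()`), stop recording, then read the generated script with `script()`.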
This probably needs a lot of work, but macro recording would be a good feature to have in the future.
Thanks for the info, I just checked it. Personally, I found the ImageJ way much simpler and easier to use for the purposes I mentioned. Here are some examples (not sure whether they are already possible with the Slicer macro):
Getting Python code, e.g. I loaded a model by drag and drop but could not get code that I could use in a Python script to do the same thing.
Using the same macro with a new input model.
Using the same macro in a loop to process a number of input models.
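For the last two items, the loop itself needs no macro support; only the per-model processing does. A minimal sketch, where `process_model` is a hypothetical stand-in for whatever the recorded macro (or a hand-written function) would do to one model file:

```python
from pathlib import Path

def process_models(input_dir, pattern, process_model):
    """Apply process_model to every file matching pattern; return the paths."""
    paths = sorted(Path(input_dir).glob(pattern))
    for path in paths:
        # In Slicer this might load the model, apply the recorded steps,
        # and save the result, e.g. slicer.util.loadModel(str(path))
        process_model(path)
    return paths
```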
Yeah, the QtTesting recorder isn't ideal, but maybe it's part of the solution.
Also, for reference, Blender has some Python script recording options too. I also remember seeing that Blender includes the Python commands in its tooltips.