DynamicModeler Transform Maker

I have already started implementing it.

The idea is a transformComposer that would take as input any number of oneFiducialLists, angles, planes, and linearTransforms. It would concatenate the transforms of all of them,
and output some or all of a point, an angle, a plane, or a linear transform according to the concatenated transformation corresponding to the inputs.
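
For example, the concatenation step could look something like this rough VTK sketch (illustrative only, not the actual implementation; the way each input contributes a transform is an assumption):

```python
import vtk

# Minimal sketch of the concatenation idea (illustrative, not the real tool API).
# Each input would contribute one linear transform and the composer concatenates them.
composed = vtk.vtkTransform()
composed.PostMultiply()  # apply the concatenated transforms in the order they are added

# A plane could contribute the matrix that maps the world XY plane onto it,
# an angle could contribute a rotation about its axis, a fiducial a translation, etc.
planeToWorld = vtk.vtkMatrix4x4()        # e.g. taken from a markups plane node
angleRotation = vtk.vtkTransform()
angleRotation.RotateWXYZ(30.0, 0, 0, 1)  # 30 degrees about an axis defined by an angle markup

composed.Concatenate(planeToWorld)
composed.Concatenate(angleRotation)

# The result could then be written into an output vtkMRMLLinearTransformNode,
# or used to produce the output point / angle / plane.
print(composed.GetMatrix())
```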

This would greatly improve the workflow of scientists who have to measure or create transforms relative to anatomical axes. It would also ease the creation of surgical guides, with the difference, compared to Blender, that the transforms would be highly accessible.

Please let me know your thoughts

Here is the current working implementation:

(don’t pay attention to the branch name)

For example, you can concatenate angles and create a bigger angle, with live feedback:
dynamicModelerTransformMaker

I think it’s quite a powerful tool; you can also compose frames. And the markups interaction is still there if you want to use it.

I tested a deformity correction surgical planning workflow.

Basically, the surgery consists of filling a wedge after a planar cut, or removing a wedge of bone and joining the bone faces.

correctedAndDefect1
correctedAndDefect2

The only problem I found when trying to do all the planning with the Dynamic Modeler tools is the unexpected result that output transforms are taken into account (inside the current tool, the inverse of the output transform is applied to the polydata) when pipelining tools.
I consider this to be unexpected behavior and it’s preventing me from achieving virtual surgical planning inside Slicer.
Core devs, please give your opinion; in my view this behavior should at least be optional, e.g. by adding a boolean flag to the Dynamic Modeler tools.
Here are the lines I’m referring to:

@lassoan, @jcfr, @Sam_Horvath, @pieper, @RafaelPalomar

Ignoring any transforms is generally not desirable and most users would consider it a bug.

However, I see that for your specific workflow it would be convenient if you could transform the output model without impacting the modeling result. The problem is that simply ignoring all output transforms would prevent computing the output model in the desired coordinate system, so it would only be useful for very few special cases.

What you actually need is the ability to specify an additional transform for the modeling tool’s output. This could be achieved in a much simpler, more elegant way, with a more general solution and without complicating any existing tools: by simply adding a new Transform tool. This new tool would take a model and a transform node as inputs and would create a transformed copy of the input model as its output. This new tool could be used for many other things, such as cloning a model, creating patterns, …
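
For illustration, a rough sketch of how such a Transform tool could be driven from Python once implemented (the tool name "Transform" and the "Transform.*" reference roles below are just placeholders, since the tool does not exist yet):

```python
import slicer

# Hypothetical usage sketch of the proposed "Transform" Dynamic Modeler tool.
inputModel = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLModelNode")
transformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode")
outputModel = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "TransformedCopy")

toolNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLDynamicModelerNode")
toolNode.SetToolName("Transform")  # hypothetical tool name
toolNode.SetNodeReferenceID("Transform.InputModel", inputModel.GetID())      # hypothetical roles
toolNode.SetNodeReferenceID("Transform.InputTransform", transformNode.GetID())
toolNode.SetNodeReferenceID("Transform.OutputModel", outputModel.GetID())
toolNode.SetContinuousUpdate(True)  # the transformed copy follows edits to the transform
```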

Yes, maybe, but it’s just a flag. The flag is there because I think users would like to use this tool recursively. Let’s say this tool is called T and each use of T is called T_i, with inputs In_i_1 through In_i_n and one output called Out_i.
The ignore-flag implementation is useful when you want T_(i+1) to receive the output of T_i as one of its inputs.
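
For example, a rough sketch of that chaining pattern with the existing Plane cut tool (the reference roles follow the current "PlaneCut.*" convention as far as I know; the node names are illustrative, and plane control points would still need to be placed):

```python
import slicer

# Sketch of chaining two Dynamic Modeler tools: T_2 consumes Out_1.
inputModel = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLModelNode")
planeA = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsPlaneNode", "PlaneA")
planeB = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsPlaneNode", "PlaneB")
intermediateModel = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "Out_1")
finalModel = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "Out_2")

cut1 = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLDynamicModelerNode", "T_1")
cut1.SetToolName("Plane cut")
cut1.SetNodeReferenceID("PlaneCut.InputModel", inputModel.GetID())
cut1.SetNodeReferenceID("PlaneCut.InputPlane", planeA.GetID())
cut1.SetNodeReferenceID("PlaneCut.OutputPositiveModel", intermediateModel.GetID())

cut2 = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLDynamicModelerNode", "T_2")
cut2.SetToolName("Plane cut")
# T_2 receives as input the output of T_1:
cut2.SetNodeReferenceID("PlaneCut.InputModel", intermediateModel.GetID())
cut2.SetNodeReferenceID("PlaneCut.InputPlane", planeB.GetID())
cut2.SetNodeReferenceID("PlaneCut.OutputPositiveModel", finalModel.GetID())
```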

The argument was pretty, but I found it doesn’t justify the ignore-parentTransforms flag, so we may remove it.

The problem is that simply ignoring all output transforms would prevent computing the output model in the desired coordinate system

Please consider that, in the school of thought I was trained in, an algorithm can be considered a system. Let’s say the PlaneCut tool is a system: it receives some inputs and produces an output. Further along the chain of systems in the same pipeline there is an OutputTransformNode; let’s consider it a system too: it transforms the polydata at its input.
In the current implementation there is a feedback loop that ensures the output polydata stays the same regardless of the OutputTransformNode’s matrix. In other words, the output of PlaneCut is in fact not transformable.
With my proposed change, the PlaneCut tool’s output would be transformable.
Between the two behaviours, and considering that users would like to design things with the Dynamic Modeler, I think we should choose the second.
My argument is that a pipeline without feedback loops is easier to analyse and more desirable than one that has them, so the default behaviour should be to not create the feedback loop.
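
To illustrate the behaviour I mean (this snippet only demonstrates the effect; the node name is carried over from the chaining sketch above):

```python
import slicer

# "Out_2" is the output model of a Dynamic Modeler tool, as in the chaining sketch above.
outputModel = slicer.mrmlScene.GetFirstNodeByName("Out_2")
outputTransform = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode")
outputModel.SetAndObserveTransformNodeID(outputTransform.GetID())

# In the current implementation, when the tool re-runs it applies the inverse of this
# transform to the generated polydata, so the model stays at the same world position
# regardless of the matrix in outputTransform; this is the "feedback loop" I refer to.
# With the proposed change, the output model would simply move with outputTransform.
```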

so it would only be useful for very few special cases.

What are the most common use cases? Why do they need the feedback loop?

I believe Blender is used more than Slicer for surgical planning and for creating surgical guides in MSK surgeries, but with some improvements we could gather all those users, with the benefit that in Slicer surgical planning is easier to validate and surgical guide creation is repeatable, because the transforms applied to the model components are accessible.

What do you think about the TransformMaker? Have you tested it yet?

There are some math bugs I’m still correcting.

Everything happens in the correct physical location, wherever the applied transforms placed the input or output nodes.

Yes, I see that you need an option to transform a node to a different coordinate system after the processing is completed; and I think this would be a useful feature.

This could be implemented in each tool, for each input and output node. But this would mean that we would need to have transformation selection options (e.g., choose between local, world, or a custom transform node) everywhere. This would complicate the implementation and GUI of input and output node selection in all tools.

A similar approach was chosen with CLI modules: all applied transforms are ignored and you need to specify transforms that will be applied to input/output nodes. It did not work out well. Users don’t expect that applied transforms are ignored, and it is not obvious which transform selectors are used for which nodes. You could make things a bit more intuitive with a better GUI. For example, transform selectors could be placed next to the node selectors they apply to. But it would not always be optimal, because sometimes you want to apply the same transform to multiple nodes.

The solution I recommend instead is much simpler to implement. It does not increase complexity in any of the tools, as it is a separate transform tool. The only disadvantage compared to what you propose (i.e., a built-in transformation feature in every tool) is that end users need to add one more tool if they want a transformed input/output node. However, this could be addressed by improving the GUI, making it easier to add/configure tools.

This GUI improvement is in our mid/long-term plans: we plan to have a Model Editor, which will use the Dynamic Modeler as its processing engine for editing models, similarly to how the Segment Editor edits segmentations. We could make model editing immediate (as in the Segment Editor, MeshMixer, and Blender sculpting tools) or parametric (as in parametric CAD modeling tools and Blender modifiers).

There are no feedback loops of any kind; everything simply happens in the world coordinate system. All we do is take applied transforms into account the same way as if they were hardened.
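
In scripting terms, "as if they were hardened" means something like this (an illustration of the concept only, not the actual Dynamic Modeler code):

```python
import vtk
import slicer

def getPolyDataInWorld(modelNode):
    """Illustration only: return a copy of the model's polydata with its parent
    transform applied, i.e. the geometry a tool would see if the transform were hardened."""
    transformToWorld = vtk.vtkGeneralTransform()
    slicer.vtkMRMLTransformNode.GetTransformBetweenNodes(
        modelNode.GetParentTransformNode(), None, transformToWorld)
    transformFilter = vtk.vtkTransformPolyDataFilter()
    transformFilter.SetInputData(modelNode.GetPolyData())
    transformFilter.SetTransform(transformToWorld)
    transformFilter.Update()
    return transformFilter.GetOutput()
```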

I would think orders of magnitude more people use Slicer for surgical planning than Blender, simply because Blender is so extremely complicated. But we don’t have data, so there is no way to tell. I agree that those very few users (maybe a few groups in the whole world) may switch to Slicer if we have better tools.

However, the majority of people use much simpler, single-purpose commercial tools; the small percentage of clinicians who use research software prefer simple tools (such as MeshMixer), and only a tiny fraction of advanced clinical research users may choose Mimics or CAD tools. Slicer could compete with all these tools, but we could probably make the biggest impact by offering single-purpose tools (such as your BoneReconstructionPlanner extension, as an alternative to single-purpose commercial software) and simple editing tools (to compete with MeshMixer and some of the simpler Materialise tools).

Please remind me what it is and where I can find it. The name reminds me of the TransformProcessor module in SlicerIGT. Does it have a similar purpose?

Hi Andras.

I’ll take time to give a full answer in the afternoon, but here is a preview video:

And here is the branch, thanks for testing:

Please don’t be confused by the name of the branch; the idea of the transform maker came up while I was developing the AddGeometries tool, so it’s on this branch.

Best regards,
Mauro

Thank you for the information. It seems that TransformMaker indeed does the same kind of processing (combining transforms in various ways to create new transforms) as TransformProcessor.

There are some advantages of adding new derived-transform computations to the existing TransformProcessor module (it is a small, simple module with a good name and a well-defined scope; new modes can be added with a small amount of code in a few existing classes; it is easier to modify because it is in an extension and not bundled with the Slicer core), but there are also some disadvantages (it is harder to find the module because it is in an extension; it does not use a pluggable infrastructure, so if a new transformation mode is added, the changes are dispersed across a couple of files). But my main worries are:

  1. Having the same kind of features available in two completely different places means more code maintenance, documentation, and user education workload.
  2. Widening the Dynamic Modeler module’s scope from model editing to all kinds of processing would mean that we would effectively introduce a new module type. If we find that existing module types are too complicated or have too much overhead, then we should address that issue instead of just adding yet another type.

Lately I’ve been replicating so much functionality that already exists in Slicer… xD

Talking seriously, I think the ability to output markups according to the final transform would be very useful.
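
For example, something like this minimal sketch (the composed transform here is just a placeholder standing in for the result of the composer described earlier):

```python
import vtk
import slicer

# Illustration: placing a markups point according to a composed/final transform.
composed = vtk.vtkTransform()
composed.RotateZ(30.0)
composed.Translate(10.0, 0.0, 0.0)

outputFiducials = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLMarkupsFiducialNode", "ComposedOutput")
worldPoint = composed.TransformPoint([0.0, 0.0, 0.0])
outputFiducials.AddControlPoint(list(worldPoint))  # point expressed in the composed frame
```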