The landmark registration tutorial that you have linked is good. We have completed it with dozens of students in our bootcamps over the years without any issues. If you have any specific questions or unclear steps then let us know.
You would normally not record a video if you just want to register a model’s coordinate system to the attached tracking marker’s coordinate system. The video is only included in the tutorial so that you can complete a registration workflow without having access to real hardware. If you load the mrb file that comes with the tutorial then you should see the video (you can show the “Sequence browser” toolbar to get play/pause buttons for replaying the sequence).
I just want a simple and easy UI for registration; getting a resulting transform would be OK.
Let users choose three pairs of markup points, one from the EM tracker position (physical) and one from the CG model (Slicer); after this we can compute the transform.
Using the transform above (maybe it is more than one transform, we just need the combined result), mapping the real-time EM position onto the CG model should then work.
Is this right?
Of course in reality there are small things like you first need to calibrate your stylus (so that you collect points at the tip of the stylus and not at the center of the marker), but that has to be done only once. You only need to think a bit when you set up your transform tree (what transform is applied to what node), but again that’s something that you need to understand only once and can be done by a couple of clicks in the GUI (and then you can save it in the scene). Once you have your complete workflow, you can further automate everything using Python scripting, so users only need the minimum number of clicks. This last layer of automation is not done in Slicer core or general-purpose extensions, such as SlicerIGT, because each workflow is slightly different (how many tools you have, how you attach trackers on them, what coordinate system you choose as the renderer’s world coordinate system, if you use imaging, surface scanning, etc.).
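The stylus calibration mentioned above is usually done as a pivot calibration: you pivot the stylus about a fixed point and solve for the tip offset in the sensor frame. As a minimal sketch of the underlying least-squares problem (pure numpy, illustrative names, not the SlicerIGT implementation):

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Estimate the stylus tip offset (in the sensor frame) and the fixed
    pivot point (in the tracker frame) from poses recorded while pivoting
    the stylus about its tip.

    Each pose i satisfies:  R_i @ tip + d_i = pivot
    which stacks into a linear least-squares system in [tip; pivot]:
        [ R_i  -I ] @ [tip; pivot] = -d_i
    """
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, d) in enumerate(zip(rotations, translations)):
        A[3 * i:3 * i + 3, 0:3] = R
        A[3 * i:3 * i + 3, 3:6] = -np.eye(3)
        b[3 * i:3 * i + 3] = -d
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # tip offset (sensor frame), pivot (tracker frame)
```

In practice SlicerIGT’s Pivot Calibration module does this for you from streamed tracker poses; the sketch is only meant to show why one pivoting motion is enough to recover the tip offset once and for all.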
Things are not quite as simple as this. Even in the simplest scenario you have at least these coordinate systems:
Tracker (tracker’s world coordinate system)
Reference (coordinate system of the sensor that is attached to the patient)
Stylus (coordinate system of the sensor that is attached to the stylus)
StylusTip (coordinate system aligned to the tip of the pointer)
Model (coordinate system where points of the “cg” model are specified)
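With these frames in mind, the transform tree is just a chain of 4x4 homogeneous matrices. For example, to express the stylus tip in the Reference frame you walk up to the Tracker frame and back down. A minimal numpy sketch (the function and matrix names are illustrative, not a Slicer API):

```python
import numpy as np

def concat(*transforms):
    """Compose 4x4 homogeneous transforms left to right."""
    out = np.eye(4)
    for T in transforms:
        out = out @ T
    return out

def stylus_tip_to_reference(reference_to_tracker,
                            stylus_to_tracker,
                            stylus_tip_to_stylus):
    # StylusTipToReference = TrackerToReference * StylusToTracker * StylusTipToStylus,
    # where TrackerToReference is the inverse of ReferenceToTracker.
    tracker_to_reference = np.linalg.inv(reference_to_tracker)
    return concat(tracker_to_reference, stylus_to_tracker, stylus_tip_to_stylus)
```

In Slicer you set up exactly this chain by nesting transform nodes in the Data module, so the composition happens automatically; the sketch only shows the arithmetic behind it.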
Your goal is to compute ModelToReference, so you need to compute the transform from “Model” to “Reference”, therefore the “From” point coordinates must be specified in the Model coordinate system (you can get them by simply placing markup points on the model) and the “To” point coordinates must be specified in the Reference coordinate system (you can get them by sampling the StylusTipToReference transform).
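For intuition, the point-pair fit that produces ModelToReference is the classic rigid least-squares (Kabsch/Horn-style) problem. A self-contained numpy sketch, assuming corresponding Nx3 point lists (this is only an illustration, not the Fiducial Registration Wizard source):

```python
import numpy as np

def landmark_registration(from_points, to_points):
    """Rigid (rotation + translation) least-squares fit mapping
    from_points onto to_points (both Nx3 arrays), via SVD."""
    src = np.asarray(from_points, dtype=float)
    dst = np.asarray(to_points, dtype=float)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection guard: force a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T  # 4x4 homogeneous ModelToReference matrix
```

With three or more non-collinear point pairs this has a unique rigid solution, which is why three well-spread landmarks are the practical minimum.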
I would recommend making a drawing of all the coordinate systems and the transforms that you compute between them. The SlicerIGT and PerkLab bootcamp tutorials should help you understand the process.
Here I don’t know how to get the coordinate positions through the GUI. The transform can be seen in the Transforms module, but there is no “record the current transform” button (that is just my imagination).
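In matrix terms, “recording the current transform” just means taking the translation column of the current 4x4 StylusTipToReference matrix, i.e. the origin of the StylusTip frame expressed in Reference coordinates; the Fiducial Registration Wizard’s “Place fiducials using transforms” option does this for you on each click. A trivial numpy sketch of the sampling step (names illustrative):

```python
import numpy as np

def sample_tip_position(stylus_tip_to_reference):
    """Return the stylus tip position in Reference coordinates:
    the translation part of the 4x4 StylusTipToReference matrix."""
    T = np.asarray(stylus_tip_to_reference, dtype=float)
    return T[:3, 3].copy()
```

Collecting one such sample per landmark gives you the “To” point list for the registration.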
So, just one step left…
In the left view, LV simulator is the model; Locator_probetotracker is the “needle” model that can move freely; the LinearTransform was added manually, intended to receive the registration transform.
After using the Fiducial Registration Wizard module, “From” is the markups placed on the LV simulator; “To” was sampled from the transform as described in [quote=“lassoan, post:11, topic:24943”]“Place fiducials using transforms” section[/quote]
The result is as below: