How can this type of work be done in 3D Slicer using the PLUS toolkit? Are there any step-by-step procedures?
Step-by-step instructions, sample data sets, etc. are all provided on the SlicerIGT training page. Let us know here if you have trouble following any of the tutorials.
I have gone through all the tutorials. The tutorials explain the workflow using pre-recorded data, but not specifically for a real phantom or model like the one in the video linked above.
With real hardware, you use the OpenIGTLinkIF module to receive transforms and images from the Plus toolkit, instead of replaying them with the Sequences module. Everything else (calibration, visualization, etc.) is done exactly the same way.
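For reference, here is a minimal sketch (runnable in Slicer's Python console) of creating an OpenIGTLink client connection to a PlusServer that is already running. The host and port are assumptions: 18944 is PlusServer's default listening port, and the node name is arbitrary.

```python
# Run in 3D Slicer's Python console while PlusServer is running.
import slicer

# Create an OpenIGTLink connector node in client mode.
connectorNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLIGTLConnectorNode')
connectorNode.SetName('PlusConnector')  # arbitrary name
connectorNode.SetTypeClient('localhost', 18944)  # PlusServer default port
connectorNode.Start()

# Once connected, incoming transforms (e.g. StylusToTracker) and images
# appear as MRML nodes in the scene and update in real time.
```

After this, the rest of the SlicerIGT tutorials apply unchanged: the live transform and image nodes take the place of the replayed sequences.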
The U-03 tutorial gives a short introduction to setting up your hardware connections; the Plus toolkit's user manual gives you all the details.
If you have problems setting up the configuration file for data collection, post your questions on the Plus toolkit website. If you need help with tool calibration, registration, or visualization, post your questions on this forum.
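For illustration, once you have a device-set configuration file for your hardware, PlusServer is started with it from the command line (`PlusServer --config-file=...`). The sketch below launches it from Python; the XML file name is hypothetical, so substitute the configuration you wrote for your own webcam or tracker.

```python
import subprocess

# Launch PlusServer in the background with a device-set configuration file.
# The file name is hypothetical; use the XML config for your own hardware.
server = subprocess.Popen([
    'PlusServer',
    '--config-file=PlusDeviceSet_Server_MyWebcam.xml',  # hypothetical name
])
```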
Thank you for your reply. I already know how to connect to a hardware device such as a webcam or an NDI tracker, but I need to know the next steps. Let's say I am using a LEGO phantom: I have my real LEGO model. How can I do the navigation using my webcam, AR tags, and the LEGO phantom? I would appreciate any insight.
I have gone through the IGSTK tutorial. I was trying to do the same steps, but using the PLUS toolkit instead of IGSTK.
You can do everything with Slicer/SlicerIGT/Plus that you can do with IGSTK. An equivalent of IGSTK's LEGO navigation tutorial is available in the PerkLab bootcamp Plus tutorial, which explains how to set up tracking of 2D barcodes. Pivot calibration and landmark registration are explained in the SlicerIGT tutorials; a sketch of the underlying pivot-calibration math is shown below.
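SlicerIGT's Pivot Calibration module does this step for you in the GUI. Purely for intuition (this is the standard least-squares formulation, not necessarily SlicerIGT's exact implementation), here is a minimal NumPy sketch, assuming you have collected StylusToTracker rotations and translations while pivoting the stylus tip on a fixed point:

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Least-squares pivot calibration.

    rotations    : list of 3x3 rotation matrices R_i (StylusToTracker)
    translations : list of 3-vectors p_i (StylusToTracker)

    Each sample satisfies R_i @ tip + p_i = pivot, where `tip` is the
    stylus-tip offset in the stylus frame and `pivot` is the fixed
    pivot point in the tracker frame. Stacking all samples gives an
    overdetermined linear system, solved in the least-squares sense.
    """
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, p) in enumerate(zip(rotations, translations)):
        A[3 * i:3 * i + 3, 0:3] = R
        A[3 * i:3 * i + 3, 3:6] = -np.eye(3)
        b[3 * i:3 * i + 3] = -np.asarray(p)
    x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    tip, pivot = x[:3], x[3:]
    rmse = np.sqrt(np.mean((A @ x - b) ** 2))  # calibration error estimate
    return tip, pivot, rmse
```

The resulting `tip` vector is the constant StylusTipToStylus translation used during navigation. The subsequent landmark registration between the phantom and its model can be done in SlicerIGT's Fiducial Registration Wizard module, as shown in the SlicerIGT tutorials.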