Webcam-based marker tracking is available in Slicer now

While 3D Slicer - using SlicerIGT/Plus - has long supported real-time position tracking with high-end devices (optical trackers, electromagnetic trackers, surgical navigation systems, etc.), there has been no ultra-low-cost tracking solution available - until now.

With the latest version of the Plus toolkit you can connect any webcam, print your own markers, and start tracking objects - and display them in real time in 3D Slicer.

Many thanks to Mark Asselin at PerkLab for adding this feature!

While the tracking is already accurate enough for certain applications, we are implementing further improvements over the next couple of months.

See more details on how to set it up in the comments of the YouTube video.


Can you please upload the Slicer scene somewhere so that it can be easily tried without having to create each object with CreateModels? It would make it much easier for others to start. Thanks!

Good idea. An example 3D Slicer scene file is available here.

It displays the markers in this marker sheet (PDF file, ready for printing). You may need to zoom the 3D views in or out after you place the markers in the camera's field of view to make the markers show up.
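If zooming manually is inconvenient, you can also re-center the 3D view from the Python console. A minimal sketch, assuming the default layout with at least one 3D view:

```python
# Re-center and rescale the first 3D view so the tracked marker models become visible.
layoutManager = slicer.app.layoutManager()
threeDView = layoutManager.threeDWidget(0).threeDView()
threeDView.resetFocalPoint()
```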

I am having a problem with the device set configuration file in Plus Server Launcher. After choosing the device set “PlusServer: Optical marker tracker using MMF video”, I followed the example presented here http://perk-software.cs.queensu.ca/plus/doc/nightly/user/DeviceOpticalMarkerTracker.html to create the file, but when the server is launched, an error occurs and the connection fails.
I am trying to use the built-in webcam of my laptop. Would it be possible to upload a different example file for this application? I can't figure out where the error is. Thanks!

It seems that you need help with getting Plus set up correctly. To get help with this, please submit an issue on the GitHub page of Plus.

Thanks for your prompt reply!

I’ve updated the links now (Plus repositories moved since the time of the original post).

A post was split to a new topic: How to set up real-time position tracking of tools using 2D barcodes

I'm trying to build my own 3D Slicer scene from scratch and I'm having problems creating the image_image vector volume. I'm following the SlicerIGT tutorials but I can't find a solution. Thanks!

Normally you would just let the OpenIGTLinkIF module create the image and transform nodes for you, but you can certainly create them from your own Python script, too. Here is an example of creating a scalar volume node: https://www.slicer.org/wiki/Documentation/Nightly/ScriptRepository#Create_a_new_volume. You can create a vector volume node very similarly.
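For example, a vector volume node with a pre-allocated RGB buffer could be created like this (a minimal sketch; the node name "Image_Image" and the 640x480 frame size are placeholders - use whatever your Plus configuration actually sends):

```python
import vtk
import slicer

# Create an empty vector volume node; the name is a placeholder and should match
# the image name sent by your Plus server (e.g. Image_Image).
volumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLVectorVolumeNode", "Image_Image")

# Allocate a 640x480 RGB (3-component, unsigned char) image as initial content.
imageData = vtk.vtkImageData()
imageData.SetDimensions(640, 480, 1)
imageData.AllocateScalars(vtk.VTK_UNSIGNED_CHAR, 3)
volumeNode.SetAndObserveImageData(imageData)

# Create default display nodes so the volume can be shown in slice views.
volumeNode.CreateDefaultDisplayNodes()
```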

Thanks Andras, really useful info.

A slightly more complex question: how can I add a second video source to the scene?
I have 2 Plus connections with different ports for each device (18944/18945 and 18954/18955), and both connect successfully. I've added 2 connectors in OpenIGTLinkIF for the second camera and configured the ports properly, but … I have no Image2-image2 vector volume (as I've named it in the Plus XML config file). OpenIGTLinkIF has not created the input volume nor the transforms from the camera and tracker.
Is there any way to create them without code?
Thanks in advance!

This should be no problem: if you receive images with different names, the nodes should be created in the scene automatically. If this is still a problem, please create a new topic and provide more details (the full Plus config file, etc.).
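If you prefer to set up the connectors programmatically instead of through the OpenIGTLinkIF GUI, something like the sketch below could be used. The port numbers are taken from the post above; the helper function and node names are just for illustration:

```python
import slicer

def addIGTLClientConnector(name, port, hostname="localhost"):
    """Create and start an OpenIGTLink client connector node."""
    connectorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode", name)
    connectorNode.SetTypeClient(hostname, port)
    connectorNode.Start()
    return connectorNode

# One connector per Plus server instance; incoming images and transforms with
# distinct names (e.g. Image_Image and Image2_Image2) get their own nodes automatically.
camera1Connector = addIGTLClientConnector("Camera1Connector", 18944)
camera2Connector = addIGTLClientConnector("Camera2Connector", 18954)
```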