You can connect Slicer to a C-arm in several ways.
Capture video output
You can connect a framegrabber to your computer and capture images from the C-arm’s video output. Analog output, such as S-Video, will be somewhat noisy, and its resolution may be lower than the native resolution of the fluoroscopy image.
You don’t have direct access to the C-arm’s pose, so you either need to attach an accelerometer or have a special marker object (such as FTRAC) in the field of view. For us, accelerometers (in IMU/MARG sensors) were more accurate and much easier to work with overall; the only disadvantage is that they provide only orientation, not position. You can also use optical trackers, surface scanners, etc. to get the relative pose between the C-arm and the patient. Running OCR on the acquired image may work, too, but it is very specific to the C-arm software and may be quite fragile.
You can connect to various types of framegrabbers, accelerometers, optical trackers, and surface scanners using the Plus toolkit, which can send the data in real time to Slicer.
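For the video-capture route, Plus is configured with an XML device-set file. A minimal sketch, assuming a Media Foundation-compatible framegrabber (Plus device type MmfVideo); the device IDs, frame size, and port are placeholders to adapt to your hardware:

```xml
<PlusConfiguration version="2.1">
  <DataCollection StartupDelaySec="1.0">
    <DeviceSet Name="C-arm video capture" Description="Framegrabber to OpenIGTLink sketch"/>
    <!-- Device type and attributes depend on your framegrabber;
         MmfVideo (Microsoft Media Foundation) is shown as an example. -->
    <Device Id="VideoDevice" Type="MmfVideo" FrameSize="1024 768" VideoFormat="YUY2" CaptureDeviceId="0">
      <DataSources>
        <DataSource Type="Video" Id="Video" PortUsImageOrientation="MF"/>
      </DataSources>
      <OutputChannels>
        <OutputChannel Id="VideoStream" VideoDataSourceId="Video"/>
      </OutputChannels>
    </Device>
  </DataCollection>
  <!-- Stream the captured frames to Slicer (OpenIGTLinkIF module, default port 18944) -->
  <PlusOpenIGTLinkServer ListeningPort="18944" OutputChannelId="VideoStream">
    <DefaultClientInfo>
      <MessageTypes>
        <Message Type="IMAGE"/>
      </MessageTypes>
    </DefaultClientInfo>
  </PlusOpenIGTLinkServer>
</PlusConfiguration>
```

In Slicer, a connector node in the OpenIGTLinkIF module set to client mode on the same port receives the stream.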
Automated DICOM push
You can set up your C-arm to automatically push acquired image sequences or snapshots to a DICOM C-STORE storage server. If you enable the DICOM C-STORE SCP in Slicer and set it as the destination server for auto-push on your C-arm, then Slicer receives all acquired images within a few seconds, and they show up in the DICOM database, from where you can load them into the scene.
The advantage is that you get much higher image quality than via analog video, as well as metadata in the DICOM header, such as the C-arm angles, source-to-image distance (SID), source-to-object distance (SOD), etc.
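As a small illustration of why the SID/SOD metadata is useful: those two distances give you the geometric magnification, and from that the effective pixel spacing at the isocenter plane. A sketch in plain Python (the DICOM attribute names in the comments are standard; the helper names are my own):

```python
def magnification(sid_mm, sod_mm):
    """Geometric magnification of an object at the SOD plane.

    sid_mm: DistanceSourceToDetector (0018,1110)
    sod_mm: DistanceSourceToPatient (0018,1111)
    """
    if sod_mm <= 0 or sid_mm < sod_mm:
        raise ValueError("expected 0 < SOD <= SID")
    return sid_mm / sod_mm

def object_pixel_spacing(detector_spacing_mm, sid_mm, sod_mm):
    """Approximate pixel spacing at the object plane:
    detector pixel spacing divided by the magnification."""
    return detector_spacing_mm / magnification(sid_mm, sod_mm)
```

For example, with SID = 1000 mm and SOD = 750 mm, a 0.2 mm detector pixel corresponds to roughly 0.15 mm at the isocenter.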
There are many reasons why you would want to know the C-arm pose: showing the surgical plan in the same orientation as the fluoro view, maybe even overlaying the plan on the fluoro image, helping users align the C-arm with the planned tool trajectory, doing cone-beam volume reconstruction, etc. If you get the image via analog video output, then you don’t get the angle information.
According to our experiments on a mobile C-arm, you can get better accuracy with a MARG sensor than with the built-in sensors. However, getting accurate C-arm angles is not enough, because you also need to perform camera calibration, distortion correction (if an image intensifier is used rather than a flat-panel detector), deal with bending (sagging) of the C-arm, etc.
Displaying a surgical plan in the same orientation as the current C-arm orientation does not require high accuracy, so that is very easily doable.
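For instance, to orient a 3D view of the plan to match the fluoro view, it is enough to turn the two reported C-arm angles into a rotation matrix. A sketch, assuming the common convention that the primary angle (RAO/LAO) rotates about the patient’s superior-inferior axis and the secondary angle (cranial/caudal) about the left-right axis; verify the signs against your own C-arm:

```python
import math

def rot_about(axis, deg):
    """Right-handed 3x3 rotation matrix about the x, y, or z axis (degrees)."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return {
        "x": [[1, 0, 0], [0, c, -s], [0, s, c]],
        "y": [[c, 0, s], [0, 1, 0], [-s, 0, c]],
        "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]],
    }[axis]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def carm_orientation(primary_deg, secondary_deg):
    """C-arm orientation in a patient-based (e.g. LPS) frame.

    Assumed convention: primary angle about the superior-inferior axis (z),
    secondary angle about the left-right axis (x); both are assumptions
    to check on your system.
    """
    return matmul(rot_about("x", secondary_deg), rot_about("z", primary_deg))
```

The resulting matrix can be used to set the 3D view camera so it looks along the assumed beam axis.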
However, doing something like patient/tool registration from a few views with 1-2 mm accuracy is more challenging, and cone-beam volume reconstruction with submillimeter accuracy on a mobile C-arm would be an extremely difficult task.
If we want to track the C-arm position so that, after moving it away from the surgical scene, the technician can easily bring it back to the same spot, we would have to use an optical tracking system; a simple MARG sensor would be insufficient.
Also, for this purpose, would we have to put a tracker on the patient?
If we don’t want to put a tracker on the patient, then would we have to fix the position of the camera?
Optical trackers are somewhat impractical to use with C-arms, as the detector/image intensifier where you can place the marker is about 60-80 cm away from the isocenter. It is just hard to position the tracking camera so that the tool, patient, and C-arm markers are all in the field of view without occlusion. With a single-camera tracker, such as the NDI Polaris, it was quite a struggle, even in phantom experiments. It is also not generally feasible to track the X-ray generator, so you may still need to do sophisticated modeling of the C-arm to compensate for its bending.
If you want to do optical tracking, then you probably need a multi-camera setup (e.g., 4 OptiTrack Prime cameras) to cover the large field of view and be reasonably robust against occlusions.
In selected applications, attaching a tracker camera or surface scanner to the C-arm may work, too.
What clinical application do you have in mind? Pedicle screw insertion?
This is what I meant by C-arm tracking:
to easily reposition the C-arm, rather than tool tracking.
For tool tracking, a patient reference frame is a must.
For the kind of C-arm tracking shown in the video, is a patient tracker required?
Also, can this be achieved solely with MARG sensors?
This feature has been implemented several times by various research groups and small companies over the last 1-2 decades. It is somewhat useful, but it does not justify all the inconveniences of adding a tracker. It is also interesting to note that although it is trivial to implement this feature for floor/ceiling-mounted C-arms (which already have encoders in the C-arm and the patient table), the feature is not commonly used in those systems either.
If you already track the C-arm for some reason (for example, you track the patient and tools) and you can manage to keep the C-arm in the field of view, then it may make sense to use it to help with positioning. You can also replace the clumsy high-accuracy optical tracker with inside-out tracking (a MARG sensor for angles; a camera or surface scanner looking at the patient and/or the ceiling for translation tracking). However, since positioning is still done manually (most mobile C-arms are not motorized), it is not a huge improvement overall.
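To make the repositioning idea concrete: with orientation-only tracking, "bring it back to the same spot" reduces to storing the sensor orientation before the C-arm is moved away and then reporting the remaining rotation as the technician moves it back. A minimal sketch, assuming the MARG sensor reports unit quaternions (the function and variable names are illustrative, not any particular sensor API):

```python
import math

def angle_between(q_saved, q_current):
    """Remaining rotation in degrees between two unit quaternions (w, x, y, z):
    the orientation stored before moving the C-arm away versus the
    orientation the MARG sensor reports now."""
    dot = abs(sum(a * b for a, b in zip(q_saved, q_current)))
    dot = min(1.0, dot)  # guard against rounding slightly above 1
    return math.degrees(2.0 * math.acos(dot))
```

The technician would rotate the C-arm until this angle drops below some tolerance; translation back to the spot still has to be judged by eye or by a separate camera/surface scanner, since the MARG sensor gives no position.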
I keep hearing from companies about the importance of dose reduction, but it is rarely a concern for patients, and the physicians I have talked to are generally not concerned about the radiation. They often seem to accept higher exposure if it allows them to save time or make things more convenient.