Expected behavior: be able to draw a dot on top of the RGB camera feed that is input to the slicer module using plus server. Imagine being able to draw a dot on the video feed from this tutorial (https://www.youtube.com/watch?v=MOqh6wgOOYs).
Actual behavior: not able to access the video feed and therefore not able to draw a dot.
You have access to the video feed in both Plus and Slicer. Where would you like to access the image data?
If you want to access the video feed to burn in the dot, then I would recommend a much better solution: define a transform (in either Plus or Slicer) that encodes the marker position in the image coordinate system. Then in Slicer, you apply that transform to a small sphere or a markup fiducial point (no programming is needed).
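To make the idea concrete: such a transform is just a 4x4 homogeneous matrix whose translation column holds the marker position in the image (pixel/IJK) coordinate system. A minimal sketch, assuming hypothetical node names and an example pixel position (the Slicer-specific calls are shown as comments):

```python
import numpy as np

def marker_to_image_matrix(i, j, k=0.0):
    """Build a 4x4 homogeneous matrix that translates the origin to pixel (i, j, k)."""
    m = np.eye(4)
    m[:3, 3] = (i, j, k)
    return m

# Example: marker detected at pixel column 320, row 240 (values are assumptions)
m = marker_to_image_matrix(320, 240)

# Inside Slicer, push the matrix into a linear transform node and parent the
# sphere/fiducial under it (node names here are assumptions, not defaults):
#   t = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "MarkerToImage")
#   slicer.util.updateTransformMatrixFromArray(t, m)
#   markerNode.SetAndObserveTransformNodeID(t.GetID())
```

Updating the matrix from your module then moves the marker without touching the pixel data at all.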
We would just like to access the stream from Slicer, as that is where we establish our transformation pipeline. Is there more specific documentation or a command you can guide us to for accessing the video stream?
In this demo video that we have (here), we place a fiducial on the CT and want to relay that information onto the video stream (bottom-left window) by leveraging the ArUco marker cube that exists in both the CT and the real world. However, we were unsuccessful in doing that, which is why our current solution is launching a separate OpenCV window (the black window at the bottom middle of the view).
Do you use the Plus toolkit? It can take care of all the tracking, using basically any tracker, from inexpensive ArUco-based tracking to commercial surgical navigation systems.
My team members have attempted to perform the steps you mentioned, and these are their follow-up questions.
You mentioned that we have access to the video feed. We are trying to develop a custom module (i.e., our own Python script) that we can run in Slicer, and we would like to grab frames from the video feed using the module and then update them or overlay something on them. What would be the best method to do this?
You had also mentioned another solution in which we specify a transform that "specifies marker position in the image coordinate system". How exactly do we do this? So far, we have created a new transform and linked it to a sphere model. As Tina mentioned earlier, we would like a dot to appear on top of our video feed whose position is updated programmatically in our custom module. By playing around with the "Transforms" module in Slicer, we understand that we can visualize the transform as an array of arrows, spheres, etc., but what we are after is a dot appearing at a specific pixel position that we set through our module.
Below is what we were able to display with the Transforms module. The red slice shows our camera feed; this is where we would like our dot to appear. Right now, as you can see from the array of spheres, it is just showing the visualization of the transform.
Thank you so much for your time. We hope to hear from you soon.
I would not recommend burning modifications into the pixel data, as you can display objects overlaid in slice and 3D views instead. But of course you have easy access to the pixel data from Python (as a NumPy array), which allows you to do anything; see Documentation/Nightly/Developers/Python scripting - Slicer Wiki.
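If you do want to modify pixels directly, here is a minimal sketch of drawing a dot into an RGB frame with NumPy. The node name and dot position are assumptions; inside Slicer you would fetch the array with `slicer.util.arrayFromVolume()` and signal the change with `slicer.util.arrayFromVolumeModified()`:

```python
import numpy as np

def draw_dot(frame, row, col, radius=4, color=(255, 0, 0)):
    """Burn a filled dot of the given color into an RGB frame (H x W x 3)."""
    rr, cc = np.ogrid[:frame.shape[0], :frame.shape[1]]
    mask = (rr - row) ** 2 + (cc - col) ** 2 <= radius ** 2
    frame[mask] = color
    return frame

# Stand-in frame; inside Slicer this would come from the vector volume node
# receiving the PlusServer stream, e.g. (node name is an assumption):
#   node = slicer.util.getNode("Image_Image")
#   frame = slicer.util.arrayFromVolume(node)[0]   # first (only) frame slice
frame = np.zeros((480, 640, 3), dtype=np.uint8)
draw_dot(frame, 240, 320)
# After editing in Slicer, tell views the volume changed so they refresh:
#   slicer.util.arrayFromVolumeModified(node)
```

Because the array is a view into the volume's pixel buffer, edits are applied in place; only the modified-event call is needed to refresh the views.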
You can create a model node that contains a small sphere marker (see this example) and apply the transform to this model to set its position. To make the sphere appear in the slice view, enable slice intersection display in the model’s display node.
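A sketch of those steps, meant to be run in Slicer's Python console (the radius, color, and transform node name are assumptions; `SetVisibility2D` is the slice-intersection toggle in recent Slicer versions, so the Slicer-specific imports are deferred into the function):

```python
# Sphere-marker sketch for Slicer's Python console (names/values are assumptions).

SPHERE_RADIUS_MM = 2.0  # small dot-sized marker

def make_marker(radius_mm=SPHERE_RADIUS_MM):
    """Create a red sphere model node visible in slice views; return the node."""
    import slicer, vtk  # available inside Slicer's embedded Python
    sphere = vtk.vtkSphereSource()
    sphere.SetRadius(radius_mm)
    sphere.Update()
    model = slicer.modules.models.logic().AddModel(sphere.GetOutputPort())
    disp = model.GetDisplayNode()
    disp.SetColor(1.0, 0.0, 0.0)   # red dot
    disp.SetVisibility2D(True)     # enable slice intersection display
    return model

# Usage inside Slicer, assuming a transform node named "MarkerToImage" exists:
#   model = make_marker()
#   model.SetAndObserveTransformNodeID(
#       slicer.util.getNode("MarkerToImage").GetID())
```

With that in place, updating the transform's matrix from your module moves the dot in the slice view showing the camera feed.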