Dear Slicer community,
I am developing a module that includes a navigation step for a surgical robot (it is related to an older topic posted by a colleague some time ago), and I would like to visualize where we have already passed with respect to a reference volume. We currently map the OPC UA position into Slicer through an OpenIGTLink node, as suggested in the linked topic, and use that position to update a model.
Basically, what I am trying to achieve is the following behaviour, for visualization purposes:
- Activate the navigation of the system
- Navigate the system through a reference volume
- If the system is inside a predefined ROI, colour the corresponding voxel green
- If the system passes beyond the predefined ROI, colour the corresponding voxel red
The green and red values could be represented as a segmentation node displayed in the label layer (exactly like segmentation volumes are shown).
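Something like this is what I have in mind for setting up the label layer (an untested sketch; the node names "ReferenceVolume" and "NavigationLabelmap" are just placeholders for my own nodes):

```python
import slicer

# Reference volume that the robot navigates through (placeholder name)
referenceVolume = slicer.mrmlScene.GetFirstNodeByName("ReferenceVolume")

# Empty labelmap that will record where the system has passed
labelmap = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode", "NavigationLabelmap")

# Copy geometry (origin, spacing, directions, extent) from the reference volume
slicer.modules.volumes.logic().CreateLabelVolumeFromVolume(slicer.mrmlScene, labelmap, referenceVolume)

# Show it in the label layer of the slice views
slicer.util.setSliceViewerLayers(label=labelmap, labelOpacity=0.5)
```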
I believe the answer lies in the Segment Editor module, because the kind of interaction I have in mind is similar to the Paint effect, but I would like to drive it programmatically from a Python module.
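As an alternative to driving the Paint effect itself, my current idea is to write directly into the labelmap voxel under the tool tip, something along these lines (untested sketch; the node names, the ROI bounds and the label values 1 = green / 2 = red are my own placeholders and depend on the colour table):

```python
import vtk
import slicer

toolTransform = slicer.mrmlScene.GetFirstNodeByName("ToolToReference")    # OpenIGTLink transform node (placeholder name)
labelmap = slicer.mrmlScene.GetFirstNodeByName("NavigationLabelmap")      # labelmap created above

# Hypothetical ROI given as axis-aligned RAS bounds (Rmin, Rmax, Amin, Amax, Smin, Smax)
roiBounds = [-20.0, 20.0, -20.0, 20.0, -10.0, 10.0]

def paintCurrentPosition():
    # Tool tip position in RAS (world) coordinates
    toolToWorld = vtk.vtkMatrix4x4()
    toolTransform.GetMatrixTransformToWorld(toolToWorld)
    tipRAS = [toolToWorld.GetElement(0, 3), toolToWorld.GetElement(1, 3), toolToWorld.GetElement(2, 3)]

    insideRoi = (roiBounds[0] <= tipRAS[0] <= roiBounds[1]
                 and roiBounds[2] <= tipRAS[1] <= roiBounds[3]
                 and roiBounds[4] <= tipRAS[2] <= roiBounds[5])
    labelValue = 1 if insideRoi else 2  # 1 -> green, 2 -> red (depends on the colour table)

    # Convert the RAS position to voxel (IJK) coordinates of the labelmap
    rasToIjk = vtk.vtkMatrix4x4()
    labelmap.GetRASToIJKMatrix(rasToIjk)
    ijk = rasToIjk.MultiplyPoint(tipRAS + [1.0])
    i, j, k = [int(round(c)) for c in ijk[:3]]

    voxels = slicer.util.arrayFromVolume(labelmap)  # numpy array indexed [k, j, i]
    if 0 <= k < voxels.shape[0] and 0 <= j < voxels.shape[1] and 0 <= i < voxels.shape[2]:
        voxels[k, j, i] = labelValue
        slicer.util.arrayFromVolumeModified(labelmap)  # notify Slicer that the voxels changed
```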
I am also assuming that I can do this processing without blocking the application.
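To keep the application responsive, my plan is to react to incoming transform updates with an observer instead of a polling loop, roughly like this (untested sketch, reusing paintCurrentPosition from above; which event to observe is my assumption):

```python
import slicer

toolTransform = slicer.mrmlScene.GetFirstNodeByName("ToolToReference")  # placeholder name

def onTransformModified(caller, event):
    # Runs in the main (GUI) thread on every pose update, so it must stay lightweight
    paintCurrentPosition()

observerTag = toolTransform.AddObserver(
    slicer.vtkMRMLTransformableNode.TransformModifiedEvent, onTransformModified)

# When navigation is deactivated:
# toolTransform.RemoveObserver(observerTag)
```

Does this sound like a reasonable approach, or is there a better way to do this kind of programmatic painting?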
I would appreciate any help or pointers you could provide.
Thanks in advance.
Davide