Sending OpenIGTLink messages in response to MRML node changes is implemented here:
You could probably leave this code mostly as is (just add an option to send output messages asynchronously) and update the SendMessage method implementation in OpenIGTLinkIO to support asynchronous sending: make a copy of the message buffer, put it in a queue, and have a separate thread take items from the queue and send them.
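The copy/queue/worker pattern described above can be sketched in plain Python (the real change belongs in the C++ SendMessage implementation in OpenIGTLinkIO; the class and method names here are illustrative only, not part of any existing API):

```python
import threading
import queue

class AsyncSender:
    """Illustrative sketch of asynchronous message sending:
    copy the buffer, enqueue it, send from a worker thread."""

    def __init__(self, send_func):
        self._send = send_func          # blocking send, e.g. a socket write
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def send_async(self, message_buffer):
        # Copy the buffer so the caller may reuse or modify the original.
        self._queue.put(bytes(message_buffer))

    def _run(self):
        while True:
            item = self._queue.get()
            if item is None:            # sentinel enqueued by stop()
                break
            self._send(item)
            self._queue.task_done()

    def stop(self):
        self._queue.put(None)
        self._worker.join()


# Usage: the caller enqueues and returns immediately; the worker thread
# performs the (potentially slow) network sends in FIFO order.
sent = []
sender = AsyncSender(sent.append)
for i in range(3):
    sender.send_async(b"IGTL message %d" % i)
sender.stop()
```

The key design point is that the MRML-observing thread never blocks on the network: only the cost of copying the buffer is paid synchronously.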
Before you start this, you need to build Slicer and the SlicerOpenIGTLink extension. It would also make sense to do some profiling and fix the crash before you start developing asynchronous message sending, so a RelWithDebInfo build would probably be the most suitable.
Hi,
You're absolutely on the right track, and your observations are valid.
To stream a slice (like the red slice) from 3D Slicer to another device, OpenIGTLink is indeed the standard method. You're correct that vtkMRMLIGTLConnectorNode with RegisterOutgoingMRMLNode is used to transmit data, but image slice streaming isn't as straightforward as, for example, sending transforms or tracking data.
A few clarifications:
Sending image slices: There's no built-in MRML node specifically for individual slice views (like red/yellow/green) that directly maps to an OpenIGTLink ImageMessage. Typically, volume nodes (like vtkMRMLScalarVolumeNode) are used to send 3D data. If you want to send a slice, you'd need to extract the 2D image data from the view, convert it into a format suitable for OpenIGTLink, and send it manually using pyIGTLink or your own OpenIGTLink integration.
Your approach with pyIGTLink is sensible if you want finer control, especially over 2D ImageMessages. You will need to capture the slice as a 2D image (e.g., using vtkImageReslice or extracting it from the slice view), then send it as a serialized ImageMessage.
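As a rough sketch of the extract-serialize-send steps, assuming the volume is available as a NumPy array (in Slicer, slicer.util.arrayFromVolume provides this); the packing format below is purely illustrative and is NOT the actual OpenIGTLink IMAGE wire format:

```python
import struct
import numpy as np

# Synthetic 3D volume standing in for slicer.util.arrayFromVolume(volumeNode);
# axis order is (slice, row, column).
volume = np.arange(4 * 8 * 8, dtype=np.uint16).reshape(4, 8, 8)

def extract_axial_slice(vol, index):
    """Pull one axis-aligned 2D slice out of the volume.
    (For oblique planes, vtkImageReslice would be used instead.)"""
    return np.ascontiguousarray(vol[index])

def pack_slice(slice2d):
    """Serialize a slice as a tiny (rows, cols, bytes-per-pixel) header
    followed by raw pixels -- an illustration of the serialization step,
    not the real OpenIGTLink IMAGE header."""
    rows, cols = slice2d.shape
    header = struct.pack("!HHB", rows, cols, slice2d.dtype.itemsize)
    return header + slice2d.tobytes()

payload = pack_slice(extract_axial_slice(volume, 2))
print(len(payload))  # 5-byte header + 8*8*2 bytes of pixels = 133
```

A real implementation would fill in the full IMAGE message header (spacing, orientation, endianness) as defined by the OpenIGTLink specification.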
Performance concerns over WLAN are justified. Streaming raw image data at full resolution and frequency can be too heavy for wireless connections, particularly for mobile receivers. Without compression, latency and performance will suffer.
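To make the bandwidth concern concrete, here is a back-of-the-envelope sketch: stride-downsampling by 2 in each direction and halving the frame rate cuts raw bandwidth by a factor of 8 (the frame size and rates below are assumed for illustration):

```python
import numpy as np

frame = np.zeros((512, 512), dtype=np.uint8)   # one raw 8-bit slice image

full_rate_bps = frame.nbytes * 30 * 8          # 30 fps, in bits per second
# Take every 2nd pixel in each direction and halve the frame rate.
downsampled = frame[::2, ::2]
reduced_bps = downsampled.nbytes * 15 * 8      # 15 fps

print(full_rate_bps / 1e6, reduced_bps / 1e6)  # ~62.9 vs ~7.9 Mbit/s
```

Even the reduced figure can be marginal on a congested WLAN, which is why compression (next point) matters.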
Video streaming with compression (e.g. VP9): You're right again that such features aren't available in the Python layer. If you're aiming for efficient, compressed video streaming, then yes: switching to C++ and extending Slicer to leverage something like OpenIGTLink's video streaming support with real-time compression is the way to go. It would involve capturing the rendered slice image, encoding it (e.g. using libvpx for VP9), and sending it over OpenIGTLink as a VideoMessage.
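A proper implementation would use a real video codec (such as VP9 via libvpx) on the C++ side, but a quick Python illustration shows why compression matters so much: even lossless zlib on a smooth synthetic slice shrinks the payload dramatically. (Real ratios depend on image content, and a video codec exploiting temporal redundancy does far better than this stand-in.)

```python
import zlib
import numpy as np

# Smooth synthetic "slice": medical images are typically highly compressible.
x = np.linspace(0, 255, 256, dtype=np.uint8)
frame = np.tile(x, (256, 1))          # horizontal gradient, 64 KiB raw

raw = frame.tobytes()
compressed = zlib.compress(raw, 6)
print(len(raw), len(compressed))      # compressed is a small fraction of raw
```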
Summary:
pyIGTLink is a valid choice for simple image sending.
For real-time, compressed video streaming, C++ development is likely necessary.
You might also want to look into screen capture of the slice view followed by encoding, if a 2D visual stream (rather than the raw data) is enough for your application.
Let me know if you'd like a basic example of extracting a 2D slice image in Python; it's a good first step before diving into C++.