I need to connect a device to Slicer through a TCP/IP socket connection. The device runs in real time and its data need to be sent to Slicer every few milliseconds. Meanwhile, commands will be sent from Slicer to the device. Could you please suggest how to implement the TCP/IP communication? Can I use OpenIGTLink? If yes, could I find any documents or examples? Should I implement the socket connection using Qt Socket, Qt TCPServer, etc.? Or is there another way?
Any suggestion is highly appreciated!
OpenIGTLink is a very simple TCP/IP protocol that is already well supported in Slicer. I would highly recommend using it, as it is not easy to implement stable and efficient multi-threaded communication in a frontend application. You can find OpenIGTLink implementations for most common programming languages (C++, Python, Java, MATLAB, etc.).
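For example, a device-side script could stream its pose to Slicer with the pyigtl Python package. This is only a minimal sketch, assuming pyigtl's `OpenIGTLinkServer` and `TransformMessage` API and a made-up device name; in Slicer you would add an OpenIGTLinkIF connector in client mode pointing at the same port.

```python
import time
import numpy as np
import pyigtl  # pip install pyigtl

# Device side: act as an OpenIGTLink server that a Slicer
# OpenIGTLinkIF connector (client mode, port 18944) connects to.
server = pyigtl.OpenIGTLinkServer(port=18944)

while True:
    if not server.is_connected():
        time.sleep(0.1)
        continue

    # Hypothetical device readout; replace with real sensor values.
    matrix = np.eye(4)
    transform = pyigtl.TransformMessage(matrix, device_name="DeviceToReference")
    server.send_message(transform, wait=True)

    time.sleep(0.05)  # ~20 Hz streaming rate
```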
Sorry to disturb you, Mr. Lasso!
I wrote a server script in Python to get the coordinates sent from Slicer, but what I got was bytes. How do I convert it into a string? Decoding still gives an error.
There is no need to develop a Python script for this, because sending/receiving transforms, markups, images, etc. to/from Slicer via TCP/IP (using the OpenIGTLink protocol) is already implemented in the pyigtl Python package.
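For instance, instead of parsing raw bytes yourself, the coordinates arrive as an already parsed 4x4 matrix. A minimal sketch, assuming pyigtl's `OpenIGTLinkClient` API, the default port 18944, and an example transform name `ToolToReference`:

```python
import pyigtl  # pip install pyigtl

# Connect to Slicer's OpenIGTLinkIF connector running in server mode on port 18944.
client = pyigtl.OpenIGTLinkClient(host="127.0.0.1", port=18944)

# Block until a transform named "ToolToReference" arrives (the device name is an example).
message = client.wait_for_message("ToolToReference", timeout=5)
print(message.matrix)  # 4x4 homogeneous transform as a NumPy array
```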
Thank you!
If I need to get the coordinates sent by Slicer in real time, i.e. whenever the coordinates change in Slicer it should trigger sending them, how should I write this script?
Yes, Slicer sends the updated transform, image, markup, etc. automatically when there is a change. You can set this up in the OpenIGTLinkIF module.
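On the receiving side, the script can simply keep asking for the latest message and react when it differs from the previous one. A minimal sketch, again assuming the pyigtl API (including that `wait_for_message` returns `None` on timeout) and an example device name:

```python
import numpy as np
import pyigtl

client = pyigtl.OpenIGTLinkClient(host="127.0.0.1", port=18944)
previous_matrix = None

while True:
    # Get the most recent "ToolToReference" transform pushed by Slicer.
    message = client.wait_for_message("ToolToReference", timeout=1)
    if message is None:
        continue  # nothing received within the timeout
    if previous_matrix is None or not np.allclose(message.matrix, previous_matrix):
        previous_matrix = message.matrix
        print("Coordinates changed:\n", message.matrix)
```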
If you want to use Unity then I would recommend starting from an existing project; there are a couple of them that show the Slicer scene, allow moving nodes by grabbing and moving them physically, reslice images, etc. Search for augmented reality and HoloLens in this forum for more information.
It did not make sense to add the POSITION message to Slicer, as it is just barely smaller than a TRANSFORM message and much more limited. I would recommend using the TRANSFORM message instead.
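A position fits naturally into the translation column of a TRANSFORM message, so nothing is lost by using it. A minimal sketch, assuming pyigtl's `TransformMessage` and an example device name:

```python
import numpy as np
import pyigtl

x, y, z = 10.0, 20.0, 30.0   # example position in mm
matrix = np.eye(4)
matrix[0:3, 3] = [x, y, z]   # put the position into the translation column
message = pyigtl.TransformMessage(matrix, device_name="PointerToReference")
```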
I want to receive the image from Slicer through Python and display it. My code is like this. I can get the message, but how can I extract the pixel array and display it as a picture?
I want to transfer the image in Slicer to other software in real time.
For example, I could develop a mobile phone app based on TCP, then operate Slicer on my computer and transmit the image to my phone or a HoloLens device in real time.
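Regarding extracting and displaying the image: once the IMAGE message is parsed by pyigtl, the pixel data should already be a NumPy array. A minimal sketch, assuming the array is exposed as `message.image`, using matplotlib for display, and an example device name "Image":

```python
import numpy as np
import pyigtl
import matplotlib.pyplot as plt

client = pyigtl.OpenIGTLinkClient(host="127.0.0.1", port=18944)

# Wait for an IMAGE message sent by Slicer's OpenIGTLinkIF connector.
message = client.wait_for_message("Image", timeout=10)

# OpenIGTLink images may be 3D (slices, rows, columns); take the first slice for 2D display.
# Assumes pyigtl exposes the pixel data as a NumPy array on the message (here: message.image).
voxels = np.squeeze(message.image)
slice_2d = voxels if voxels.ndim == 2 else voxels[0]

plt.imshow(slice_2d, cmap="gray")
plt.show()
```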
For HoloLens there are already full-featured open-source augmented reality applications that you can use as is, or customize/extend to match your needs. See for example: