How can I transfer my custom ultrasound data from Python to 3D Slicer through OpenIGTLink?

Hi, everyone.

I am conducting academic research and want to use 3D Slicer to reconstruct a 3D volume from 2D ultrasound images.

I only have offline ultrasound images (both DICOM and .png) and a 6-DOF coordinate for each ultrasound image; I do not have an ultrasound device or positioning equipment such as the OptiTrack recommended in the IGT tutorial. However, I found an IGT Python implementation from Lassoan. This Python code has two functions, pyigtl.PositionMessage and pyigtl.ImageMessage, which match my offline ultrasound data.
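For context, here is a rough sketch of how those pyigtl message types could be driven from offline data. This is only an assumption about the setup, not a confirmed recipe: it assumes pyigtl is installed (`pip install pyigtl`), that Slicer's OpenIGTLinkIF module has a client connector to localhost:18944, and the frames/matrices below are placeholders for real data:

```python
# Sketch: stream offline ultrasound frames to 3D Slicer with pyigtl.
# Assumptions: pyigtl is installed, Slicer's OpenIGTLinkIF module is
# connected as a client on port 18944, and each frame comes with a
# known 4x4 ImageToReference matrix.
import time
import numpy as np

def stream_frames(frames, matrices, port=18944):
    """Send each (image, ImageToReference matrix) pair to Slicer."""
    import pyigtl  # imported here so the rest of the file runs without it
    server = pyigtl.OpenIGTLinkServer(port=port)
    while not server.is_connected():  # wait until Slicer connects
        time.sleep(0.1)
    for voxels, matrix in zip(frames, matrices):
        server.send_message(pyigtl.ImageMessage(voxels, device_name="Image"))
        server.send_message(
            pyigtl.TransformMessage(matrix, device_name="ImageToReference"))
        time.sleep(0.05)  # crude pacing between frames

# Placeholder data: ten blank 256x256 frames with identity poses.
frames = [np.zeros((1, 256, 256), dtype=np.uint8) for _ in range(10)]
matrices = [np.eye(4) for _ in frames]
# stream_frames(frames, matrices)  # uncomment with Slicer listening
```

The device names "Image" and "ImageToReference" are arbitrary here; Slicer creates nodes named after whatever device_name the messages carry.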

Now, the question is how to connect Python to 3D Slicer through OpenIGTLink to realize the 3D reconstruction of the ultrasound images. I’ve seen some SlicerIGT tutorials, and it seems that to fetch data from outside you have to use the Plus server and prepare a configuration file (.xml) for it.

I am now confused about how to write this configuration file correctly. Any ideas?

In addition to preparing the correct .xml file, what other steps do I need to do to finally connect and transmit data correctly from Python to Slicer?

Furthermore, if I transmit the ultrasound data to Slicer correctly, are the remaining steps of the 3D ultrasound reconstruction the same as in this video, which can be implemented by calling the Plus Remote and Volume Reconstruction modules?

Much appreciated!

I would recommend creating an ultrasound sequence file (.igs.mha or .igs.nrrd), which contains the coordinate system for each ultrasound image along with all of the images. You can load these files into Slicer and use the Volume Reconstruction module to paste the frames into a 3D volume.

The specification of the file format is available here. You can find many examples here.
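To give a feel for what goes into such a sequence metafile: it is a plain-text header followed by the raw pixel data. A rough sketch of the header, assuming one ImageToReference transform per frame (the exact field names here are taken from typical Plus examples and should be checked against the format specification):

```
ObjectType = Image
NDims = 3
BinaryData = True
CompressedData = False
DimSize = 256 256 100
ElementSpacing = 1 1 1
ElementType = MET_UCHAR
Seq_Frame0000_ImageToReferenceTransform = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1
Seq_Frame0000_ImageToReferenceTransformStatus = OK
Seq_Frame0000_Timestamp = 0
Seq_Frame0000_ImageStatus = OK
Seq_Frame0001_ImageToReferenceTransform = ...
ElementDataFile = LOCAL
```

Each Seq_Frame entry stores the 4x4 transform for that frame as sixteen row-major numbers; the raw image bytes follow immediately after the `ElementDataFile = LOCAL` line.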

OK, thanks! I will try it.
I have read some documentation of SlicerIGT and OpenIGTLink before, but did not get anywhere.

With the IGT Python implementation, I used pyigtl.ImageMessage successfully, but pyigtl.PositionMessage failed.

With pyigtl.ImageMessage, I can see the moving circle image in 3D Slicer, with a device_name of “Image”. But with pyigtl.PositionMessage, I see nothing in 3D Slicer, and the device_name of “Position” is not shown either.
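One thing that may be worth trying (an assumption on my part, not something confirmed from this setup): send the pose as a TRANSFORM message rather than a POSITION message, since SlicerIGT workflows are usually driven by transforms. A 6-DOF pose given as translation plus unit quaternion can be packed into the 4x4 matrix that pyigtl.TransformMessage expects:

```python
# Sketch: convert a 6-DOF pose (translation + unit quaternion) into the
# 4x4 homogeneous matrix used by pyigtl.TransformMessage.
# Quaternion component order assumed here is (qx, qy, qz, qw).
import numpy as np

def pose_to_matrix(tx, ty, tz, qx, qy, qz, qw):
    """Build a 4x4 homogeneous matrix from translation + unit quaternion."""
    m = np.eye(4)
    m[:3, :3] = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])
    m[:3, 3] = (tx, ty, tz)
    return m

matrix = pose_to_matrix(10.0, 0.0, 5.0, 0.0, 0.0, 0.0, 1.0)
# With a running pyigtl server, the pose could then be sent as:
# server.send_message(
#     pyigtl.TransformMessage(matrix, device_name="ImageToReference"))
```

In Slicer, such a transform shows up as a linear transform node named after the device_name, which can then parent the incoming image node.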

I have not read the Plus toolkit documentation yet.

If you want to reconstruct already recorded data, then it is simpler to create a .igs.nrrd file. Streaming to Slicer via OpenIGTLink is only needed if you want to reconstruct live data.

The tutorials you gave me yesterday seem to create the .mha file through the Plus toolkit. The tutorials are mostly based on C++, which I am not familiar with.
If I take my ultrasound images and 3D location data to create a .nrrd file, is there a way to do it in Python?
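Since a sequence metafile is just a text header followed by raw pixel data, one can be written from plain Python with NumPy. A minimal sketch, with the caveats that the Seq_Frame field names follow typical Plus examples and should be verified against the format specification, and that it assumes equally sized 8-bit grayscale frames with one 4x4 ImageToReference matrix each:

```python
# Sketch: write a .seq.mha sequence metafile from a stack of 2D frames
# plus one 4x4 matrix per frame. Field names (Seq_Frame%04d_...) follow
# the Plus sequence-file examples; verify against the format spec.
import numpy as np

def write_seq_mha(path, frames, matrices):
    frames = np.asarray(frames, dtype=np.uint8)   # shape (n, rows, cols)
    n, rows, cols = frames.shape
    lines = [
        "ObjectType = Image",
        "NDims = 3",
        "BinaryData = True",
        "CompressedData = False",
        f"DimSize = {cols} {rows} {n}",
        "ElementSpacing = 1 1 1",
        "ElementType = MET_UCHAR",
    ]
    for i, m in enumerate(matrices):
        flat = " ".join(f"{v:g}" for v in np.asarray(m).ravel())
        lines.append(f"Seq_Frame{i:04d}_ImageToReferenceTransform = {flat}")
        lines.append(f"Seq_Frame{i:04d}_ImageToReferenceTransformStatus = OK")
        lines.append(f"Seq_Frame{i:04d}_Timestamp = {i * 0.1:g}")
        lines.append(f"Seq_Frame{i:04d}_ImageStatus = OK")
    lines.append("ElementDataFile = LOCAL")  # pixel data follows the header
    with open(path, "wb") as f:
        f.write(("\n".join(lines) + "\n").encode("ascii"))
        f.write(frames.tobytes())             # raw frames, frame-major order

# Tiny placeholder: three 4x5 blank frames with identity poses.
frames = np.zeros((3, 4, 5), dtype=np.uint8)
write_seq_mha("test.seq.mha", frames, [np.eye(4)] * 3)
```

For .png input, each image could be loaded into the frame stack with a reader such as Pillow before calling the writer; the per-frame matrices come from your 6-DOF pose data.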