Automatically create sequence

Hello, I’ve installed the Sequences extension and it works fine.

Currently I’m working on a module that receives MRI images (DICOM format) via a specific Siemens API and displays them in real time.

1) First question:

I’m displaying data on a node, so setting it as a proxy node works fine, but I would like to automate this so that when the first data arrives, a new sequence is created and the proxy node is set. (The node is created as the first data arrives.)

I found a promising function in the Sequence Browser logic source code, but I did not manage to access it from my own module, and I’m not sure it’s the right way to do it. What do you recommend?

2) Second question:

I’ve also tried building a MultiVolume, which works quite well but needs a bit more processing of the data. I’ve tried both solutions (MultiVolume and sequence) and they seem to give equal results (for displaying) at 10 images/sec (I display the volume, i.e. call “SetAndObserveImageData”, only when it is full, so roughly twice per second). But when I tried a higher image rate, like 20 images/sec, the Slicer app freezes or crashes.

- Is this to be expected? I’m not sure how rendering is done in Slicer, so I’m not sure how much stress I am putting on the app.
- Is calling SetAndObserveImageData on the volume or MultiVolume node the right way to do this? I saw that a streaming volume node exists but didn’t find much documentation on it.
- Should I use a timer to update the display at a fixed rate, no matter the rate at which images are received?

That’s a lot of questions; thank you for your assistance.

Can you share a code snippet of what you tried in your own module, and any resulting error(s)?

I did something like this after including vtkSlicerSequenceBrowserLogic.h:

qSlicerAbstractCoreModule* SequenceBrowserModule =
  qSlicerCoreApplication::application()->moduleManager()->module("SequenceBrowser");
if (!SequenceBrowserModule)
{
  qCritical() << "SequenceBrowserModule module is not found";
  return;
}
vtkSlicerSequenceBrowserLogic* SequenceBrowserLogic =
  vtkSlicerSequenceBrowserLogic::SafeDownCast(SequenceBrowserModule->logic());
if (!SequenceBrowserLogic)
{
  qCritical() << "SequenceBrowserModule logic is not found";
  return;
}

SequenceBrowserLogic-> /* ...any logic function... */

Compilation said I could not access logic(). Maybe it’s just a CMake problem; my project has 3 subdirectories: mine (SiCo), Sequences, and SequenceBrowser.

So I think that this part is a CMake issue. In order to use the logic of another loadable module, you need to link against it in your CMake. See https://github.com/Slicer/Slicer/blob/659f1c0b8f75e8c3358e9bddd3e04cc6ee179375/Modules/Loadable/CropVolume/Logic/CMakeLists.txt#L20 for an example.

Here’s an example of how to use a module logic class from another module logic class:


Just search for this class name. You will find that the instance is set from the module class at setup time.
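For illustration, here is a minimal sketch of that pattern, assuming your module is called SiCo; qSlicerSiCoModule, vtkSlicerSiCoLogic, and the SetSequenceBrowserLogic setter are hypothetical names you would define yourself, not existing Slicer API:

#include <qSlicerAbstractCoreModule.h>
#include <qSlicerCoreApplication.h>
#include <qSlicerModuleManager.h>
#include <vtkSlicerSequenceBrowserLogic.h>
// plus your module's own headers (qSlicerSiCoModule.h, vtkSlicerSiCoLogic.h)

void qSlicerSiCoModule::setup()
{
  this->Superclass::setup();

  // Look up the SequenceBrowser module's logic once, after the modules are loaded
  qSlicerAbstractCoreModule* sequenceBrowserModule =
    qSlicerCoreApplication::application()->moduleManager()->module("SequenceBrowser");
  vtkSlicerSequenceBrowserLogic* sequenceBrowserLogic = sequenceBrowserModule
    ? vtkSlicerSequenceBrowserLogic::SafeDownCast(sequenceBrowserModule->logic())
    : nullptr;

  // Store the pointer in your own logic class through a setter you add yourself
  vtkSlicerSiCoLogic* siCoLogic = vtkSlicerSiCoLogic::SafeDownCast(this->logic());
  if (siCoLogic && sequenceBrowserLogic)
  {
    siCoLogic->SetSequenceBrowserLogic(sequenceBrowserLogic);
  }
}

For this lookup to succeed, the SequenceBrowser module typically needs to be listed among your module’s dependencies (so it is loaded first) and its logic library linked in your Logic CMakeLists, as in the CropVolume example above.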

It may be simpler to just activate sequence recording (create a sequence browser node, set the input proxy node, and enable recording). Then, whenever you receive a new volume, you just update the proxy volume with it. The new volume will be automatically added to the sequence. You don’t need to call sequence module logic functions at all.
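For reference, a minimal sketch of that setup in C++; treat the exact method names (AddSynchronizedNode, SetRecording, SetRecordingActive) as assumptions to verify against the Sequences source you build against, and the function name and node name here are just placeholders:

#include <vtkMRMLScalarVolumeNode.h>
#include <vtkMRMLScene.h>
#include <vtkMRMLSequenceBrowserNode.h>
#include <vtkMRMLSequenceNode.h>
#include <vtkSlicerSequenceBrowserLogic.h>

// scene: the current MRML scene; browserLogic: the logic obtained as shown earlier;
// proxyVolumeNode: the volume node that your module creates and fills with incoming data
void StartLiveRecording(vtkMRMLScene* scene,
                        vtkSlicerSequenceBrowserLogic* browserLogic,
                        vtkMRMLScalarVolumeNode* proxyVolumeNode)
{
  // The browser node drives synchronization, recording, and playback
  vtkMRMLSequenceBrowserNode* browserNode = vtkMRMLSequenceBrowserNode::SafeDownCast(
    scene->AddNewNodeByClass("vtkMRMLSequenceBrowserNode", "LiveMRIBrowser"));

  // Passing nullptr as the sequence node asks the logic to create one and
  // synchronize it with the proxy volume
  vtkMRMLSequenceNode* sequenceNode =
    browserLogic->AddSynchronizedNode(nullptr, proxyVolumeNode, browserNode);

  // Enable recording for this sequence, then activate recording on the browser
  browserNode->SetRecording(sequenceNode, true);
  browserNode->SetRecordingActive(true);

  // From now on, every proxyVolumeNode->SetAndObserveImageData(...) call
  // appends a new item to the sequence automatically.
}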

This is how we record live images and tool tracking information in all real-time navigation applications. See www.slicerigt.org for examples and tutorials.

I did try it and it works well, but the thing is that before the transmission begins there are no nodes to set as proxy. The node where the data is displayed is created automatically as we receive data. So I need to set the proxy node and start recording during the transmission, which is not really convenient, and doing this I miss the beginning of the transmission.

I really can’t create the node before receiving the transmission, because I can receive an unknown number of series, and I need to create one node per series.

Do you use OpenIGTLink? Which module creates the node automatically?

You only need to create one proxy node for an entire sequence.

We also have an OpenIGTLink implementation for receiving real-time images from Siemens MRI scanners and for controlling the scan plane position and orientation from Slicer. It probably uses the same protocol that you do. If interested, contact @tokjun.

Thank you for your answer,

The module that creates the node is the one I am currently developing. It is a little bit similar to OpenIGTLink, but the communication with the MRI is via a specific API that uses the HTTP protocol to open a websocket connection.

Then I receive DICOM images of magnitude, phase, and temperature. I sort them on the fly and display reconstructed volumes, trying to do it in real time.

I did manage to create the sequence and record automatically. But I’m still working on the second question: when the display rate is high (something like displaying 2 volumes of 5 slices every 250 ms), the app crashes or freezes. In the sequence solution I’m using a regular vtkMRMLScalarVolumeNode updated with SetAndObserveImageData.

I don’t know if it is suited to this kind of stream. I saw that a vtkMRMLStreamingVolumeNode exists; could you tell me if it is more appropriate and whether there are difficulties in using it?

If processing takes too long, then the response time of the application may get worse, but a crash should never happen. A crash indicates that there is a bug in your module. Do you receive the images on a different thread? Do you use mutexes to communicate with the main thread? I see two potential solutions:

  • Option A: The simple one. Create a small standalone application that receives images from Siemens and sends them to Slicer through OpenIGTLink. You only need to link your small application to the OpenIGTLink library. No need to use threading! Slicer’s OpenIGTLinkIF module can already receive images (and real-time tracking, sensor, etc. information) on a background thread and pass them to the main thread robustly and efficiently. This is how we usually implement real-time communication with hardware devices that use a proprietary communication protocol rather than an OpenIGTLink interface.
  • Option B: The complicated one. Create a loadable module that communicates with the scanner on background threads. When you receive an image, put it in a thread-safe queue and signal the main thread that you have received new data. In the main thread, retrieve the dataset from the queue and set it in a node in the scene (a sketch follows this list).
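A minimal sketch of Option B, assuming a Qt-based loadable module; the class name ImageQueueBridge and the ProxyVolumeNode member are hypothetical placeholders for your own code. The background websocket thread only touches the mutex-protected queue; the MRML node is updated exclusively in the main thread, where the queued Qt signal delivers the notification:

#include <QMutex>
#include <QMutexLocker>
#include <QObject>
#include <queue>
#include <vtkImageData.h>
#include <vtkMRMLScalarVolumeNode.h>
#include <vtkSmartPointer.h>

class ImageQueueBridge : public QObject
{
  Q_OBJECT
public:
  // Called from the background (websocket) thread.
  void PushFrame(vtkSmartPointer<vtkImageData> frame)
  {
    {
      QMutexLocker locker(&this->Mutex);
      this->Frames.push(frame);
    }
    // Emitted from the worker thread; with the default (automatic) connection type
    // the slot below is queued and executed later in the main thread's event loop.
    emit FrameReceived();
  }

  vtkMRMLScalarVolumeNode* ProxyVolumeNode{ nullptr };

signals:
  void FrameReceived();

public slots:
  // Runs in the main thread (the object must be created in the main thread).
  void ProcessPendingFrames()
  {
    for (;;)
    {
      vtkSmartPointer<vtkImageData> frame;
      {
        QMutexLocker locker(&this->Mutex);
        if (this->Frames.empty())
        {
          return;
        }
        frame = this->Frames.front();
        this->Frames.pop();
      }
      if (this->ProxyVolumeNode)
      {
        // Safe here: we are in the main thread, so MRML/VTK calls are allowed.
        this->ProxyVolumeNode->SetAndObserveImageData(frame);
      }
    }
  }

private:
  QMutex Mutex;
  std::queue<vtkSmartPointer<vtkImageData>> Frames;
};

// Main-thread setup:
//   QObject::connect(bridge, &ImageQueueBridge::FrameReceived,
//                    bridge, &ImageQueueBridge::ProcessPendingFrames);

If you prefer to decouple the display rate from the incoming image rate (your earlier timer question), you could skip the signal and instead drain the queue from a QTimer firing in the main thread at a fixed interval, dropping or merging frames that arrive faster than you can render.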

vtkMRMLStreamingVolumeNode is only useful if you receive a compressed data stream and you want to store the compressed stream while viewing the uncompressed images on screen.

What is the name and version of the protocol that you use with the Siemens scanner? As I wrote above, we may already have a working solution.

@tokjun - can you give some information about the Siemens MRI real-time image receiving/scan plane control interface that you developed?

Thank you for this detailed answer. The websocket communication is in a separate thread; I may indeed need a few more mutexes.

The protocol used is SRC (Scanner Remote Toolkit). I read the OpenIGTLink documentation and didn’t find anything about it. Also, a possible issue is that I receive images of magnitude, phase, and temperature in a mixed order; that’s the main reason I developed my own module, since I need to sort them as I receive them.

I’ve contacted @tokjun and I’m waiting for an answer.