Manually creating a multiVolume

Hello world,

I’m receiving DICOM data over a websocket connection and I’m able to display it in a volume node. The thing is, now I’m receiving DICOM data from a temporal series and I want to display it as a multi-volume node.

I suppose I need to fill a vtkMRMLMultiVolumeNode with several nodes and then create a vtkMRMLMultiVolumeDisplayNode, but I haven’t found any documentation, and the member functions available on those objects are not really what I expected.

Could anyone give me some guidelines on how to proceed?

Thanks a lot!

Hi -

Probably the best approach is to look at the multivolume importer code and follow that pattern.

Or you could consider using Sequences, which can handle not only time series of volumes but also time series of transforms, fiducials, and other data types.



Thanks for the information. After reading and re-reading this code (the multivolume importer one), there is something I still don’t understand.

It seems to me that the point of the onImportButtonClicked function is to fill mvImageArray with the data of each frame (and then put it into some node). This is done at line 259:

for frameId in range(nFrames):
  # TODO: check consistent size and orientation!
  frame = frames[frameId]
  frameImage = frame.GetImageData()
  frameImageArray = vtk.util.numpy_support.vtk_to_numpy(frameImage.GetPointData().GetScalars())
  mvImageArray.T[frameId] = frameImageArray

Once the loop completes, mvImageArray is not used anywhere else. How come?

The mvImageArray is actually a numpy view of the vtkImageData that stores the multivolume, so when a frame’s data is assigned into it, that data becomes available as part of the MRML scene.
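In other words, writing through the view mutates the vtkImageData’s own scalar buffer in place, so no copy-back step is needed. A rough standard-library analogy (no VTK or numpy required) of how a view shares memory with its underlying buffer:

```python
from array import array

# The "image data": one flat scalar buffer, like vtkImageData's scalars.
scalars = array("h", [0] * 8)  # 8 int16 "voxels"

# A view over the same memory, analogous to what vtk_to_numpy() returns.
view = memoryview(scalars)

# Writing through the view updates the underlying buffer in place...
view[0] = 42
view[7] = -7

# ...so the "image data" sees the change without any explicit copy.
assert scalars[0] == 42 and scalars[7] == -7
```

This is why the importer never reads mvImageArray again: the assignments already landed in the vtkImageData that the multivolume node owns.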

Sorry, but I’m not really good at Python, and I’m writing a loadable module in C++.

So I’m not sure what this line does, or what a C++ equivalent would be:

mvImageArray = vtk.util.numpy_support.vtk_to_numpy(mvImage.GetPointData().GetScalars())

Is the multivolume stored in just one vtkImageData? Don’t you need an array of them to store the several volumes?

The Python VTK API code will translate directly to C++, and the numpy assignments need to become memcpy calls or similar, so you should be able to follow the Python code through to get what you need. A few lines up in the file you’ll see the part that sets the extent and allocates the memory for the vtkImageData based on the number of frames (yes, the frames are ‘components’ of the same image data).
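To clarify the memory layout the C++ port has to reproduce: with nFrames scalar components, VTK interleaves the frames per voxel, so one frame’s samples are not contiguous in memory. A pure-Python sketch of that layout and of the strided copy that `mvImageArray.T[frameId] = frameImageArray` performs (plain lists stand in for the VTK buffers; in C++ this becomes a strided element-by-element copy rather than a single memcpy):

```python
# The multivolume keeps ALL frames in ONE flat scalar buffer, interleaved
# per voxel as components: [v0_f0, v0_f1, ..., v0_f(n-1), v1_f0, ...].
n_voxels, n_frames = 4, 3

# Allocate once, like SetNumberOfScalarComponents(nFrames) + AllocateScalars().
mv_scalars = [0] * (n_voxels * n_frames)

# Toy per-frame data: frame f holds [10*f + 0, 10*f + 1, ...].
frames = [[10 * f + v for v in range(n_voxels)] for f in range(n_frames)]

# Copy each frame in with stride n_frames; this is what the numpy line
# mvImageArray.T[frameId] = frameImageArray does. A single memcpy per frame
# does not work here, because one frame's samples are n_frames apart.
for frame_id, frame_data in enumerate(frames):
    mv_scalars[frame_id::n_frames] = frame_data

# Voxel 0 now holds one sample from each frame.
assert mv_scalars[:n_frames] == [0, 10, 20]
```

In C++ the equivalent would iterate over voxels and write into the multivolume’s scalar pointer with the same stride (or assemble the components with a filter such as vtkImageAppendComponents).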

You can use the Sequences extension’s recording capability to create time sequences in real time. We use it extensively for recording and replaying ultrasound images and tool positions in real time.

All you need to do is create a volume node, set it as a proxy node in the “Sequence browser” module, and start recording. Whenever you update the volume node, the new item will be automatically added to the sequence. You can start/stop recording, replay, save to NRRD or MKV file, etc. Compressed video streams are supported, too.

Note that Slicer can already receive images, transforms, models, points, strings, etc. through OpenIGTLink, a very simple socket-based protocol. If you choose this protocol, you don’t need to implement anything: you can already receive, display, record, replay, and save real-time images and other data (using the SlicerOpenIGTLink, SlicerIGSIO, and SlicerIGT modules).

The Plus toolkit can connect to a wide range of devices and send data through the OpenIGTLink protocol to Slicer, so a solution may already be available for your needs. What kind of images do you receive, and from what device?

Thank you for your answer. I already looked into the IGT and Plus toolkit solution, but I receive DICOM files straight from an MRI via a specific API.

I think I’m almost there with the multivolume solution. I may have a look at the Sequences solution as well.

For now, I think I have correctly set up the frames, image data, components, and so on.

But I can’t access any specific function of the vtkMRMLMultiVolumeNode class.
For instance:


got me a link error: undefined reference to `vtkMRMLMultiVolumeNode::SetNumberOfFrames(int)’

I correctly added #include <vtkMRMLMultiVolumeNode.h>, and this file is located in the Slicer superbuild where I think it should be. There is no problem or warning with the include, so I’m quite helpless.

Are these functions somehow private? In the Python example I did not notice anything special about them.
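For reference: an “undefined reference” at link time, while the #include compiles cleanly, usually means the library that implements vtkMRMLMultiVolumeNode is not in the loadable module’s link list (headers only need the include path; the symbols come from the library). A CMake sketch, assuming the node is provided by the MultiVolumeExplorer module’s MRML library (the target name below is an assumption; check that extension’s own CMakeLists.txt for the exact name):

```cmake
# In the loadable module's CMakeLists.txt: the header compiled fine, but the
# implementation lives in another module's MRML library, which must be linked.
# NOTE: the library name is an assumption; verify it against the providing
# extension's CMakeLists.txt.
set(MODULE_TARGET_LIBRARIES
  ${MODULE_TARGET_LIBRARIES}
  vtkSlicerMultiVolumeExplorerModuleMRML
  )
```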

MultiVolume has very limited capabilities, does not work for IGT applications, and will be deprecated within 1-2 years, so instead of trying to fix the MultiVolume-based solution, I would recommend using the Sequences infrastructure for your project.