I'm receiving DICOM data over a websocket connection and I'm able to display it in a volume node. The thing is, now I'm receiving DICOM data from a temporal series and I want to display it as a multiVolumeNode.
I suppose I need to fill a vtkMRMLMultiVolumeNode with several volumes and then create a vtkMRMLMultiVolumeDisplayNode, but I have not found any documentation, and the member functions available on those objects are not really what I expected.
Could anyone give me some guidelines on how to proceed?
Thanks for the information. After reading and re-reading this code (the MultiVolumeImporter one), there is something I still don't understand.
It seems to me that the point of the onImportButtonClicked function is to fill mvImageArray with the data of each frame (and then put it into some node). This is done at line 259:
```python
for frameId in range(nFrames):
  # TODO: check consistent size and orientation!
  frame = frames[frameId]
  frameImage = frame.GetImageData()
  frameImageArray = vtk.util.numpy_support.vtk_to_numpy(frameImage.GetPointData().GetScalars())
  mvImageArray.T[frameId] = frameImageArray
```
Once the loop completes, mvImageArray is not used anywhere else. How come?
The mvImageArray is actually a numpy view of the vtkImageData that stores the multivolume, so when the frame data is assigned into that view it becomes available as part of the MRML scene.
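For anyone following along, here is a minimal standalone sketch of that view behavior (pure VTK + numpy, no MRML nodes; all names are made up for illustration):

```python
import vtk
from vtk.util import numpy_support

# Tiny image with 3 scalar components per voxel, standing in for 3 frames.
mvImage = vtk.vtkImageData()
mvImage.SetDimensions(4, 4, 4)
mvImage.AllocateScalars(vtk.VTK_SHORT, 3)

scalars = mvImage.GetPointData().GetScalars()
mvImageArray = numpy_support.vtk_to_numpy(scalars)  # shape (64, 3); a view, not a copy
mvImageArray[:] = 0           # AllocateScalars does not zero the memory
mvImageArray.T[0] = 42        # write "frame 0" through the view

# No explicit copy-back step: the vtkImageData already holds the new values.
print(scalars.GetTuple3(0))   # (42.0, 0.0, 0.0)
```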
The Python VTK API code translates fairly directly to C++, and the numpy assignments need to become memcpy calls or similar, so you should be able to follow through the Python code to get what you need. A few lines up in the file you'll see the part that sets the extent and allocates the memory for the vtkImageData based on the number of frames (yes, the frames are 'components' of the same image data).
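That allocation step looks roughly like this (a paraphrased sketch, not a verbatim quote of the importer; `frameImage` and `nFrames` are the same names as in the loop above):

```python
mvImage = vtk.vtkImageData()
mvImage.SetExtent(frameImage.GetExtent())  # same geometry as a single frame
# One scalar component per time point: the frames become components.
mvImage.AllocateScalars(frameImage.GetScalarType(), nFrames)
```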
You can use the Sequences extension's recording capability to create time sequences in real time. We use it extensively for recording and replaying ultrasound images and tool positions in real time.
All you need to do is create a volume node, set it as a proxy node in the Sequence browser module, and start recording. Whenever you update the volume node, a new item is automatically added to the sequence. You can start/stop recording, replay, save to NRRD or MKV file, and so on. Compressed video streams are supported, too.
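Roughly, from the Slicer Python console (a minimal sketch; node names are made up and exact method signatures may vary slightly between Slicer versions):

```python
import slicer

# Proxy volume node that your websocket callback keeps updating:
volumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode", "LiveVolume")

# Sequence node that stores the recorded time points:
sequenceNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSequenceNode", "LiveVolumeSequence")

# Browser node that ties them together and handles recording/replay:
browserNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSequenceBrowserNode", "LiveBrowser")
browserNode.AddSynchronizedSequenceNode(sequenceNode)
browserNode.AddProxyNode(volumeNode, sequenceNode, False)  # reuse our node as the proxy
browserNode.SetRecording(sequenceNode, True)
browserNode.SetRecordingActive(True)

# Every update of volumeNode now appends an item to the sequence.
# Stop with: browserNode.SetRecordingActive(False)
```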
Note that Slicer can already receive images, transforms, models, points, strings, etc. through OpenIGTLink, a very simple socket-based protocol. If you choose this protocol then you don't need to implement anything: you can already receive, display, record, replay, and save real-time image and other data (using the SlicerOpenIGTLink, SlicerIGSIO, and SlicerIGT modules).
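For example, setting up a receiving connector takes only a few lines from the Python console (a sketch assuming the SlicerOpenIGTLink extension is installed; 18944 is the conventional OpenIGTLink port):

```python
import slicer

connector = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connector.SetTypeClient("localhost", 18944)  # connect to an OpenIGTLink server
connector.Start()
# Incoming IMAGE messages then show up automatically as volume nodes in the scene.
```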
The Plus toolkit can also connect to a wide range of devices and send data through the OpenIGTLink protocol to Slicer, so a solution may already be available for your needs. What kind of images do you receive, and from what device?
This got me a linker error: undefined reference to `vtkMRMLMultiVolumeNode::SetNumberOfFrames(int)'
I have #include <vtkMRMLMultiVolumeNode.h> set correctly, and the header is located in the Slicer superbuild tree where I think it should be. There is no error or warning on the include itself, so I'm quite stuck.
Are these functions somehow private? In the Python example I did not notice anything special about them.
MultiVolume has very limited capabilities, does not work for IGT applications, and will be deprecated within 1-2 years, so instead of trying to fix the MultiVolume-based solution, I would recommend using the Sequences infrastructure for your project.