Loading a volume of several hundred GB

Hi all,

I feel like this topic should have come up over the years, but I couldn’t find anything (search on Discourse is awful…).

What options do we have in Slicer if we want to load an industrial CT volume that is several hundred GB in size? Typically these images are stored as TIFF stacks with some metadata. It is possible to pack that much memory into a computer, but what if we don’t have all that memory?

Thank you very much!

The ImageStacks module can handle such images: it can load the entire volume at a lower resolution so that the user can choose a region of interest and then load that ROI at full resolution. The main limitations are usability (it requires several clicks and some waiting each time the user wants to inspect another ROI) and preview image quality, which is not optimal because downsampling is done without a low-pass filter.
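For illustration, the kind of strided subsampling that produces such a preview could look like this (the path and stride are made up; note that skipping slices and pixels without a low-pass filter is exactly what causes the aliasing mentioned above):

```python
# Sketch: build a low-resolution preview of a TIFF stack by plain
# subsampling - every Nth slice, every Nth pixel in-plane.
import glob
import numpy as np
import tifffile  # pip install tifffile

slice_files = sorted(glob.glob("/data/ct_scan/slice_*.tif"))  # hypothetical path
stride = 8  # keep every 8th slice and every 8th pixel -> ~1/512 of the memory

preview_slices = []
for path in slice_files[::stride]:
    full_slice = tifffile.imread(path)  # read one 2D slice at a time
    preview_slices.append(full_slice[::stride, ::stride])  # in-plane subsampling

preview = np.stack(preview_slices)  # small enough to fit in memory
print(preview.shape, preview.dtype)
```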

These limitations could be addressed by using a multiresolution file format. The BigImage extension is a prototype that demonstrates how easy it is to implement panning/zooming of arbitrarily large images in Slicer using Zarr. It would not be that hard to implement a similar mechanism for volume rendering (request the resolution and image region from Zarr depending on what part of the volume is visible; all of this can be implemented in Python, without any Slicer core changes).
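A minimal sketch of that demand-driven access pattern with Zarr could look like this (the store path and the "level_N" group layout are assumptions for illustration; OME-Zarr pyramids use a similar scheme with one array per resolution level):

```python
# Sketch: pick a pyramid level based on the requested on-screen
# resolution, then read only the visible region - nothing else touches RAM.
import zarr

store = zarr.open("/data/ct_scan.zarr", mode="r")  # hypothetical path

def read_region(level, z0, z1, y0, y1, x0, x1):
    """Load one region of one resolution level."""
    arr = store[f"level_{level}"]  # e.g., level_0 = full res, level_3 = 1/8
    return arr[z0:z1, y0:y1, x0:x1]  # only the chunks covering this region are read

# Coarse level for an overview, fine level for a zoomed-in ROI:
overview = read_region(level=3, z0=0, z1=64, y0=0, y1=64, x0=0, x1=64)
roi = read_region(level=0, z0=1000, z1=1128, y0=2000, y1=2128, x0=2000, x1=2128)
```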

ITK has increasingly good support for reading/writing/processing such images, so some processing operations may be feasible, too.
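For example, a streamed ITK pipeline can process a volume in slabs so that the full image never has to fit in memory (file names are made up; streaming only helps when the file format's reader supports reading sub-regions, e.g., MetaImage or NRRD, which is another reason to convert away from plain TIFF stacks):

```python
# Sketch: the writer pulls the image through the median filter in pieces.
import itk

reader = itk.ImageFileReader.New(FileName="/data/ct_scan.mha")
median = itk.MedianImageFilter.New(reader, Radius=2)
writer = itk.ImageFileWriter.New(median, FileName="/data/ct_scan_filtered.mha")
writer.SetNumberOfStreamDivisions(50)  # process the volume in 50 slabs
writer.Update()
```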


Thank you very much Andras for the super quick answer!

What about the segmentation of such datasets?

The most common segmentation representations are labelmap and closed surface. Both can be rendered very quickly if stored in a multiresolution format. The challenging part is processing: how to quickly modify a multiresolution labelmap image and create a closed surface mesh from it.
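For reference, here is a minimal sketch of the closed-surface extraction step itself, using VTK's discrete flying edges on a small synthetic labelmap. For a multiresolution segmentation this step would have to run per region and per level, which is the hard part:

```python
# Sketch: extract a closed surface for one label value from a labelmap.
import vtk

# Tiny synthetic labelmap: a cube of label 1 inside a background of 0.
labelmap = vtk.vtkImageData()
labelmap.SetDimensions(64, 64, 64)
labelmap.AllocateScalars(vtk.VTK_UNSIGNED_CHAR, 1)
labelmap.GetPointData().GetScalars().Fill(0)
for z in range(20, 44):
    for y in range(20, 44):
        for x in range(20, 44):
            labelmap.SetScalarComponentFromDouble(x, y, z, 0, 1)

flying_edges = vtk.vtkDiscreteFlyingEdges3D()
flying_edges.SetInputData(labelmap)
flying_edges.SetValue(0, 1)  # extract the surface of label value 1
flying_edges.Update()
surface = flying_edges.GetOutput()  # closed surface as vtkPolyData
print(surface.GetNumberOfPoints(), "points")
```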

In 2D it is trivial, as you can simply store segmentations as polygons. But I am not sure there is a simple general solution in 3D. Maybe displaying the segmentation in 3D using volume rendering would simplify things, but then we would need to solve the problem of computing smooth gradients for labelmaps.

I meant more in the context of actual segmentation and memory consumption. Would you create the closed surface representation at the finest resolution the multiscale provides, or at the level of the data shown on the screen? If the user zooms in to the next level, is the labelmap automatically resampled? If so, what happens to the memory usage? Would the labelmap be written to disk and streamed (as opposed to being stored in memory as a single numpy array)?

Ultimately, all modifications would be saved at the ground-truth level. For performance and design-simplification reasons, it may be easier to update the segmentation at the current level and write it to the ground-truth level in a background processing thread, as in the sketch below.
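A rough sketch of that idea (the Zarr layout, scale factor, coordinates, and paths are all made-up assumptions):

```python
# Sketch: the UI thread edits a small working copy at the displayed
# resolution, while a worker thread upsamples the edit and writes it
# into the full-resolution (ground-truth) labelmap.
import threading
import numpy as np
import zarr

store = zarr.open("/data/segmentation.zarr", mode="a")  # hypothetical path
SCALE = 4  # current display level is 1/4 of the ground-truth resolution

def commit_edit(level_patch, z0, y0, x0):
    """Upsample an edited patch and write it at ground-truth-level
    voxel coordinates (z0, y0, x0)."""
    full_patch = level_patch.repeat(SCALE, 0).repeat(SCALE, 1).repeat(SCALE, 2)
    dz, dy, dx = full_patch.shape
    store["level_0"][z0:z0 + dz, y0:y0 + dy, x0:x0 + dx] = full_patch

# Edit a small patch at the displayed resolution (instant for the user) ...
patch = np.ones((16, 16, 16), dtype=np.uint8)
# ... and push it to the ground-truth level without blocking the UI.
worker = threading.Thread(target=commit_edit, args=(patch, 256, 512, 512))
worker.start()
```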

Only the displayed regions would be in memory, and only at the relevant resolution. This would take care of memory usage.
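For example, with lazy chunked arrays only the chunks overlapping the requested region are ever read into memory (the path and level name are assumptions):

```python
# Sketch: opening the store costs almost nothing; data is read on demand.
import dask.array as da

volume = da.from_zarr("/data/ct_scan.zarr", component="level_0")  # lazy, ~0 RAM
axial_slab = volume[1200:1210, :, :]  # still lazy, nothing loaded yet
slab = axial_slab.compute()           # only the overlapping chunks are read
print(slab.shape, slab.nbytes / 1e6, "MB in memory")
```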

Processing would need to be done in the background and may require streaming. Both ITK and VTK filters support streaming, but it is only implemented for some of the filters. Some algorithms would be difficult to adapt to streaming (e.g., region growing).


I assume lots of this is not yet in place. It is possible that once the first phase of the project in which I encountered this challenge is finished, there will be a need for a more integrated solution using multiresolution images. If so, I’ll let you know @muratmaga; maybe with joint efforts it could be done faster. My guess is that we’ll know this in a few months.