Segmentations can get big!

Hi all (especially @lassoan and @cpinter),

I am working with a 2D segmentation. After saving it, I use Python to process the NRRD files. My master volume node is 28320 x 15232 voxels: a scanned histological slice at 2 µm resolution. We have 24 segments extending over pretty much the whole slice, so my NRRD array is 26013 x 13644 x 24, which takes about 8 GB of RAM, and my computer becomes unusable when I try to open the file.
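For reference, the 8 GB figure follows directly from storing one full-resolution binary layer per segment, assuming 1 byte per voxel (unsigned char, as in a typical NRRD export):

```python
# Rough memory footprint of the per-segment binary labelmap representation,
# assuming 1 byte per voxel (unsigned char), as in a typical NRRD export.
width, height, segments = 26013, 13644, 24
total_bytes = width * height * segments  # one full-resolution layer per segment
print(f"{total_bytes / 1024**3:.1f} GiB")  # → 7.9 GiB
```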

Some possible solutions I can think of:

  1. Start using downsampled reference images
  2. Downsample my segmentation (how? Exporting to a new segmentation which has a smaller reference volume? Downsampling each segment individually?)

It would be nice to be able to have multiple segments in the same volume (or slice, in this case) if they don’t overlap. A typical example is a FreeSurfer segmentation. There are 192 structures that don’t overlap. A segmentation in a 142 x 140 x 177 volume uses around 650 MB, while a label map with the same information would use around 3 MB.
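The gap follows from simple voxel arithmetic, assuming 1 byte per voxel: 192 labels still fit in a single uint8 labelmap volume, whereas each segment gets its own full binary layer:

```python
# Per-segment binary layers vs. a single labelmap, at 1 byte per voxel.
voxels = 142 * 140 * 177
per_segment_bytes = voxels * 192  # one binary layer per FreeSurfer structure
labelmap_bytes = voxels           # 192 labels still fit in one uint8 volume
print(f"{per_segment_bytes / 1024**2:.0f} MiB vs {labelmap_bytes / 1024**2:.1f} MiB")
# → 644 MiB vs 3.4 MiB (uncompressed; compression shrinks both further)
```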

Do you have any suggestions?

Also, in the FreeSurfer case, if I want to see only one segment I think I have to hide the other 191 one by one. Is there a way to hide all of them at the same time?


Having many segments covering the whole volume is a worst-case scenario that will certainly be slower than a simple labelmap. It would be possible to define groups of segments that are not allowed to overlap and store each group in a labelmap, but there are many higher-priority tasks, so this would probably not be available for at least 1-2 years.

Is it only an issue when loading/saving the segmentation?
How much RAM do you have?
How much virtual memory have you configured?
What operating system do you use?

This is the most common approach. In fact, it is always recommended to crop and/or resample your volume to a reasonable size, using the Crop Volume module, before you start segmentation.

The resolution of all segments in a segmentation node is the same fixed value. It is determined from the master volume that is selected first, or you can set it manually (using SetReferenceImageGeometryParameterFromVolumeNode).

Collapsing all segments into a 3D labelmap only for saving could be feasible, it would not be that much work.
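As an illustration of what such a collapse would look like, here is a minimal numpy sketch (not Slicer's actual implementation); note it is lossy where layers overlap, since later segments simply win:

```python
import numpy as np

def collapse_to_labelmap(layers):
    """Collapse a stack of binary segment layers (segments, H, W) into a
    single labelmap. Overlaps are resolved by letting later segments win."""
    labelmap = np.zeros(layers.shape[1:], dtype=np.uint8)
    for label, mask in enumerate(layers, start=1):
        labelmap[mask > 0] = label
    return labelmap

layers = np.zeros((3, 4, 4), dtype=np.uint8)
layers[0, :2, :2] = 1
layers[1, 2:, 2:] = 1
layers[2, 1:3, 1:3] = 1   # overlaps both earlier segments; wins in the labelmap
print(collapse_to_labelmap(layers))
```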

If you export the segmentation to a labelmap (using the Segmentations module) and save it, how big is the saved file?
Is this FreeSurfer segmentation example available somewhere?
Is the segmentation volume compressed?

Finally, an easy question! Right-click the segment in the segment list or in the Data module / Subject hierarchy tab and choose “Show only selected segments”.

If you’re reading this in 2019, and in case this can help: for my data I would typically divide my segments into left, right and others. That would make my segmentation use 1 GB of RAM instead of 8 GB, and the FreeSurfer segmentation 10 MB instead of 650 MB.
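The “left / right / others” idea can be sketched as a greedy packing of segments into shared non-overlapping layers. This is a hypothetical illustration, not a Slicer feature:

```python
import numpy as np

def group_nonoverlapping(masks):
    """Greedily pack binary segment masks into groups (shared layers) so
    that no two masks within a group overlap. A sketch of the
    'left / right / others' idea; a real implementation would pick groups
    anatomically or optimize packing."""
    groups = []  # each group: (occupancy mask, list of segment indices)
    for i, mask in enumerate(masks):
        for occupied, members in groups:
            if not np.any(occupied & mask):  # fits without overlap
                occupied |= mask
                members.append(i)
                break
        else:
            groups.append((mask.copy(), [i]))
    return groups

left = np.zeros((4, 4), dtype=bool);  left[:, :2] = True
right = np.zeros((4, 4), dtype=bool); right[:, 2:] = True
other = np.zeros((4, 4), dtype=bool); other[:, 1:3] = True  # overlaps both
groups = group_nonoverlapping([left, right, other])
print(len(groups))  # → 2 (left+right share a layer, other gets its own)
```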

Developer (me):
I can’t load it in Slicer, nor outside it using Python (or at least it hangs for a long time).
I have 16 GB of RAM.
I can’t say I know what virtual memory is, sorry.
Ubuntu 16.04.2 LTS

User (colleague):
She says it takes some time to load and save, but it’s acceptable.
32 GB

Windows 7

I’ll use downsampled images as a reference from now on. My question is: how would you downsample an already existing segmentation with overlapping segments?

I meant internally, handled by the Segmentations logic. Maybe in 1-2 years :slightly_smiling_face:

The FreeSurfer label map uses 500 kB on disk; it’s compressed. I haven’t tried, but I guess a compressed label map would use about the same. This is the data.

If a question is too easy to answer, I’ve probably not searched well enough! What if I want to show all of the segments again after that operation?

Thanks Andras!

In Segmentations module: select all segments in the segment list, right-click, “Show only selected segments”.

On Linux, virtual memory is referred to as “swap”.
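For completeness, adding a swap file on Ubuntu looks roughly like this (the size and path are only examples; commands require root):

```shell
# Create and enable a 64 GB swap file (size and path are placeholders).
sudo fallocate -l 64G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h   # verify the new swap shows up
```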

Create a new segmentation with the desired resolution (set the master volume, maybe also create a segment in it) and then import all segments from the other segmentation.

Do you only have problem when loading/saving the segmentation?

For histology, the usual approach is to resample and tile at multiple zoom levels. Then you can show the whole slide overview and lazy-load only a small subset of the high resolution tiles as the user zooms in and moves around (either from separate tile files, or by mmap’ing the whole file and pulling offsets as needed). Segmentations and annotations are saved as vectors. Very similar to what map software does.
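The multi-resolution part of that approach can be sketched in a few lines of plain numpy, halving the image by 2x2 block averaging at each level (real virtual-slide formats additionally chop each level into fixed-size tiles for lazy loading):

```python
import numpy as np

def build_pyramid(image, levels):
    """Build a multi-resolution pyramid by 2x2 block averaging per level.
    A viewer would display a coarse level and lazy-load finer tiles on zoom."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2  # even crop
        halved = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(halved)
    return pyramid

levels = build_pyramid(np.ones((512, 512)), 4)
print([lvl.shape for lvl in levels])
# → [(512, 512), (256, 256), (128, 128), (64, 64)]
```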

@Fernando if you haven’t looked at other software already, the keyword is “virtual slide”, and there are several tile-creation utilities and viewers around including open source, e.g. OpenSeaDragon and several in ImageJ/FIJI. I think Kitware has (or had) one too, but I don’t remember the name. In principle lazy loading and multi-level view could be done in Slicer, but it would be a fair amount of work.

At some point I think Brad Lowekamp implemented a simple CLI module that took an ROI and a filename as inputs and returned the requested part of the image as its output. It made it really easy to retrieve a certain portion of a large image. I think it used the MetaIO format, which has a nice reader that can quickly extract a small portion of a huge volume.
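The same region-of-interest trick can be illustrated with a memory map over a raw file: only the pages covering the requested block are actually read, so a huge file never has to fit in RAM. This is a generic numpy sketch, not the MetaIO reader itself:

```python
import numpy as np
import os
import tempfile

# Write a "huge" raw image to disk (kept small here, for illustration).
shape = (1000, 1000)
path = os.path.join(tempfile.mkdtemp(), "slide.raw")
np.arange(shape[0] * shape[1], dtype=np.uint32).reshape(shape).tofile(path)

# Memory-map the file and pull out only a small ROI; the rest of the
# file is never loaded into RAM.
mapped = np.memmap(path, dtype=np.uint32, mode="r", shape=shape)
roi = np.array(mapped[100:110, 200:220])  # copy just the requested block
print(roi.shape)  # → (10, 20)
```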



➜  ~ free
              total        used        free      shared  buff/cache   available
Mem:       16353852    10332076     1450964      475936     4570812     5110556
Swap:      16698364         440    16697924

I can’t load it into Slicer, and I can’t open it with Python’s nrrd library.

@ihnorton that’s exactly what I’m doing: resampling and tiles. But until now, the segmentation files at the reference resolution had never gotten this big. Of course the ideal solution would be a virtual slide, as in NDP.view2, but I haven’t looked into it yet. I’ll check the tools you, @lassoan and @fedorov have mentioned.

Thank you all for your answers!

It seems that you have only configured 16 GB of virtual memory. I usually set the virtual memory size to 5-10x the size of the data set I’m processing.

I added some swap but still can’t get Slicer to open the segmentation:

➜  ~ free -h 
              total        used        free      shared  buff/cache   available
Mem:            15G        6,2G        6,5G        127M        2,9G        8,9G
Swap:           65G          0B         65G

This didn’t work. I create the segmentation, set my small volume as the reference, and copy the segments over (same result if the segmentation already had a segment). The segments stay at high resolution: I have to overwrite one of them before it gets downsampled. Is there any way to do this programmatically? That might be easier. I tried SetReferenceImageGeometryParameterFromVolumeNode(), but that didn’t modify the segments.

Resampling does not happen immediately when you import the labelmap (since resampling may be a lossy operation or may take a long time to compute); it is only performed when needed (e.g., when the segment is modified). Maybe if you use the Copy operator of the Logical operators effect the result is resampled. Or you can export the segment as a labelmap, use one of the Resample modules to change the resolution, and import it back into the segmentation node. We’ll make this easier in the future (see
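The export-resample-reimport route boils down to a nearest-neighbor resample of the label values. A minimal numpy sketch of the core idea (in Slicer you would use one of the Resample modules with nearest-neighbor interpolation so labels are not blended into new bogus values):

```python
import numpy as np

def resample_labelmap(labels, factor):
    """Downsample a 2D labelmap by an integer factor with nearest-neighbor
    sampling. Labels must never be averaged, or spurious labels appear."""
    return labels[::factor, ::factor]

labels = np.zeros((8, 8), dtype=np.uint8)
labels[:4, :4] = 1
labels[4:, 4:] = 2
small = resample_labelmap(labels, 2)
print(small.shape)  # → (4, 4)
```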

I’m not sure how to use the Copy operator for this.

When I tried the second approach, I got an empty label map. I’ve reported the issue:

This seems to have been addressed recently: New feature: Shared labelmap segmentations.
