Segmenting outside the borders of the main volume

Hello, I want to modify my segmentation by adding some details that lie outside the borders of my main volume. However, it seems that the Segment Editor does not draw anything outside the borders of my master volume. Can anyone help me solve this problem?
Thanks in advance.

See this documentation section for instructions: Segment editor — 3D Slicer documentation


Thank you very much @lassoan for your help. I used the Crop Volume module to extend the borders of my current volume by enlarging the ROI. The new volume was generated successfully. The main volume file was 400 MB and the new volume is 600 MB, which I think is acceptable. However, when I try to paint in the Segment Editor, I get the following error:
[screenshot of the error message]

This is really strange to me, because I have about 100 GB of RAM, and since the two volumes are close in size I would not expect a drastic change in computation time or memory use. Could you please share your thoughts on this?
Thanks in advance

What was the size and scalar type of the volume that the Crop Volume module created? Could you copy-paste a screenshot of the Volumes module's Volume Information section?

Thanks for your response @lassoan. Here is the information of my new volume:
[screenshot of the Volume Information section]

I found that 40 GB of my RAM is occupied just by loading the edited volume.
Is there any way to solve this problem?
I do not want to reduce the quality and resolution of my current segmentation; that is an important factor for me.

The size of this segmentation is 40 GB, so you will not get far with just 100 GB of RAM: the master volume will be resampled to match the same geometry (there goes another 40 GB), and while you are editing, previous states are saved to allow undo (which, depending on the size of the segments, may take up to 40 GB each). If you want to be sure that you don't run out of memory when processing a data set, I would recommend having at least 400 GB of memory space. Ideally all of it would be physical RAM (for full-speed access), but if that is not feasible you can allocate more swap space. Also make sure you use the latest Slicer Stable or Preview Release, as they contain memory usage optimizations for multiple segments (non-overlapping segments share a single labelmap).
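To see where a number like 40 GB comes from, here is a minimal sketch of the arithmetic; the dimensions and scalar type below are hypothetical, chosen only to show how voxel count and bytes per voxel determine the in-memory size (read your actual values in the Volumes module's Volume Information section):

```python
import numpy as np

# Hypothetical volume geometry (not from the original post): adjust to your own data
dims = (2715, 2715, 2715)                      # voxel counts along IJK
bytes_per_voxel = np.dtype(np.int16).itemsize  # e.g. "short" scalars = 2 bytes

size_gb = np.prod(dims, dtype=np.int64) * bytes_per_voxel / 1024**3
print(f"Uncompressed in-memory size: {size_gb:.1f} GB")
# Each additional full-resolution copy (resampled master volume, each undo state
# of a segmentation layer) costs roughly this much again.
```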

However, to keep both speed and memory usage under control, it is much better to reduce the size of the segmentation by increasing the Spacing scale in the Crop Volume module. A spacing scale of 2 halves the resolution along each of the three axes, so it decreases memory usage by a factor of 8.
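If you prefer to do this from Slicer's Python console, a minimal sketch along the lines of the script repository examples could look like the following; the parameter-node method names are what I believe the current Crop Volume module exposes, so please double-check them against your Slicer version:

```python
# Assumes a scalar volume node `inputVolume` and a markups ROI node `roiNode`
# already exist in the scene (e.g. the ROI you enlarged beyond the original borders).
cropParams = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropParams.SetInputVolumeNodeID(inputVolume.GetID())
cropParams.SetROINodeID(roiNode.GetID())
cropParams.SetSpacingScalingConst(2.0)  # 2x coarser spacing -> ~8x less memory
slicer.modules.cropvolume.logic().Apply(cropParams)
croppedVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())
```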

Thank you very much for your response @lassoan.


I just had an idea. All segments could share a single labelmap. This can be done if each segment's HU value is a prime number and the HU value of overlapping segments is the product of the involved segments' HU values. A lookup table would be updated whenever a new segment is created, so that for visualization we can decode whether a voxel's value means overlap with other segments. So this would be computationally cheap.

The idea came up from Gödel numbering.

Do you think this is feasible @lassoan?

EDIT: here is an example

Suppose there are 3 segments, each with a prime-number HU value, and that they overlap:
segment1 → HU: 2
segment2 → HU: 3
segment3 → HU: 5

So the labelmap would store those HU values where there is no overlap. Where there is overlap, it would store:
segment1 AND segment2 → HU: 2x3 = 6
segment1 AND segment3 → HU: 2x5 = 10
segment2 AND segment3 → HU: 3x5 = 15
segment1 AND segment2 AND segment3 → HU: 2x3x5 = 30

The prime factorization of the overlapping segments' HU values can be saved in a lookup table, so that for a given HU value we know which segments overlap there.
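To make the idea concrete, here is a minimal, hypothetical sketch of the encoding and decoding in plain NumPy (the names and structure are mine, purely for illustration):

```python
import numpy as np

# Hypothetical prime label per segment (as in the example above)
primes = {"segment1": 2, "segment2": 3, "segment3": 5}

def encode(masks):
    """Combine per-segment binary masks into one labelmap by multiplying primes."""
    shape = next(iter(masks.values())).shape
    encoded = np.ones(shape, dtype=np.int64)
    for name, mask in masks.items():
        encoded[mask] *= primes[name]
    encoded[encoded == 1] = 0  # background
    return encoded

def decode(encoded, name):
    """A voxel belongs to a segment if its value is divisible by the segment's prime."""
    p = primes[name]
    return (encoded != 0) & (encoded % p == 0)

# Tiny usage example: two overlapping one-row masks
m1 = np.array([[1, 1, 0]], dtype=bool)
m2 = np.array([[0, 1, 1]], dtype=bool)
labelmap = encode({"segment1": m1, "segment2": m2})
print(labelmap)                      # [[2 6 3]]
print(decode(labelmap, "segment1"))  # [[ True  True False]]
```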

This already works like this. Segments are organized into layers, and within a layer we can store hundreds of non-overlapping segments. If a segment starts overlapping with others, it is moved into a new layer.

The prime factorization is a cool idea, but there would be some practical issues: performance could suffer, and you would still need to keep a simple labelmap in memory so that you can process and visualize the data, because none of the existing algorithms could work with a volume encoded this way.

I understand that algorithms should be adapted to work with this encoding.

But why would performance suffer? Isn't searching for a factorization in a lookup table much faster than executing a factorization algorithm? Although I think this encoding may not be suitable for real-time usage of the data, such as visualization.

Would this encoding at least be useful for storing the labelmap on disk (to save space)?

Accessing a segment in a labelmap stored in layers (the current implementation) is very fast because for each voxel you only need to perform a single integer comparison.

If you store voxels using prime factorization, then for each voxel you need to do a table lookup to get the list of labels at that position and then search for a label value in that list. I would guess that this would take at least an order of magnitude longer than a single comparison operation.
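To make the two access patterns concrete, here is a small hypothetical sketch (not Slicer code): the layered labelmap answers "does this voxel belong to segment X?" with one comparison, while the factorized labelmap first has to decode the stored value through a lookup table and then search the resulting label list:

```python
import numpy as np

segment_label = 3

# Layered labelmap (current approach): each layer stores simple label values,
# so membership is a single integer comparison per voxel (vectorizes trivially).
layer = np.array([0, 3, 1, 3, 0], dtype=np.int16)
mask_layered = (layer == segment_label)   # [False, True, False, True, False]

# Prime-factorized labelmap: each voxel stores a product of primes, so membership
# needs a table lookup to decode the value and then a search in the decoded list.
decode_table = {2: [2], 3: [3], 5: [5], 6: [2, 3], 10: [2, 5], 15: [3, 5], 30: [2, 3, 5]}
encoded = [0, 6, 2, 15, 0]
mask_factorized = [segment_label in decode_table.get(v, []) for v in encoded]
# [False, True, False, True, False]
```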

For storage efficiency, we use zlib to reduce the size of the 4D (3D+layers) labelmap. It uses Huffman coding, which may result in better compression than a basic factorization, and since the zlib implementation is quite well optimized, it is probably faster than a custom compression.
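For example, compressing a labelmap array with zlib from Python takes only a few lines; this sketch (my own, with a synthetic array) shows the kind of on-disk saving the existing path already gives you:

```python
import zlib
import numpy as np

# Synthetic labelmap for illustration: mostly background with two box-shaped segments
labelmap = np.zeros((256, 256, 256), dtype=np.int16)
labelmap[100:150, 100:150, 100:150] = 1
labelmap[120:180, 120:180, 120:180] = 2

raw = labelmap.tobytes()
compressed = zlib.compress(raw)
print(f"raw: {len(raw) / 1024**2:.1f} MB, "
      f"compressed: {len(compressed) / 1024**2:.2f} MB, "
      f"ratio: {len(raw) / len(compressed):.0f}x")
```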

But even if it turns out that you can achieve a higher compression ratio on some data sets with some special compression method, it would still be unlikely to become popular, because saving a little disk space would rarely justify the extra complexity and compatibility issues. Optimized zlib is accessible on all platforms and languages, while a new algorithm would take a lot of effort to develop, optimize, and make widely available. Even incredibly powerful companies have a hard time introducing new compression schemes (see how Google struggles to make WebP popular).

Thank you Andras for such a great answer