Creating a 3D model from a large (90 GB) segmentation (NRRD volume)

So I have an NRRD volume about 90 GB in size. I can segment it fine, but converting the segmentation to a model via export throws an out-of-memory error, although I have plenty of memory left (tried on one machine with 512 GB of RAM and on another with 128 GB of RAM but a generous 512 GB of virtual memory configured).
Is this perhaps a known limitation of the VTK converter module?

The second approach I tried was to split the NRRD into two slightly overlapping halves, thinking that segmenting and converting to a model with the exact same workflow/settings would give me two geometries that I could flawlessly merge afterwards. Apparently the overlapping geometry parts exhibit a different topology though, so merging wouldn't work.

So it seems the VTK conversion isn't mathematically exact, i.e. it doesn't create the very same model twice from the same source volume? Is that the case? If so, why?

It probably takes more memory than you think to perform the conversion. There are many options, like changing the smoothing parameters, using surface nets instead of the default pipeline, etc. Try downsampling the volume to see what sizes your machine can handle (either with Crop Volume on the source data or by changing the segmentation resolution).
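For reference, the Crop Volume downsampling can also be scripted from the Python console. A minimal sketch, assuming a loaded volume node called `inputVolume` and a recent Slicer 5.x (node class names differ in older versions):

```python
import slicer

# Fit an ROI around the whole input volume, then resample at 2x coarser spacing.
roiNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsROINode", "CropROI")
cropParams = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropParams.SetInputVolumeNodeID(inputVolume.GetID())
cropParams.SetROINodeID(roiNode.GetID())
slicer.modules.cropvolume.logic().FitROIToInputVolume(cropParams)

cropParams.SetVoxelBased(False)          # interpolated cropping, so spacing can change
cropParams.SetSpacingScalingConst(2.0)   # 2x larger voxels -> ~8x fewer voxels

slicer.modules.cropvolume.logic().Apply(cropParams)
downsampledVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())
```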

This is probably due to the smoothing and decimation operations, which behave differently under different boundary conditions. If you turn off all smoothing, the halves should match. Then maybe you can merge and smooth the result, but it may not be a great approach.

How do you know you have plenty of memory left? Did you trace Slicer's memory consumption during the process? Our rule of thumb is 4-6 times more memory than your dataset size, and even your 512 GB computer is below that requirement.

Does your segmentation encompass the whole volume? If not, then instead of splitting into two parts, crop the volume to its minimum extent (you can use SplitVolume in SegmentEditorExtraEffects if you don't know how to do that) and then try converting the model.

Also, downsampling by a factor of 2 often has no significant effect on the model's geometry and works much faster.

Thank you dearly for the reply and suggestions.

How or where can I disable the smoothing in this case? Which settings are used: the 3D visualization ones or the ones for surface creation from labelmaps?

I saw that the available virtual memory wasn't fully used during the process; I tried multiple times and with different 3D Slicer builds too.

I am already at the lowest acceptable resolution, and the volume is cropped tightly to the necessary boundaries too.

My goal is to reduce the visible voxel stepping while still maintaining the tiniest possible details.

So is there some different approach to this, like splitting the segmentation instead of the volume and generating the geometry from there, ensuring a coherent topology?

Thanks again, both of you.

I think you can test this empirically very easily. Crop out a small region containing the smallest detail you want to preserve, then create differently downsampled volumes, extract models from them, and see whether the difference is noticeable to you.

These are controlled under the Show 3D button of the Segment Editor: Segment editor — 3D Slicer documentation
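If you prefer doing this from the Python console (e.g., for a scripted split-and-merge workflow), a minimal sketch that sets the closed-surface conversion parameter to zero and regenerates the surface, assuming `segmentationNode` is your segmentation node (the parameter name may vary slightly between Slicer versions):

```python
import slicer

seg = segmentationNode.GetSegmentation()
seg.SetConversionParameter("Smoothing factor", "0.0")  # 0 disables surface smoothing

# Drop the old closed-surface representation and rebuild it without smoothing.
closedSurfaceName = slicer.vtkSegmentationConverter.GetSegmentationClosedSurfaceRepresentationName()
seg.RemoveRepresentation(closedSurfaceName)
segmentationNode.CreateClosedSurfaceRepresentation()
```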

Did that already, I can deal with the slight stepping.

I managed to get the model out in two parts, it's just that the flawless merging didn't work as I hoped. I'll check again with all smoothing disabled and see if that keeps the topology intact despite the splitting.

If you want full control over model generation, you can export your segmentation as a labelmap and run the Model Maker module on it:

https://slicer.readthedocs.io/en/5.6/user_guide/modules/modelmaker.html
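If it helps, here is a rough sketch of that labelmap → Model Maker path scripted from the Python console, assuming `segmentationNode` and the source `referenceVolume` already exist in the scene (CLI parameter names should be double-checked against the module documentation):

```python
import slicer

# Export the segmentation to a labelmap volume, using the source volume geometry.
labelmapNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode", "SegLabelmap")
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
    segmentationNode, labelmapNode, referenceVolume)

# Model Maker writes its output models into a model hierarchy node.
outputHierarchy = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelHierarchyNode", "ModelMakerOutput")
parameters = {
    "InputVolume": labelmapNode.GetID(),
    "ModelSceneFile": outputHierarchy.GetID(),
    "GenerateAll": True,
    "Smooth": 0,      # no smoothing, so split pieces stay comparable
    "Decimate": 0.0,  # no decimation
}
slicer.cli.runSync(slicer.modules.modelmaker, None, parameters)
```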

We might have more suggestions if you send screenshots or more description of the source data.

If you are just thresholding, you might try the Grayscale Model Maker module. If there are parts you don't want to include, you can use masking in the Segment Editor.
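As a sketch of what that could look like from the Python console, assuming a loaded `volumeNode` (the threshold value is just a placeholder, and the parameter names should be checked against the Grayscale Model Maker documentation):

```python
import slicer

outputModel = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "GrayscaleModel")
parameters = {
    "InputVolume": volumeNode.GetID(),
    "OutputGeometry": outputModel.GetID(),
    "Threshold": 12750,   # placeholder isosurface threshold; use your own value
    "Smooth": 0,          # number of smoothing iterations (0 = off)
    "Decimate": 0.0,      # target reduction (0 = keep all triangles)
}
slicer.cli.runSync(slicer.modules.grayscalemodelmaker, None, parameters)
```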

I'll prepare a detailed explanation with screenshots.
Thanks for the help and effort!

It is expected to have some artifacts near the data boundary, as algorithms often need information in a small neighborhood around the processed position. Near the dataset’s boundary, algorithms extrapolate the data, but of course the extrapolation is not exactly the same as the real data.

A common solution is to use “ghost cells”: you extend your data a little bit beyond the region of interest (these extra data elements are the “ghost cells”), perform the processing, and then clip your results to the region of interest.

Your workflow could be: cut your dataset into 2-3 pieces that each overlap their neighbor a little, process each piece, then cut off the overlapping regions from each processing result (e.g., with the Dynamic Modeler module's clipping tools). If you need to remove seams between the pieces, you can merge the models (e.g., use an append polydata filter to put all pieces into one data object, and then a clean polydata filter to merge coincident points). All these steps can be Python scripted, which allows you to partition the data into many pieces (not just 2, but for example 3x3x3 = 27 pieces).
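A minimal sketch of that final merge step in Python, using plain VTK filters (assuming `pieces` is a list of vtkPolyData objects that have already been clipped back to their non-overlapping regions):

```python
import vtk

# Put all clipped pieces into a single polydata.
append = vtk.vtkAppendPolyData()
for piece in pieces:
    append.AddInputData(piece)

# Merge coincident points along the seams so the pieces become one connected surface.
clean = vtk.vtkCleanPolyData()
clean.SetInputConnection(append.GetOutputPort())
clean.Update()

mergedPolyData = clean.GetOutput()
```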

That's exactly what I did/tried: I deleted the excess overlap, but the remainders still wouldn't perfectly align in topology.

I'll prepare more info and let you know.

So this is what I get when I use the segmentation-to-3D-model workflow. There's lots of memory left during the process. (Yes, other processes can use more RAM just fine, so I don't assume it's a faulty memory issue.)

I tried the labelmap to Model Maker workflow; the error is different but it also seems memory related.

So far I haven't succeeded in creating just pieces of the whole model and merging them later on.

More screenshots and reports soon.

I don't think it is a faulty memory issue, but possibly a memory fragmentation problem. My understanding is that you can hit an OOM error for an array if there is no contiguous memory address space of the size of the array you are creating. So you might have 300 GB of RAM available, but if it is split into 100 GB chunks, you cannot create an array 101 GB in size. Or so my basic understanding goes.

This is something the operating system is supposed to handle for you, and I think a simple test is to reduce your data a little bit (say 10-15% along each axis) and retry. Since memory use scales with the product of the three dimensions, a 15% reduction per axis cuts the voxel count to about 0.85³ ≈ 61%, i.e. roughly 40% less memory. If it works, you are really hitting a memory problem. If not, then something else is going on.

To try that you would need the Resample Scalar/Vector/DWI Volume module (since Crop Volume only resamples by integer factors).

The “Crop volume” module uses the “Resample Scalar/Vector/DWI Volume” module under the hood and supports scaling by any factor (not only integer ones). The main advantage of the “Crop volume” module is that it performs the two most common ways of reducing image size (cutting off irrelevant parts of the image and lowering the resolution) in one step.

True. I keep forgetting about that.
@Tyler if you want to downsample by 15% you can put a scaling factor of 1.15 in Crop Volume. It is easier to use…

So this is the result of using the labelmap to Model Maker workflow for both halves with the exact same settings.
I assume the smoothing is causing the difference again.

I am now testing the downsampling again, although I fear I will lose surface definition…

…and Resample Scalar/Vector/DWI Volume terminated with an unknown exception…

Well, all I can say is: if you share your scene with the data and the specific steps you are doing, I can try to run it on a computer with more memory.

OK, I can replicate this crash on a system with 1 TB of RAM. While it is throwing a memory error, something else is probably causing it. I am not sure whether anyone will be able to debug this.

However, your other memory crashes probably are indeed out-of-memory errors.

  1. I took your data (~2.5 GB) and designated it as the master volume.
  2. Oversampled the segmentation geometry by a factor of 4 (during this step Slicer's memory usage reached as high as 300 GB).
  3. Created a segment using a threshold range of 12750-Max. During this, Slicer's memory usage transiently reached the 420-460 GB range. When the task finished, Slicer's memory usage was around 350 GB.
  4. I went to the Segmentations module and chose to export the model to the scene. It worked for a while with memory usage hovering between 350-450 GB, and then Slicer crashed with this error.

As you can see, at times you are already coming close to the physical limits of your system's memory. So resampling and other operations may indeed be crashing due to OOM. However, this crash during model export does seem like a real issue.

@lassoan

This is probably still an out-of-memory issue. Even with 1 TB of memory, some allocations may fail due to fragmentation. You can try an even bigger-memory machine (Google and Amazon offer instances with something like 11 TB of RAM), or you can just add a bunch of swap space.

If it still fails, I suggest building in debug mode and getting a stack trace of the failed allocation.