I’m running into issues when I try to work with volumes that require a transform to fix spacing differences between images (although sometimes the “differences detected” in spacing are something like 0.026 vs. 0.026). Everything I try to do (crop volume, use the Segment Editor) either crashes the entire application or gives me a message that the application has run out of memory. (I don’t get this when I work with my old datasets.)
It seems that the data set is corrupted or has some unexpected properties. If you can provide an example data set (upload somewhere and post the link here) and instructions for reproducing the issue then we can investigate.
Thanks! I’ve uploaded it to a Google Drive folder:
Thank you, it was very useful to see the data that you work with. It looks like this is a micro-CT with the usual issues (very large image, 2036x2018x268 voxels, double voxel type, voxel values not calibrated to be in Hounsfield units, etc.), so it is useful to cast the voxels to an integer type, calibrate the intensity, crop & resample, etc.
@muratmaga is there a tutorial that explains how to prepare these micro-CT images to make them more palatable for Slicer?
That said, even without doing any of these tune-ups on the image, I was able to load, volume render, and segment the image without issues.
Since the image size is about 4.5GB, I would recommend having at least 45GB of physical memory (RAM) in your computer. Many computers don’t have this much memory, so most likely the error message that you got about running out of memory is accurate.
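As a rough sketch of where that kind of estimate comes from (the 10x factor is a common rule of thumb for editing headroom, not an exact figure, and the ~4.5GB quoted above may reflect on-disk compression or a 4-byte float representation; as raw 8-byte doubles the array itself is larger):

```python
# Rough memory-footprint estimate for a 3D volume.
# Assumption: comfortably editing a volume needs roughly 10x the raw
# voxel data size (rule of thumb, not a measured value).

def volume_bytes(dims, bytes_per_voxel):
    """Raw size of the voxel array in bytes."""
    n = 1
    for d in dims:
        n *= d
    return n * bytes_per_voxel

dims = (2036, 2018, 268)                   # voxels, from the dataset above
double_gb = volume_bytes(dims, 8) / 1e9    # 'double' voxels are 8 bytes each
uint8_gb = volume_bytes(dims, 1) / 1e9     # after casting to 8-bit

print(f"double: {double_gb:.1f} GB raw, ~{10 * double_gb:.0f} GB recommended RAM")
print(f"uint8:  {uint8_gb:.1f} GB raw, ~{10 * uint8_gb:.0f} GB recommended RAM")
```

This also shows why casting from double to an 8-bit type matters so much: it cuts the raw footprint by a factor of 8.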
Ideally, you should upgrade your system to have more physical memory, but as a workaround you can change your system settings to allocate more virtual memory. This avoids the out-of-memory error, but the software can slow down a lot. Alternatively, you could use a more powerful computer (with more memory) just to load the image and crop, downsample, and cast it to integer; this may reduce the image size enough that you can then edit it on less powerful computers.
We showed at some point how to use simple filters to rescale intensity ranges and possibly convert the data to 8 bit. However, this was sometimes interpreted as a strict rule for working with data in Slicer, with some poor results (such as loss of intensity resolution or detail). Still, for this dataset, double sounds like an unnecessarily large data type.
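For completeness, a minimal sketch of that rescale-and-cast step using NumPy (a plain linear rescale, not a specific Slicer module; as noted above, this is lossy, since the double-precision intensities get quantized to 256 levels):

```python
import numpy as np

def rescale_to_uint8(volume):
    """Linearly map the volume's intensity range to 0..255 and cast to 8-bit.

    Lossy: fine intensity detail in the original double-precision
    values is quantized away.
    """
    v = volume.astype(np.float64)
    vmin, vmax = v.min(), v.max()
    if vmax == vmin:  # constant image, avoid division by zero
        return np.zeros(v.shape, dtype=np.uint8)
    scaled = (v - vmin) / (vmax - vmin) * 255.0
    return scaled.round().astype(np.uint8)

# Example: a double-precision array shrinks to 1/8 of its memory footprint.
vol = np.random.default_rng(0).normal(size=(64, 64, 64))
small = rescale_to_uint8(vol)
print(vol.nbytes // small.nbytes)  # -> 8
```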
Nowadays, my suggestion for people who do not have heaps of RAM and want to segment a large dataset in Slicer is strategic use of ImageStacks: load only a certain region, segment that, save the result, and then start afresh with a different region. This dataset looks like DICOM, though, so ImageStacks won’t be much use, but the idea still applies. I tell people to take advantage of the fact that physical coordinates continue to line up even in cropped volumes, and to use partial (but full-detail) volumes for segmentation.
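The “physical coordinates still line up” point can be sketched like this: when a crop starts at voxel index i, the cropped image’s origin shifts by i * spacing, so the same anatomical point has the same physical position whether you address it in the full volume or in the crop. This is a simplified, axis-aligned sketch (a real IJK-to-world transform also includes a direction matrix); the function and the numbers are illustrative, not a Slicer API:

```python
from math import isclose

def voxel_to_physical(index, origin, spacing):
    """Map an (i, j, k) voxel index to a physical (x, y, z) position.

    Simplified to axis-aligned images (no direction matrix).
    """
    return tuple(o + i * s for i, o, s in zip(index, origin, spacing))

origin = (10.0, -5.0, 0.0)        # mm, hypothetical full-volume origin
spacing = (0.026, 0.026, 0.026)   # mm, like the micro-CT above

# Cropping shifts the origin by the crop offset times the spacing.
crop_start = (100, 200, 50)       # voxel offset of the cropped region
crop_origin = voxel_to_physical(crop_start, origin, spacing)

# The same anatomical point, addressed in either image:
p_full = voxel_to_physical((150, 250, 75), origin, spacing)
p_crop = voxel_to_physical((50, 50, 25), crop_origin, spacing)
print(all(isclose(a, b) for a, b in zip(p_full, p_crop)))  # -> True
```

This is why a segmentation made on a cropped (but full-resolution) volume lands in the right place when viewed alongside the original.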