We recently had a new detector installed in our micro-CT machine, which has resulted in significantly larger datasets (from 0.5-1.0 GB up to 10 GB). While this is really nice from a resolution and dynamic range point of view, working with big datasets has its challenges.
Some studies need the full 16-bit dynamic range, and for those we just use a beefy computer, crop off areas to minimize dimensions, and so on. But there are a number of studies where you only care about simple thresholds - bone vs. matrix, air vs. tissue, etc. For those datasets, 8-bit data would be a big space saver and would likely not affect the results.
I’ve been looking at using some of the unu utilities (quantize, histo, etc.) with the idea of establishing the range of absorbance values that best preserves the dynamic range when converting to 8 bit to reduce dataset size. Quantization and binning like this seems like a well-traveled road. Can anyone provide pointers on how they’ve done similar tasks and added them to their workflow?
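For what it's worth, here is a minimal sketch of the kind of percentile-based quantization described above (roughly what `unu histo` + `unu quantize -b 8` would do), written with SimpleITK in Python. The file names and the 0.5/99.5 percentile cutoffs are placeholders, not anyone's established workflow:

```python
import numpy as np
import SimpleITK as sitk

img = sitk.ReadImage("scan_16bit.nrrd")  # placeholder file name

# Pick the window from robust percentiles of the intensity histogram so a few
# extreme voxels don't eat up the 8-bit range; 0.5/99.5 are arbitrary cutoffs.
arr = sitk.GetArrayViewFromImage(img)
lo, hi = np.percentile(arr, [0.5, 99.5])

# Map [lo, hi] -> [0, 255], clamping values outside the window, then cast to 8-bit.
img8 = sitk.IntensityWindowing(img, float(lo), float(hi), 0.0, 255.0)
img8 = sitk.Cast(img8, sitk.sitkUInt8)

sitk.WriteImage(img8, "scan_8bit.nrrd", True)  # True = write compressed
```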
The “Simple Filters” module offers a number of intensity rescaling filters, which can be used for compressing the dynamic range of the image. Try RescaleIntensityImageFilter, IntensityWindowingImageFilter, ShiftScaleImageFilter, etc.
You can get the currently displayed window/level (or min/max) values from the Volumes module, Display section.
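If you prefer to grab those values programmatically, something like this should work in Slicer's Python console (the volume name "MicroCT" is a placeholder):

```python
# Read the window/level currently shown in the Volumes module, Display section.
volumeNode = slicer.util.getNode("MicroCT")
displayNode = volumeNode.GetDisplayNode()
window = displayNode.GetWindow()
level = displayNode.GetLevel()

# The displayed range corresponds to these min/max intensities,
# which can be fed into IntensityWindowingImageFilter.
windowMin = level - window / 2.0
windowMax = level + window / 2.0
print(windowMin, windowMax)
```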
Going from 16 bits to 8 bits is only a 2x memory reduction, though, so it might not be worth the trouble. If you increase spacing along each axis by 3x, you get a 27x reduction.
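As a sketch of that kind of downsampling (again with SimpleITK, placeholder file names), BinShrink averages each 3x3x3 block into one voxel, giving roughly 27x fewer voxels and some noise reduction as a side effect:

```python
import SimpleITK as sitk

img = sitk.ReadImage("scan_16bit.nrrd")      # placeholder file name
small = sitk.BinShrink(img, [3, 3, 3])       # 3x coarser spacing along each axis
sitk.WriteImage(small, "scan_16bit_3x.nrrd", True)
```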
You can use the “Cast scalar volume” module (or CastImageFilter in the Simple Filters module) to change the scalar type of a volume (e.g., 16-bit short to 8-bit unsigned char).
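One caveat worth noting, shown in this small sketch: a plain cast changes only the scalar type and does not rescale, so values outside 0-255 will overflow; window or rescale the data into the 8-bit range first, as above (file names are placeholders):

```python
import SimpleITK as sitk

img = sitk.ReadImage("scan_windowed.nrrd")   # placeholder: already mapped into 0-255
img8 = sitk.Cast(img, sitk.sitkUInt8)        # type change only, no rescaling
sitk.WriteImage(img8, "scan_8bit.nrrd", True)
```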