When shrinking images, you need to be concerned about aliasing artifacts. Slicer's CropVolume module does not apply an anti-aliasing filter, and you can see the consequences in the image below. The source image is a 512×512×512 zone plate created with the Python code at the bottom of this post. The left panel shows the original image. The upper right shows Slicer's CropVolume with a 4× scale factor and linear interpolation. The lower right shows MRIcroGL's Import/Tools/Resize with the same scale factor and interpolation, but with an anti-aliasing filter applied. Note that MRIcroGL also allows you to reduce precision, e.g. from 32 bits per voxel to 8. Reducing precision works well for some modalities (e.g. many MRI scans, or a CT scan where you are only interested in bone), but not for others (e.g. a CT scan where you want to preserve both soft tissue and bone).
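If you are scripting this yourself, the standard remedy is to low-pass filter the volume before resampling, so frequencies above the new Nyquist limit are suppressed. Here is a minimal sketch using SciPy and nibabel; the sigma heuristic and file names are my assumptions, not the internals of either tool:

import nibabel as nib
from scipy.ndimage import gaussian_filter, zoom

scale = 0.25  # shrink by 4x in each dimension, matching the example above
nii = nib.load('zoneplate.nii')
img = nii.get_fdata()
sigma = 0.5 / scale  # heuristic anti-alias blur, roughly the new Nyquist limit
small = zoom(gaussian_filter(img, sigma), scale, order=1)  # blur, then linear resample
affine = nii.affine.copy()
affine[:3, :3] /= scale  # each voxel now covers 4x the distance in world space
nib.save(nib.Nifti1Image(small, affine), 'zoneplate_small.nii')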
Another way to create images that require less disk space is to remove haze. MRI and CT scans tend to exhibit a little random noise in air. If you identify the air voxels and set them all to the same intensity as the darkest value in the image, the data will compress much more effectively.
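To see why, here is a toy demonstration using gzip as a stand-in for the compressor inside NRRD/NIfTI writers (synthetic data; exact sizes will vary by compressor): replacing noisy air with a constant creates long runs that deflate-style compressors encode very compactly.

import gzip
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(0, 2, (64, 64, 64)).astype(np.float32)  # pure noise, standing in for air
noisy_size = len(gzip.compress(img.tobytes()))
img[img < np.median(img)] = img.min()  # crudely force the 'air' half to one constant value
clean_size = len(gzip.compress(img.tobytes()))
print(noisy_size, clean_size)  # the cleaned bytes compress far better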
The way I do this is to use a multi-level implementation of Otsu's method to detect the darkest voxels. I then erode this mask by one voxel to better preserve partial volume effects and gradient generation.
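In script form, the approach looks roughly like this. This is a sketch of the idea, not MRIcroGL's actual implementation; I am assuming scikit-image's threshold_multiotsu and SciPy's binary_erosion as stand-ins, and the file names are hypothetical:

import nibabel as nib
from scipy.ndimage import binary_erosion
from skimage.filters import threshold_multiotsu

nii = nib.load('input.nii')  # hypothetical input file
img = nii.get_fdata()
# multi-level Otsu; voxels below the lowest threshold are assumed to be air
thresholds = threshold_multiotsu(img, classes=3)
air = img < thresholds[0]
# erode the air mask by one voxel to spare partial-volume edges and gradients
air = binary_erosion(air)
img[air] = img.min()  # set all air to the darkest intensity in the image
nib.save(nib.Nifti1Image(img, nii.affine, nii.header), 'dehazed.nii')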
You can try this out with MRIcroGL: drag and drop your NRRD file to open it, then choose the View/RemoveHaze or View/RemoveHazeWithOptions menu item. If you are happy with the result, choose File/SaveVolume. The effectiveness of this method depends on the ratio of air to object in your image. If you want to combine resizing with haze removal, I would remove haze as the last step. The one issue with removing haze is that it can disrupt intensity-inhomogeneity correction and the mixture-of-Gaussians segmentation models employed by tools such as SPM. However, given your aggressive size reduction, I do not think those are your intended applications.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Generate a 3D zone-plate test volume for evaluating resampling filters.
# Writes 'distance.nii' (a radial distance map) and 'zoneplate.nii'.
import math
import numpy as np
import nibabel as nib
nvox = 256  # edge length in voxels; the figure above was generated at 512
img = np.zeros((nvox, nvox, nvox))
center = (img.shape[0] / 2., img.shape[1] / 2., img.shape[2] / 2.)
grid_x, grid_y, grid_z = np.mgrid[0:img.shape[0], 0:img.shape[1], 0:img.shape[2]]
grid_x = grid_x - center[0]
grid_y = grid_y - center[1]
grid_z = grid_z - center[2]
# Euclidean distance of every voxel from the volume center
img = np.sqrt(np.square(grid_x) + np.square(grid_y) + np.square(grid_z))
header = nib.Nifti1Header()
# affine places the origin at the volume center
affine = np.array([[1, 0, 0, -center[0]],
                   [0, 1, 0, -center[1]],
                   [0, 0, 1, -center[2]],
                   [0, 0, 0, 1]])
nii = nib.Nifti1Image(img, affine, header)
nib.save(nii, 'distance.nii')
km = 0.7 * math.pi  # peak spatial frequency (0.7 x Nyquist), reached at radius rm
rm = max(center)    # radius at which the pattern fades out
w = rm / 10.0       # width of the fade-out band
# zone plate: a sinusoid whose frequency increases with distance from the center,
# attenuated by a smooth tanh window near the volume edge (vectorized, no voxel loop)
term1 = np.sin((km * np.square(img)) / (2 * rm))
term2 = 0.5 * np.tanh((rm - img) / w) + 0.5
img = term1 * term2
nii = nib.Nifti1Image(img, affine, header)
nib.save(nii, 'zoneplate.nii')