Volume display - Interpolate turned off by default in newest stable?

Hey all,

Newest stable looks awesome. Rock solid for me so far (MacOS).

I did notice that Interpolate is turned off by default in the Volumes module. Is this new?

Thanks!

-Hollister


I don’t think we changed anything related to this. Is it possible that you changed this default in your application startup script (.slicerrc.py)?

@hherhold did you install the .slicerrc.py we distribute during the slicerMorph workshop? That does disable the interpolation.

@muratmaga Hey, Murat, thanks for the reply. Nope - I’ve used bits and pieces from the .slicerrc.py you shared with me a while back, but nothing regarding interpolation. And I haven’t changed my .slicerrc.py in weeks.

I’ll double-check it against the 9-1 nightly (I keep a few around, just in case) and let you know.

Note that using interpolation for image display is not just a trick or an arbitrary post-processing step that users may or may not want. Interpolation provides a good approximation of the original continuous signal, reconstructed from the discrete point samples.

It is OK to temporarily disable interpolation for debugging or as a workaround for lack of image grid display during segmentation, but I don’t think it is a good idea to advertise disabling interpolation as a setting that users should use by default.
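The reconstruction argument is easy to check numerically in 1D. A small sketch (plain NumPy; the sine frequency and sample counts are made up for illustration) comparing how well linear vs. nearest-neighbor interpolation recovers a band-limited signal from coarse samples:

```python
import numpy as np

# Dense grid stands in for the "continuous" signal
t_fine = np.linspace(0, 1, 1001)
truth = np.sin(2 * np.pi * 3 * t_fine)   # 3 Hz sine

# Coarse samples: 20 samples/s, well above the 6 Hz Nyquist rate
t_coarse = np.linspace(0, 1, 21)
samples = np.sin(2 * np.pi * 3 * t_coarse)

# Linear interpolation back to the dense grid
linear = np.interp(t_fine, t_coarse, samples)

# Nearest-neighbor "interpolation" (what a non-interpolated display shows)
nearest = samples[np.abs(t_fine[:, None] - t_coarse[None, :]).argmin(axis=1)]

# Linear reconstruction error is several times smaller than nearest neighbor's
print(np.abs(linear - truth).max(), np.abs(nearest - truth).max())
```

Even simple linear interpolation tracks the underlying signal far more closely than showing the raw blocky samples; higher-order kernels only improve on this.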

Is the SlicerMorph extension turning off interpolation by default when it is installed?

cc: @pieper

Yes, the most recent SlicerMorph has optional config with interpolation off, which I was told is the preference in that community. You can either toggle off all SlicerMorph customizations (in the Edit->Application Preferences->SlicerMorph pane) or you can edit the SlicerMorphRC.py that we provide.
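For reference, the kind of startup customization that disables interpolation is just a scene observer that flips the display setting on newly loaded volumes. A minimal sketch of such a startup config fragment (for a .slicerrc.py-style script; the `slicer` module is provided by a running 3D Slicer application, so this will not run standalone - details of the actual SlicerMorphRC.py may differ):

```python
# Sketch of a .slicerrc.py customization (assumption: run inside 3D Slicer,
# where the `slicer` module is available): disable interpolation for every
# scalar volume whenever a node is added to the scene.

def noInterpolate(caller, event):
    for node in slicer.util.getNodesByClass('vtkMRMLScalarVolumeNode'):
        if node.GetDisplayNode():
            node.GetDisplayNode().SetInterpolate(0)

slicer.mrmlScene.AddObserver(slicer.vtkMRMLScene.NodeAddedEvent, noInterpolate)
```

Deleting these lines from the startup script (or toggling the SlicerMorph customizations off in the preferences pane) restores the default interpolated display.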

@muratmaga What is the motivation for disabling interpolation? If it is just familiarity for the users then it may be managed by user training. If there are specific reasons then maybe they can be addressed directly, without disabling interpolation (e.g., display grid when voxel size reaches a certain threshold).

Most of our users need to use the manual segmentation as they are working on the microCT scans of dry bones in which intensity based tools don’t work well. In such cases, it is better to be able to see the voxel boundaries clearly (e.g., so that you can find the gap between the cranial sutures).

What is the benefit of interpolation in the slice view?

The interpolated image shows you the continuous signal faithfully reconstructed from the discrete samples. This reconstruction is lossless if the sample-rate criterion is satisfied (see more details here). The criterion is not always satisfied - that’s when you see staircase artifacts in the reconstructed image (in slice views, volume rendering, and reconstructed surfaces):

Interpolation is still about the best thing you can do, and it definitely provides a much more faithful reconstruction of the original signal than no interpolation (= nearest-neighbor interpolation). Also, you can always make the sample-rate criterion satisfied by applying a low-pass filter to the data. That’s why we always apply surface smoothing when a surface is reconstructed from a labelmap volume.
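The low-pass-filter point can be illustrated in 1D. A rough sketch (plain NumPy; the frequencies and the moving-average "filter" are arbitrary choices for illustration) of how filtering before resampling removes aliasing that plain decimation lets through:

```python
import numpy as np

# A dense signal with one component the coarse grid can represent (4 Hz)
# and one it cannot (290 Hz, far above the coarse Nyquist limit of 20 Hz)
t = np.linspace(0, 1, 2000, endpoint=False)
low = np.sin(2 * np.pi * 4 * t)
high = 0.5 * np.sin(2 * np.pi * 290 * t)
signal = low + high

step = 50  # resample to 40 samples/s

# Plain decimation: the 290 Hz component aliases into the coarse samples
decimated = signal[::step]

# Low-pass first (moving average as a crude anti-aliasing filter), then decimate
kernel = np.ones(step) / step
filtered = np.convolve(signal, kernel, mode="same")[::step]

truth = low[::step]  # what an ideal band-limited downsampling would keep
print(np.abs(decimated - truth).max())  # large: aliasing artifact
print(np.abs(filtered - truth).max())   # much smaller
```

This is the same idea as smoothing a surface reconstructed from a labelmap: remove the frequencies the sampling grid cannot represent before (or while) resampling, and the staircase artifacts largely disappear.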

You should be able to see small details better when interpolation is used. For example, try to draw the dark separating line in this image when it is displayed with/without interpolation:

You need to squint your eyes and guess where to draw when the voxels are shown without interpolation, while it is quite straightforward on the interpolated image.

I agree that it can be useful to show the voxel grid during segmentation so that you know how fine a detail you can paint and where. Note that this is the grid of the binary labelmap representation of the segmentation, not the grid of the background scalar volume.

We could quite easily add this grid display option to the segmentation displayable manager (above a certain zoom factor). You can simulate it with this Python code snippet:

# Runs in 3D Slicer’s Python console (the slicer and vtk modules are provided by the application)
import numpy as np
import SampleData

volumeNode = SampleData.SampleDataLogic().downloadCTACardio()

# IJK->RAS transform of the volume, as a NumPy array
ijkToRasVtk = vtk.vtkMatrix4x4()
volumeNode.GetIJKToRASMatrix(ijkToRasVtk)
ijkToRas = slicer.util.arrayFromVTKMatrix(ijkToRasVtk)

# Expand the extent by half a voxel on each side so that the grid planes
# fall on voxel boundaries rather than voxel centers
volumeExtent = np.array(volumeNode.GetImageData().GetExtent(), dtype=float)
volumeExtent[0::2] -= 0.5  # minimum along I, J, K
volumeExtent[1::2] += 0.5  # maximum along I, J, K

dimensions = volumeNode.GetImageData().GetDimensions()

# One quad per voxel-boundary plane, collected for all three axes
pts = vtk.vtkPoints()
quads = vtk.vtkCellArray()

# For each axis: corners of the lowest boundary plane, and the step that
# shifts the plane by one voxel along that axis (homogeneous IJK coordinates)
for axisIndex in range(3):
    if axisIndex == 0:
        pointsIjk = np.array([
            [volumeExtent[0], volumeExtent[2], volumeExtent[4], 1.0],
            [volumeExtent[0], volumeExtent[3], volumeExtent[4], 1.0],
            [volumeExtent[0], volumeExtent[3], volumeExtent[5], 1.0],
            [volumeExtent[0], volumeExtent[2], volumeExtent[5], 1.0]]).T
        offsetIjk = np.array([[1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]], dtype=float).T
    elif axisIndex == 1:
        pointsIjk = np.array([
            [volumeExtent[0], volumeExtent[2], volumeExtent[4], 1.0],
            [volumeExtent[1], volumeExtent[2], volumeExtent[4], 1.0],
            [volumeExtent[1], volumeExtent[2], volumeExtent[5], 1.0],
            [volumeExtent[0], volumeExtent[2], volumeExtent[5], 1.0]]).T
        offsetIjk = np.array([[0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 0, 0]], dtype=float).T
    else:
        pointsIjk = np.array([
            [volumeExtent[0], volumeExtent[2], volumeExtent[4], 1.0],
            [volumeExtent[1], volumeExtent[2], volumeExtent[4], 1.0],
            [volumeExtent[1], volumeExtent[3], volumeExtent[4], 1.0],
            [volumeExtent[0], volumeExtent[3], volumeExtent[4], 1.0]]).T
        offsetIjk = np.array([[0, 0, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0]], dtype=float).T
    # Transform the plane corners and the per-plane step to RAS
    pointsRas = np.dot(ijkToRas, pointsIjk)
    offsetRas = np.dot(ijkToRas, offsetIjk)
    # Add one quad for each boundary plane along this axis
    for planeIndex in range(dimensions[axisIndex]+1):
        quads.InsertNextCell(4)
        for pointIndex in range(4):
            quads.InsertCellPoint(pts.InsertNextPoint(pointsRas[0][pointIndex], pointsRas[1][pointIndex], pointsRas[2][pointIndex]))
        pointsRas += offsetRas

gridPoly = vtk.vtkPolyData()
gridPoly.SetPoints(pts)
gridPoly.SetPolys(quads)

# Show the grid semi-transparently in the slice views only
gridModelNode = slicer.modules.models.logic().AddModel(gridPoly)
gridModelNode.GetDisplayNode().SetVisibility2D(True)
gridModelNode.GetDisplayNode().SetOpacity(0.3)
gridModelNode.GetDisplayNode().SetVisibility3D(False)

See how the grid shows more relevant information than the non-interpolated master volume, especially when the resolution of the segmentation differs from the resolution of the master volume:

It is interesting to note that this way of showing the grid works well for arbitrarily reformatted slices:


It would be great if you could play around with the Python code snippet above; if you find the grid display useful then we can discuss adding it to the segmentation displayable manager (similarly to how it worked in Slicer3).


Good explanation, Andras.

Yes, we had the voxel grid option in Slicer3 and I sometimes found it helpful, but we ultimately never ported it because turning off interpolation was just as easy, caused less visual clutter, and was more informative. One motivation for turning off interpolation is to understand the partial voluming that comes from thick slices, but personally I would usually leave it off.

@muratmaga - it would be possible to move this from the SlicerMorphRC.py script to the SlicerMorph Preferences as a checkbox, so that users could easily decide which mode they prefer.

Here is an example. This is the mouse molar row, showing the first molar and a tiny bit of the second one. If interpolation is on, these structures appear continuous. There is a tiny gap between them, probably much smaller than the voxel size, which becomes clearer when interpolation is off.

We often have to segment things like this. Seeing the voxel boundaries makes it a bit clearer.

You may “feel” that you see something better in the pixelated image, but that may not be real. You could say that you can perfectly reconstruct the image mentally from the displayed samples, but that would only work for 2D images - in 3D volumes you would need to take into account the voxel values in neighboring slices, which are not displayed in the slice view at all.

Choosing between nearest-neighbor and higher-order interpolation is not a matter of user preference: nearest neighbor is objectively worse, with a much larger error in reproducing the real continuous image.

There are also practical reasons why disabling interpolation to see the image grid is not a good solution.

  1. Showing the pixelated image does not work well when neighboring voxels happen to have similar intensities (you need to boost the contrast to see the voxel boundaries, but then you need to bring the contrast back to interpret the image).
  2. The background image grid is not always the same as the grid of the segmentation’s labelmap representation (see the example above).

There could be a new “index mode” of operation in Slicer, where everything is consistently constrained to integer positions (even markup points, lines, surface meshes, etc.). However, it would be very limiting: it would only be useful for extremely low-resolution or highly anisotropic images, it would be a lot of work to implement, it could confuse users, and you cannot do much 3D processing with such poorly sampled images anyway.

Another issue with nearest neighbor is that it falls apart when you rotate the slice plane.

To Murat’s point though, there are some cases where NN sampling might give you some hints about the fine detail that is otherwise smoothed out by interpolation when you are at the limits of resolution.

Consider a 1D example: NN sampling of a sine wave. If the sampling rate is right at twice the sine’s frequency, you’ll get alternating positive and negative pixel values. But with linear interpolation you would get all zero values.

I’m not sure I understand this correctly. If you use nearest neighbor then you get a square wave, while linear interpolation provides a triangle wave (not a constant). A triangle wave is more similar to a sine wave than a square wave is.

A sinc kernel would reproduce the sine wave perfectly at any resolution (provided the input sampling was dense enough for the signal bandwidth), but linear is good enough for most cases.

If you have dense sampling (zoomed in), yes, you will get a triangle wave. But if your display pixel grid spacing is close to the voxel spacing and the two grids are not aligned (e.g., panned by half a pixel), then linear interpolation will lose the peaks of a high-frequency signal. So with NN you will get aliasing, but you may also get visual clues about fine details.
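The half-pixel-pan case is easy to reproduce in 1D (plain NumPy; the alternating samples stand in for a sine sampled exactly at the Nyquist rate, phase-aligned with the peaks):

```python
import numpy as np

# A sine sampled at exactly twice its frequency, with samples on the peaks
samples = np.array([1.0, -1.0] * 5)
x = np.arange(len(samples), dtype=float)

# Display pixel centers panned by half a sample relative to the voxel grid
x_display = x[:-1] + 0.5

# Linear interpolation at the midpoints averages each +1/-1 pair away
linear = np.interp(x_display, x, samples)            # all zeros

# Nearest neighbor keeps the full amplitude (ties broken toward the right voxel)
nearest = samples[np.floor(x_display + 0.5).astype(int)]  # alternating +1/-1

print(linear.max(), np.abs(nearest).min())
```

This matches both observations: linear interpolation of a signal right at the Nyquist limit can lose the peaks entirely when the display grid is misaligned, while NN keeps the amplitude at the cost of aliasing.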

I’m not sure when it was added, but vtkImageReslice supports sinc interpolation if anyone feels like trying it out. It’s probably fast enough for real-time use at most window sizes / on most CPUs.

Thanks for the suggestion; it is very interesting to compare the appearance of the interpolated/non-interpolated image when the voxel size is approximately the same as the pixel size on the display. I created a video of a continuous zoom, from voxels smaller than display pixels to voxels much larger than display pixels (and zoomed the video 4x so that it is easier to see the differences):

If the image is not interpolated, there are waves running over the image as you zoom (or pan), depending on which voxels are skipped when the voxel size is similar to the display pixel size. If the image is interpolated, the zoom is continuous and smooth. Without the 4x zoom the waves are barely visible on a modern high-resolution screen, so in everyday usage these artifacts are not that distracting.
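The panning "waves" can also be reproduced in 1D (plain NumPy; the ~2.3-voxel period and the pair-averaging "interpolation" are arbitrary stand-ins for the display resampling):

```python
import numpy as np

# A 1D "row of voxels": a sine whose period (~2.3 voxels) is close to the
# display pixel spacing, i.e. the regime where the waves appear
voxels = np.sin(2 * np.pi * np.arange(1000) / 2.3)

step = 2  # each display pixel covers ~2 voxels

# Non-interpolated display: pick every 2nd voxel; the result depends heavily
# on which voxels happen to be skipped, i.e. on the pan position (phase)
nn = [voxels[phase::step][:490] for phase in range(step)]

# Averaging the voxels under each pixel (a crude interpolating display)
# makes the result far less sensitive to the pan position
avg = [voxels[phase:phase + 980].reshape(-1, step).mean(axis=1) for phase in range(step)]

nn_change = np.abs(nn[0] - nn[1]).mean()    # how much the image "jumps" per 1-voxel pan
avg_change = np.abs(avg[0] - avg[1]).mean()
print(nn_change > avg_change)  # True: the NN display flickers much more while panning
```

The NN view changes drastically with each one-voxel pan (the running waves), while the averaged view stays nearly stable - the 1D analogue of the smooth zoom in the video.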

At the very end there is a very clearly recognizable C-shaped dark structure in the interpolated image, but it is barely visible in the non-interpolated one, probably because our eyes do not average voxel intensities very well. For me, this image demonstrates very well why we need to use interpolation for image display.