New origin after resampling

Hello,

I would like to ask about the 3D Slicer coordinate system. I understand from the Coordinate systems — 3D Slicer documentation that the origin is in the middle of the first (corner) voxel. However, after resampling, the origin stays the same, which I do not understand. If the origin is indeed the center of the corner voxel (and not the corner of the corner voxel), then the origin of the resampled volume should be new_origin = original_origin - original_spacing/2 + new_spacing/2 to be in the center of the new corner voxel. Am I missing something?
This behavior was observed by doing the following:

  • Importing a DICOM series of a CT scan into 3D Slicer
  • Exporting this series as NRRD and observing the origin by opening the volume in Python with SimpleITK
  • Going back to 3D Slicer and resampling the volume to a new spacing with “Resample Scalar Volume” (linear interpolation)
  • Exporting the new series to NRRD and observing in Python with SimpleITK that the spacing changed but the origin did not (a rough sketch of this check follows below)
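
Roughly, the check in Python looked something like this (a minimal sketch; the file names are placeholders for the NRRD volumes exported from Slicer):

```python
import SimpleITK as sitk

# Placeholder file names for the volumes exported from Slicer.
original = sitk.ReadImage("ct_original.nrrd")
resampled = sitk.ReadImage("ct_resampled.nrrd")

print("original  origin/spacing:", original.GetOrigin(), original.GetSpacing())
print("resampled origin/spacing:", resampled.GetOrigin(), resampled.GetSpacing())

# Origin that would keep the "outer edge" (half a voxel beyond the first
# voxel center) at the same physical position after changing the spacing:
expected_origin = [o - s_old / 2.0 + s_new / 2.0
                   for o, s_old, s_new in zip(original.GetOrigin(),
                                              original.GetSpacing(),
                                              resampled.GetSpacing())]
print("edge-preserving origin would be:", expected_origin)
```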

Thank you for your help.
Best regards,
Sebastien

Each voxel stores a sample of a continuous signal at the position of the voxel center. There is no need to change the origin of the sampling grid if you resample an image.
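
For illustration, here is a small synthetic SimpleITK sketch of what “samples at voxel centers” means (toy numbers, not the resampling module’s code):

```python
import SimpleITK as sitk

# Toy 4x4x4 volume; the geometry below is made up for illustration.
img = sitk.Image(4, 4, 4, sitk.sitkFloat32)
img.SetOrigin((10.0, 20.0, 30.0))
img.SetSpacing((3.0, 3.0, 3.0))

# The physical position of voxel (i, j, k) is origin + direction * (index * spacing);
# the origin IS the center of the first voxel, and nothing about a voxel "extent"
# is stored in the image geometry.
print(img.TransformIndexToPhysicalPoint((0, 0, 0)))  # (10.0, 20.0, 30.0)
print(img.TransformIndexToPhysicalPoint((1, 0, 0)))  # (13.0, 20.0, 30.0)
```

Resampling only changes which positions are sampled, not where the underlying signal lives, so the grid origin does not have to move.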

Is the origin used to place the volume in space? If not, what is used to place the volume? For my application I need to retain the image’s original location, and to my current understanding I need to shift the origin when resampling in order to do so. Please see the attached pictures to understand my reasoning. Let’s say we have a 3 mm × 3 mm × 3 mm volume (represented in blue in the graph) and then we resample it to 1 mm × 1 mm × 1 mm (represented in red). Don’t we need to modify the origin in order for the resampled volume and the original volume to overlap perfectly?

Thank you for your support.


No, there is no need to shift the origin to retain the location of the image. Maybe you think that the image starts at the corner of the first voxel and ends at the opposite corner of the last voxel, but this is not true. All that we know are samples of a continuous signal at grid point positions (voxel centers). Where the image appears to start and end on screen is just a matter of display choice.

It is reasonable to display the image as if it extended half a voxel beyond the first and last sampling positions, to give the user a hint about the voxel size at that position. However, it is actually an arbitrary decision. For nearest-neighbor interpolation, showing the image starting half a voxel before the first sample is justifiable; for linear interpolation, you would need to make some assumptions if you want to extrapolate, so some image viewers choose not to display anything beyond the centers of the first and last voxels; for higher-order interpolation you only start to have exact information from the second or third voxel, so it is common to pad the image (e.g., using mirroring) so that you can display something near the first voxel.


@sebquetin I agree with your premise that if you treat the “edge” of the volume as being a half-pixel-spacing box around the origin, then you would want to pick an origin that takes into account the new pixel spacing so that the edge remains in the same physical location. There are several resampling options in Slicer; Resample Scalar Volume is the simplest and comes from an old ITK example that apparently didn’t offer that behavior. As @lassoan points out, the sampling grid is somewhat arbitrary, so as long as the pixel intensity values don’t shift in physical space this won’t matter for most purposes. But it’s weird and maybe should be fixed if someone has time to investigate.

If you want to account for the spacing in the resampling, you can define your own sampling grid (i.e. a volume with specified origin, dimensions, directions, and spacings) and give it as the reference for one of the other more sophisticated resampling modules, such as Resample Image (BRAINS) or Resample Scalar/Vector/DWI. Also I believe the Crop Volume module correctly offsets the origin when you specify the ROI and spacing, as opposed to the others where you provide the sampling grid explicitly.
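
As an illustration (not the code of those modules), a SimpleITK sketch of such a reference grid could look like this; it assumes an axis-aligned direction matrix and a placeholder file name:

```python
import SimpleITK as sitk

def make_edge_preserving_reference(image, new_spacing):
    """Reference grid with new_spacing whose half-voxel "edge" matches the input's."""
    old_spacing = image.GetSpacing()
    # Physical extent measured edge to edge (size * spacing), per axis.
    extent = [n * s for n, s in zip(image.GetSize(), old_spacing)]
    new_size = [int(round(e / s)) for e, s in zip(extent, new_spacing)]
    # Shift the origin by half the old spacing minus half the new spacing.
    # (Assumes an axis-aligned direction; otherwise this offset would need to be
    # rotated by the direction matrix.)
    new_origin = [o - so / 2.0 + sn / 2.0
                  for o, so, sn in zip(image.GetOrigin(), old_spacing, new_spacing)]
    ref = sitk.Image(new_size, image.GetPixelID())
    ref.SetSpacing(new_spacing)
    ref.SetOrigin(new_origin)
    ref.SetDirection(image.GetDirection())
    return ref

# Usage (placeholder file name):
# img = sitk.ReadImage("ct_original.nrrd")
# ref = make_edge_preserving_reference(img, (1.0, 1.0, 1.0))
# out = sitk.Resample(img, ref, sitk.Transform(), sitk.sitkLinear, 0.0)
```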


Good point, Slicer modules already perform this shift when the user sets an output image region explicitly. For example, the Crop Volume module does this (the user provides an ROI node and the image spacing is modified via the spacing scale parameter), and so does the Segmentations module (the user provides the region via a reference image and the image spacing is modified via the oversampling factor parameter).

I don’t think shifting the origin outside of the original voxel centers is a good default strategy in general, because that would burn more extrapolated voxel values into the output image. This may not be desirable, because depending on the chosen interpolator and the extrapolation strategy, the result may be quite inaccurate. For example, you can test this by resampling an image using the Crop Volume module with Fit to volume enabled and choosing the Windowed Sinc interpolator.
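
As a synthetic illustration of that effect (a SimpleITK sketch, not the Crop Volume module itself; Lanczos windowed sinc stands in for the module’s windowed sinc option):

```python
import SimpleITK as sitk

# Toy 10x10x10 volume with 3 mm spacing: constant signal with one dark corner voxel.
img = sitk.Image(10, 10, 10, sitk.sitkFloat32)
img.SetSpacing((3.0, 3.0, 3.0))
img = img + 100.0
img[0, 0, 0] = 0.0

# Output grid whose first voxel center (-1, -1, -1) lies outside the input's
# voxel-center bounding box, i.e. the "edge preserving" origin for 1 mm spacing.
ref = sitk.Image(30, 30, 30, sitk.sitkFloat32)
ref.SetSpacing((1.0, 1.0, 1.0))
ref.SetOrigin((-1.0, -1.0, -1.0))

# The corner output value has to be filled in from beyond the outermost samples,
# so it depends entirely on the interpolator's boundary handling.
for name, interp in [("linear", sitk.sitkLinear),
                     ("windowed sinc", sitk.sitkLanczosWindowedSinc)]:
    out = sitk.Resample(img, ref, sitk.Transform(), interp, 0.0)
    print(name, "corner value:", out[0, 0, 0])
```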

Anyway, it is good that all these options are already available in Slicer, so users can choose what works best for them.


Right, I don’t see a use case for picking an origin outside the original image volume, but adjusting it so that the “edge” of the volume is stationary does make sense.

What we are discussing here (resampling an image to a finer resolution while preserving the location of the image edges) is such a use case. You could argue that since the new origin is still inside the “edges” of the original volume, we could still consider the origin to be inside the volume. However, this new origin is outside the bounding box of the voxel centers of the original volume, which means that the voxel values at the new origin (and at all new boundary voxels) are computed by extrapolation; therefore I would consider this origin to be picked outside of the original image volume.
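
A quick way to express that check (a toy SimpleITK sketch, not part of any Slicer module):

```python
import SimpleITK as sitk

def inside_voxel_center_bbox(image, point):
    """Is a physical point within the bounding box of the image's voxel centers?"""
    ci = image.TransformPhysicalPointToContinuousIndex(point)
    return all(0.0 <= c <= n - 1 for c, n in zip(ci, image.GetSize()))

# Toy 10-voxel, 3 mm spacing image with its origin at (0, 0, 0):
img = sitk.Image(10, 10, 10, sitk.sitkFloat32)
img.SetSpacing((3.0, 3.0, 3.0))

print(inside_voxel_center_bbox(img, (0.0, 0.0, 0.0)))     # True: the original origin
print(inside_voxel_center_bbox(img, (-1.0, -1.0, -1.0)))  # False: the edge-preserving 1 mm origin
```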

People are aware that the image is interpolated when it is resampled but they may not be aware that if they also want to preserve the location of the image edges then the image has to be extrapolated as well, which may introduce additional error.


It’s not a huge issue, but my argument is that if you supersample the image, then the origin will shift outside the bounding box of the original voxel centers, but it should shift by exactly the amount that puts it half a spacing away from the same edges that were half a spacing away from the original origin. If you subsample, then the origin moves inside but still maintains the half-spacing-from-the-edge criterion.
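
In toy 1D numbers (the origin at 0 and the 3 mm spacing are made up, just to spell out the arithmetic):

```python
def edge_preserving_origin(origin, old_spacing, new_spacing):
    # Keep the point half a voxel before the first voxel center fixed.
    return origin - old_spacing / 2.0 + new_spacing / 2.0

origin, spacing = 0.0, 3.0   # "edge" at -1.5
print(edge_preserving_origin(origin, spacing, 1.0))  # -1.0: supersampling, origin moves outside the old voxel centers
print(edge_preserving_origin(origin, spacing, 6.0))  #  1.5: subsampling, origin moves inside
# In both cases the new origin sits exactly new_spacing/2 away from the old edge at -1.5.
```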

The question was also posted and discussed on the ITK forum:

I understand that my use case is specific. I want to have all my volumes perfectly overlapping, and for me a half-millimeter shift is important; that is why I am asking these questions. I also understand that for pixels/voxels outside the “voxel centers bounding box”, values will be extrapolated instead of interpolated, and that this leads to a trade-off between positional accuracy and voxel-value accuracy.
Now, there is still something I want to clarify.

Please see the two graphs, which show two cases: one image has been resampled with shifting of the origin and the other has been resampled without shifting of the origin. If we consider any value outside the “voxel centers bounding box” as being extrapolated, then between the two cases don’t we just move the extrapolation problem elsewhere? And isn’t it worse to have multiple adjacent voxels extrapolated?

Please let me know if I am going in the wrong direction here.

Your volumes do not overlap any better whether you shift the origin or not: once you resample the volume, it cannot perfectly overlap with the original volume anymore. In some image viewers it may appear that the images accurately overlap if you shift the origin; in other image viewers (that don’t pad the image outside the voxel centers) the images appear to accurately overlap if you don’t shift the origin.

If you prefer to preserve the signal most accurately, then not moving the origin is the slightly better choice (you avoid burning extrapolated data into the output image).

If you are not particularly concerned about introducing very subtle errors at the image boundary, and you want the resampled image edges to appear at the same location as those of the original image in some image viewers, then you may decide to move the origin.

Displacement of the image intensity signal would be a huge issue, but that’s not happening here in either case (with or without shift of the origin). You don’t need to worry about that.

No, you don’t have extrapolation elsewhere if you don’t shift the origin. The dashed region in the second image of your post above is not part of the output image.

There are different, equally valid, metaphors for what the voxel values of an image might represent. If your mental model is a continuous field which is sampled pointwise at the voxel centers, then it makes sense to consider that the “boundary” of the image does not extend beyond the outermost voxel centers. On the other hand, if you imagine a CCD sensor (like on a digital camera), the voxel value represents something more like an integration of a signal over an area. In that case, it makes sense to consider that the “boundary” of the image extends all the way to the edge of the voxel, because the data summarized by that voxel value is gathered from that whole area.

In medical imaging, I think the truth is typically somewhere in between these two mental models. We can’t actually sample anything at an infinitesimal point, but it also isn’t the case that a reconstruction algorithm really ends up representing a uniform data sampling across a full rectangular voxel out to the edges and no farther. Instead, we get voxel values which represent something about the content of some signal that we attribute to a local spatial region. If we say that local spatial region is more like a point, then we get the interpretation of @lassoan. If we say that local spatial region is more like an entire voxel, then we get the interpretation of @pieper and @sebquetin, where the edge of the image is the edge of the voxel.

The argument of @sebquetin about just shifting the extrapolation comes from the assumption that the image should not change size when resampling from 3 mm to 1 mm voxel spacing. If an image is 10 voxels across and the voxel spacing (most people would say the voxel “size”) is 3 mm, then many people would say that the image is 30 mm across. However, under @lassoan’s interpretation, it is only 27 mm across (the distance between the first voxel point and the last voxel point). If we resample to 1 mm voxel spacing, @lassoan is saying that the resulting resampled image would still be 27 mm across, and would now have 28 samples in this dimension, rather than the 10 samples in the original. This would involve no extrapolation, only interpolation. But 28 voxels which are 1 mm across would mean the image “size” in this dimension is now 28 mm rather than the original 30 mm. I think this would be a highly unexpected and unintuitive result for many people (“why should the image size, voxel size × number of voxels, change when resampling from 3 mm to 1 mm spacing, which divides perfectly?”). If the origin were not moved, and the dimension of the image were maintained at 30 mm, that would require the additional extrapolated voxels which @sebquetin drew in.
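
The numbers above can be checked with a few lines of arithmetic (1D, identity direction assumed):

```python
n_old, spacing_old = 10, 3.0
spacing_new = 1.0

center_span = (n_old - 1) * spacing_old   # 27 mm between the first and last voxel centers
edge_span = n_old * spacing_old           # 30 mm if each voxel is given a 3 mm extent

n_new_interp_only = int(center_span / spacing_new) + 1  # 28 samples, interpolation only
n_new_same_edges = int(edge_span / spacing_new)         # 30 samples, needs edge extrapolation

print(center_span, edge_span, n_new_interp_only, n_new_same_edges)  # 27.0 30.0 28 30
```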

I am not arguing that one of these models is “right”. I am trying to help clarify what I see as two different ways of looking at this and say that each interpretation leads to what seems like an unintuitive and undesirable characteristic under the other interpretation:

  • Under the voxels-are-point-samples-of-a-continuous-field interpretation, resampling naturally leads to what looks like a change in image size under the other interpretation (“why don’t the image edges line up after resampling?”)
  • Under the voxels-are-volumetric-samples-summarized-by-one-number interpretation, it seems clearly necessary to move the image origin if we want to change the voxel size and keep the image edges in the same place. Filling in the voxel values around the edge in this case does not feel like extrapolation, because we consider that these are regions we sampled from when gathering the original data. Yes, we do not have finer-grained data that would let those values differ from the big voxel value, and we do not have a neighboring voxel to interpolate between, but it doesn’t feel like extrapolating to just say that the voxel value there is best approximated by the original big voxel value. So, around the edge we are doing nearest neighbor interpolation (see the sketch after this list). Under the other interpretation, this origin-shifting seems entirely unnecessary, and the voxel values around the edge feel like non-intuitive extrapolation (“why extrapolate outside the data, but only for a defined little distance?”).
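
A sketch of what that edge-filling could look like in SimpleITK (assuming a reasonably recent version; if I recall correctly the nearest-neighbor extrapolator option appeared around SimpleITK 2.0), with an axis-aligned direction matrix and a placeholder file name:

```python
import SimpleITK as sitk

img = sitk.ReadImage("ct_original.nrrd")   # placeholder file name

new_spacing = (1.0, 1.0, 1.0)
# Keep the edge-to-edge extent (size * spacing) and the half-voxel "edge" fixed.
new_size = [int(round(n * so / sn))
            for n, so, sn in zip(img.GetSize(), img.GetSpacing(), new_spacing)]
new_origin = [o - so / 2.0 + sn / 2.0      # assumes an axis-aligned direction matrix
              for o, so, sn in zip(img.GetOrigin(), img.GetSpacing(), new_spacing)]

rf = sitk.ResampleImageFilter()
rf.SetSize(new_size)
rf.SetOutputSpacing(new_spacing)
rf.SetOutputOrigin(new_origin)
rf.SetOutputDirection(img.GetDirection())
rf.SetInterpolator(sitk.sitkLinear)
rf.SetUseNearestNeighborExtrapolator(True)  # edge band gets the nearest original sample's value
resampled = rf.Execute(img)
```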

Thank you @mikebind for clarifying the two different points of view.

I don’t understand how the dashed region would not be part of the output image. Could you please clarify?

I followed the process mentioned in my original post and I am showing some metadata of the two volumes. Please see the above screenshot which shows that the extent/“size” of the volume is the same before and after resampling.

While there can be many metaphors that approximately explain what a pixel may represent, they are not equally valid. Only the “voxels-are-point-samples-of-a-continuous-field” interpretation is valid; it is the one that is universally used in digital image processing. For a more formal description and detailed explanation, you can read the first four chapters of the classic Digital Image Processing textbook by Gonzalez & Woods.

Of course, you can fill up a matrix with any values you like and call it an image, but then you have to accept that many image processing methods may work slightly incorrectly on your data. For example, you can compute a thick-slab reconstruction with mean blending mode and say that in this image “voxels-are-volumetric-samples-summarized-by-one-number”. However, all image processing algorithms still assume the “voxels-are-point-samples-of-a-continuous-field” interpretation, so they will just slightly misinterpret your data and may produce somewhat inaccurate results.

While each CCD sensor element integrates inputs over a certain neighborhood, the output image voxels usually still contain discrete point-sample values (and not an average over the neighborhood). The point samples are estimated by applying deblurring and denoising algorithms to the raw signal data.

If you want to avoid extrapolation, then only those voxels whose centers are inside the bounding box of the original image’s voxel centers can be part of the output image.
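
In other words, with the same origin, the largest output grid that needs no extrapolation keeps every new voxel center within the span of the original voxel centers (a small sketch, using the 10-voxel, 3 mm example from earlier in the thread):

```python
import math

def max_size_without_extrapolation(old_size, old_spacing, new_spacing):
    # Span between the first and last voxel centers along each axis.
    spans = [(n - 1) * so for n, so in zip(old_size, old_spacing)]
    return [int(math.floor(span / sn)) + 1 for span, sn in zip(spans, new_spacing)]

print(max_size_without_extrapolation((10, 10, 10), (3.0, 3.0, 3.0), (1.0, 1.0, 1.0)))
# [28, 28, 28] -- matches the 28-sample example discussed above
```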