IGT Volume Reslice Driver - saving images

I use the Endoscopy module to define a “flythrough” path and the IGT->VolumeResliceDriver to generate resliced images along the path, typically showing them on the Axial plane. This works fine and I am able to generate a flythrough. Is there a way to save the resliced images along with the transform associated with each resliced image? I would like to be able to stack the resliced images together to visualize them, and also to transform them back to the original coordinate system.

I am able to screen-capture the images themselves during the flythrough, but I am interested in capturing the underlying voxel data, before it has been mapped to RGB for display on the screen.

Here is a complete script that saves the resliced images (and even projects them into a single 2D image to show an X-ray or MIP view):

You can also save each sliceToWorldTransform in an array and use them to transform back from the warped space to the original image space.

What is your end goal? Dental panoramic X-ray, vessel analysis, …?


I’m interested in doing this for my dataset as well. I’m new to Slicer, so can someone explain what the process is to run a Python script within it?

Can we also save the transformation so we can do the inverse operation on a different dataset? (let’s say we create segmentation labels for the formatted images and want to reformat them back to the original coordinates).

Press “Ctrl-3” to open the Python console and copy-paste the script into it. See more information here: Documentation/Nightly/Training - Slicer Wiki

Yes, you can save the transforms in a Python list. It is a vtkTransform object, so you can easily get the inverse transform (transform.GetInverse()).
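
To make the bookkeeping concrete, here is a pure-Python sketch of the same idea using plain 4x4 matrices instead of vtkTransform objects (in Slicer itself you would simply append each vtkTransform to a list and call transform.GetInverse() when needed; the matrices below are made-up stand-ins for the real sliceToWorld transforms):

```python
# Sketch: store one sliceToWorld transform per resliced frame, and invert
# a rigid transform to map points back into the original image space.
# In Slicer you would use vtkTransform and transform.GetInverse() instead.

def apply_transform(m, p):
    """Apply a 4x4 homogeneous matrix m to a 3D point p."""
    x, y, z = p
    return tuple(m[i][0]*x + m[i][1]*y + m[i][2]*z + m[i][3] for i in range(3))

def invert_rigid(m):
    """Invert a rigid (rotation + translation) 4x4 matrix:
    the inverse rotation is the transpose, the inverse translation is -R^T t."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]  # transpose of rotation
    t = [-(r[i][0]*m[0][3] + r[i][1]*m[1][3] + r[i][2]*m[2][3]) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0, 0, 0, 1]]

# One transform per slice, collected in a list (stand-in for the flythrough loop):
slice_to_world = []
for z in range(3):
    slice_to_world.append([[1, 0, 0, 0],
                           [0, 1, 0, 0],
                           [0, 0, 1, float(z)],
                           [0, 0, 0, 1]])

# Map a pixel from slice 2 into world coordinates, then back again:
world = apply_transform(slice_to_world[2], (10.0, 5.0, 0.0))
back = apply_transform(invert_rigid(slice_to_world[2]), world)
```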

To transform an entire 3D object, you would probably want to have a single non-linear warping transform. You should be able to construct it by putting original and reformatted slice corners as control points in a grid or bspline transform. I think @pieper implemented something like this.

Yes, this code makes a grid transform based on two sets of slice corners:

Note that in general this is not invertible.

Thanks. I understand this needs the 4.11 Preview release of Slicer as “curveNode” is not defined in 4.10. I ran it and it appears to work. I will likely have follow-up questions. For now, I just wanted to share the information with the community.

How do I set the size of each slice that is generated to a fixed size like 512x512? I would like the straightened volume to have a size of 512x512x(#samplePointsAlongCurve).

The image is resliced using the vtkImageReslice filter built into the slice view pipeline, so the output image size matches the slice view size. If that is not suitable for you, then you need to instantiate your own vtkImageReslice filter and set its inputs similarly to how it is done here.
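
In VTK you would configure your own vtkImageReslice (for example via SetOutputExtent, SetOutputSpacing, and SetResliceAxes) to pin the output to 512x512. Since that needs a running VTK/Slicer, here is a pure-Python, nearest-neighbor sketch of what a fixed-size reslice does; the function name and arguments are illustrative only:

```python
# Pure-Python sketch of a fixed-output-size reslice (nearest-neighbor).
# With vtkImageReslice you would instead set, e.g.:
#   reslice.SetOutputExtent(0, 511, 0, 511, 0, 0)
#   reslice.SetOutputSpacing(sx, sy, 1.0)
#   reslice.SetResliceAxes(sliceToWorldMatrix)
# so the output is always 512x512 regardless of the slice view size.

def reslice_fixed_size(volume, origin, axis_x, axis_y, spacing, size):
    """Sample a plane through `volume` (a nested [z][y][x] list) at a fixed
    output size. `origin` is the plane corner and `axis_x`/`axis_y` are unit
    in-plane directions, all in voxel coordinates."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    out = []
    for j in range(size[1]):
        row = []
        for i in range(size[0]):
            # Position of output pixel (i, j) in voxel coordinates
            p = [origin[k] + i*spacing*axis_x[k] + j*spacing*axis_y[k]
                 for k in range(3)]
            x, y, z = (int(round(c)) for c in p)
            if 0 <= x < nx and 0 <= y < ny and 0 <= z < nz:
                row.append(volume[z][y][x])
            else:
                row.append(0)  # background value outside the volume
        out.append(row)
    return out

# Tiny 2x2x2 volume; extract the axial plane at z=1 as a fixed 4x4 image:
vol = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
img = reslice_fixed_size(vol, origin=(0, 0, 1),
                         axis_x=(1, 0, 0), axis_y=(0, 1, 0),
                         spacing=1.0, size=(4, 4))
```

The key point is that the output size is an explicit parameter, independent of any viewer; stacking such slices along the curve gives the desired 512x512xN straightened volume.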

Can I use a similar approach with the Endoscopy module through Python to capture the underlying image data? I have done this, but with a screen capture that gives me RGB data instead of the underlying voxel data. It appears that the curve node does not sample the curve at equidistant intervals, which would be desirable. I am guessing (or hoping) that the Endoscopy module’s “Create path” option does produce sample points that are uniformly spaced by distance along the curve.

Yes, the endoscopy path points are equally spaced, but the module is not integrated with the rest of the infrastructure described here, so some programming would be required to make use of it.

Since I wrote the example, equidistant resampling functions have been implemented (see here).
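
The idea behind equidistant resampling is just walking the polyline and emitting a point every fixed arc-length step. Recent Slicer curve nodes provide helpers for this (method names vary by version); the sketch below shows the underlying algorithm in plain Python:

```python
# Equidistant resampling of a polyline by arc length, in plain Python.
# Slicer's curve node offers built-in resampling helpers; this only
# illustrates what they do conceptually.
import math

def resample_polyline(points, step):
    """Return points spaced exactly `step` apart (by arc length) along
    the polyline defined by `points` (a list of (x, y, z) tuples)."""
    out = [points[0]]
    carried = 0.0  # distance covered since the last emitted point
    for a, b in zip(points, points[1:]):
        seg = math.dist(a, b)
        while carried + seg >= step:
            t = (step - carried) / seg  # fractional position on segment a-b
            a = tuple(a[k] + t * (b[k] - a[k]) for k in range(3))
            out.append(a)
            seg -= (step - carried)
            carried = 0.0
        carried += seg
    return out

# A bent polyline with uneven segment lengths, resampled every 1 mm:
pts = [(0, 0, 0), (2, 0, 0), (2, 3, 0)]
resampled = resample_polyline(pts, 1.0)
```

Reslicing at each of these points then yields slices at uniform distance along the curve, which is what a straightened volume needs.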

We should update the Endoscopy module to use a curve node (and probably also rename it to something more generic, as it is usable for many more things than just virtual endoscopy).

Agreed - it was the very first python scripted module!

What is the X,Y resolution (I mean mm/pixel) of the straightened volume in your sample code? Do they match the X,Y resolution of the input volume? So if my input volume had X,Y image spacing of 0.3 mm each, would the “axial” slices of the straightened volume have this same image spacing? This would mean I could make measurements on the “axial” slices and they would be correct.

The resolution is just whatever is set in the slice viewer (it depends on the resolution of your monitor, how large the slice view is, and how much you zoom in). It has nothing to do with the volume’s spacing. If you want to control the resolution, then you can use a separate vtkImageReslice filter.

In your code, you are doing reslicing using:
reslice = sliceLayerLogic.GetReslice()

tempSlice = vtk.vtkImageData()

My question has to do with this tempSlice. What is the pixel resolution of this vtkImageData? I presume it has a resolution in mm/pixel that takes into account the factors you mentioned: the size of the window, zoom, etc.

To elaborate, let’s say I set my curveNode to be a straight line passing right through the middle of the volume along the Z-axis. So when I now reslice along the curveNode, I basically generate images that are the same as the Axial slices of the input volume (possibly zoomed in or out). So is there a notion of a pixel scale on these resliced images obtained this way? Can I make ruler measurements on one of these resliced images between two targets and get the same value that I would get on the same Axial slice from the input volume?


You can get the field of view size (in mm) from the slice node. Pixel spacing is the field of view (in mm) divided by the size of the image (in pixels).
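
A minimal sketch of that arithmetic, with made-up field-of-view and view-size values standing in for what the slice node would report in Slicer:

```python
# Sketch of the spacing computation. In Slicer you would query the slice
# node instead of hard-coding values, roughly:
#   fov = sliceNode.GetFieldOfView()    # field of view in mm
#   dims = sliceNode.GetDimensions()    # slice view size in pixels
fov = (250.0, 250.0, 1.0)   # assumed field of view in mm
dims = (1000, 1000, 1)      # assumed slice view size in pixels

spacing_x = fov[0] / dims[0]  # mm per pixel horizontally
spacing_y = fov[1] / dims[1]  # mm per pixel vertically

# A 40-pixel ruler measurement on this resliced image then corresponds to:
length_mm = 40 * spacing_x
```

So measurements on the resliced images are correct as long as this derived spacing is used instead of the input volume's spacing.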
