Help with IJK to LPS Conversion Mismatch When Rendering CT Volume Outside Slicer

Hello Slicer Community,

I’m currently working on a custom algorithm that requires rendering a CT volume in 3D using VTK (outside of Slicer). As part of this, I need to convert the CT image from IJK space to LPS (or RAS) physical space to accurately extract surfaces and analyze spatial positions of structures.

To test this pipeline, I’ve tried using different image readers:

  • SimpleITK.ReadImage()
  • ITK readers
  • vtkDICOMImageReader / vtkMetaImageReader

In all cases, I convert the image data to vtkImageData, apply the spacing, origin, and direction matrix, and then run my 3D algorithm (Marching Cubes, surface filtering, etc.). However, the spatial positions I get are very far off from what I see in 3D Slicer, even after applying what I believe are the correct transforms (including direction and spacing). I verified this by manually comparing fiducial coordinates and surface centers with Slicer’s RAS values.
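For concreteness, this is roughly how I am composing the IJK → LPS matrix from the SimpleITK metadata (the file path and the voxel index below are just placeholders):

```python
import numpy as np
import SimpleITK as sitk

img = sitk.ReadImage("ct.mha")  # placeholder path

direction = np.array(img.GetDirection()).reshape(3, 3)
spacing = np.array(img.GetSpacing())
origin = np.array(img.GetOrigin())

# IJK -> LPS: scale by spacing, rotate by direction, translate by origin
ijk_to_lps = np.eye(4)
ijk_to_lps[:3, :3] = direction @ np.diag(spacing)
ijk_to_lps[:3, 3] = origin

ijk = np.array([100, 120, 45, 1.0])  # example voxel index (homogeneous)
print(ijk_to_lps @ ijk)

# cross-check against SimpleITK's own mapping
print(img.TransformIndexToPhysicalPoint((100, 120, 45)))
```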

I am aware that:

  • 3D Slicer uses RAS orientation (as opposed to LPS).
  • To convert from LPS to RAS, we typically negate the first two axes (X and Y); see the sketch below.
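In matrix form, the flip I am applying looks like this (the point is just an illustrative value):

```python
import numpy as np

# LPS -> RAS: negate the first two (L and P) components
lps_to_ras = np.diag([-1.0, -1.0, 1.0, 1.0])  # homogeneous 4x4

point_lps = np.array([12.5, -40.0, 103.0, 1.0])  # example point in LPS (mm)
print(lps_to_ras @ point_lps)  # -> [-12.5, 40.0, 103.0, 1.0]
```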

However, even accounting for this transformation, the coordinates from my processed VTK pipeline don’t align with Slicer’s scene — they are off by tens of millimeters or more.


What I’m trying to achieve:

  • Render the CT in 3D and perform surface-based operations while keeping spatial coordinates consistent with Slicer.
  • Ensure that any sphere or point I place in VTK matches exactly what I see in Slicer (e.g., using Markups).

Questions:

  1. Is there a reliable way to convert a CT volume (e.g., .mha, .nrrd, DICOM) into vtkImageData while preserving the full IJK → LPS → RAS transform, exactly as Slicer does internally?
  2. Can I use 3D Slicer itself to read and process the volume, then export it (or a vtkMRMLVolumeNode) into a vtkImageData with all spatial orientation handled correctly?
  3. Is there an example of how Slicer computes world coordinates from IJK for a volume? I’d like to replicate that logic in my standalone pipeline.
  4. Is there a preferred method to export a loaded volume from Slicer into .vti or .vtk with all spatial metadata preserved, so that external VTK pipelines will behave identically?

What I’ve Tried So Far:

  • Extracted the direction, origin, and spacing from SimpleITK and constructed a vtkTransform to apply to the extracted surface meshes.
  • Negated the X and Y axes to convert from LPS to RAS.
  • Validated the center of mass / bounding box coordinates of the extracted surfaces against Slicer Markups and found major mismatches (a rough sketch of this check follows the list).
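Roughly, the check I am doing looks like the sketch below (placeholder file name and iso-value; note that, as far as I can tell, vtkMetaImageReader applies origin and spacing but not the direction matrix, which may itself be part of my problem):

```python
import vtk

reader = vtk.vtkMetaImageReader()
reader.SetFileName("ct.mha")  # placeholder path
reader.Update()

mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(reader.GetOutputPort())
mc.SetValue(0, 300)  # example iso-value (HU)

# LPS -> RAS flip applied to the extracted mesh before comparing with Slicer
lps_to_ras = vtk.vtkMatrix4x4()
lps_to_ras.SetElement(0, 0, -1.0)
lps_to_ras.SetElement(1, 1, -1.0)

xform = vtk.vtkTransform()
xform.SetMatrix(lps_to_ras)

xform_filter = vtk.vtkTransformPolyDataFilter()
xform_filter.SetInputConnection(mc.GetOutputPort())
xform_filter.SetTransform(xform)
xform_filter.Update()

com = vtk.vtkCenterOfMass()
com.SetInputData(xform_filter.GetOutput())
com.SetUseScalarsAsWeights(False)
com.Update()
print(com.GetCenter())  # compared against the corresponding Slicer markup (RAS)
```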

Any suggestions, best practices, or example scripts would be highly appreciated. I’m happy to share code snippets or sample data if needed.

Thanks in advance!

Best regards,
Khaleel Ur Rehman

I don’t think there’s any magic we can suggest other than carefully tracking your assumptions through the entire pipeline of operations. Be aware that historically VTK hasn’t supported oriented images or the concepts of LPS/RAS, so it’s possible for orientation information to be lost depending on the filters you use. Although we are careful in Slicer to flag our assumptions about RAS and LPS when reading and writing files, not all software respects these conventions. Of course you can read the Slicer source code to see how things are managed.

An advantage of building your analysis pipelines within Slicer’s environment is that so much of this work has already been done for you and you can avoid reinventing code that’s already freely available and debugged.

It is possible that not all VTK filters take into account the image direction matrix.
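For example, a quick way to see what your reader actually produced (this assumes VTK 9 or later, where vtkImageData gained a direction matrix; older VTK cannot store one at all):

```python
import vtk

reader = vtk.vtkMetaImageReader()
reader.SetFileName("ct.mha")  # placeholder path
reader.Update()

image = reader.GetOutput()
# Often identity even for oblique acquisitions, and many filters
# (e.g. vtkMarchingCubes) only use origin and spacing anyway.
print(image.GetDirectionMatrix())
```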

Yes. ITK readers provide the IJK to LPS mapping, while Slicer volume storage nodes provide the IJK to RAS mapping in most cases. If the volume’s axes are not orthogonal, the spacing between slices varies, or the slices are not all parallel, then the easiest approach is probably to reconstruct the volume onto a regular grid using Slicer.
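For example, in Slicer’s Python console you can get the full IJK → RAS matrix from a volume node, similar to what is shown in the script repository (the node name and voxel index are just examples; the last part only matters if the volume is under a transform node):

```python
import vtk
import slicer

volumeNode = slicer.util.getNode("CTVolume")  # example node name

# IJK -> RAS, including origin, spacing, and direction
ijkToRas = vtk.vtkMatrix4x4()
volumeNode.GetIJKToRASMatrix(ijkToRas)

point_Ijk = [100, 120, 45, 1]  # example voxel index (homogeneous)
point_Ras = ijkToRas.MultiplyPoint(point_Ijk)
print(point_Ras[:3])

# If the volume is under a transform node, continue on to world coordinates
rasToWorld = vtk.vtkGeneralTransform()
slicer.vtkMRMLTransformNode.GetTransformBetweenNodes(
    volumeNode.GetParentTransformNode(), None, rasToWorld)
print(rasToWorld.TransformPoint(point_Ras[:3]))
```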

Sure, you can load the volume and then export it to a NRRD file. If you use slicer.util.exportNode instead of slicer.util.saveNode, you have the convenient option of saving in the world coordinate system, i.e., hardening any acquisition transforms (to handle varying slice spacing, non-parallel slices, etc.).
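For example (placeholder node name and output path; exportNode is available in recent Slicer versions, and the world argument is what controls whether parent/acquisition transforms are hardened into the output):

```python
import slicer

volumeNode = slicer.util.getNode("CTVolume")  # example node name

# saveNode writes the node as stored in the scene;
# exportNode can write it in the world coordinate system instead.
slicer.util.exportNode(volumeNode, "/tmp/ct_world.nrrd", world=True)
```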

See the example in the script repository.

Why do you want to replicate Slicer’s logic? Have you considered using Slicer instead?

I don’t think the legacy .vtk format can store image directions. The XML-based VTK image format (.vti) can, but it is a complex file format that only VTK-based applications can read and write, so I would not recommend using it. Instead, use the .nrrd format.
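As a sanity check after exporting, you can read the .nrrd back and confirm that the geometry survived; note that ITK/SimpleITK always report it in LPS, so remember the sign flip when comparing with the RAS values shown in Slicer:

```python
import SimpleITK as sitk

img = sitk.ReadImage("/tmp/ct_world.nrrd")  # file exported from Slicer (placeholder path)

# ITK/SimpleITK report geometry in LPS; negate X and Y to compare with RAS in Slicer
print("origin   :", img.GetOrigin())
print("spacing  :", img.GetSpacing())
print("direction:", img.GetDirection())
```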