I am working on a command-line module for 3D Slicer in C++. I am trying to convert a VTK image to an ITK image while maintaining the proper coordinate systems, so that the images line up in 3D space. I am having a slight issue that I was hoping someone could help me with; let me outline it below:
First, I read in a DICOM series using vtkDICOMImageReader as follows:
std::string dicomFolder = "D:/images";
vtkSmartPointer<vtkDICOMImageReader> DICOMreader = vtkSmartPointer<vtkDICOMImageReader>::New();
DICOMreader->SetDirectoryName(dicomFolder.c_str());
DICOMreader->Update();
// the reader's output is the vtkImageData that gets handed to ITK below
vtkSmartPointer<vtkImageData> imageVTK = DICOMreader->GetOutput();
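For reference, the ImageType and VectorType used below are defined along these lines (the pixel type shown is a placeholder; use whatever matches the series):
#include "itkImage.h"
#include "itkVector.h"

typedef itk::Image<short, 3>   ImageType;   // placeholder pixel type
typedef itk::Vector<double, 3> VectorType;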
Now that I have the VTK image data, I use itk::VTKImageToImageFilter to convert it to an ITK image as follows:
typedef itk::VTKImageToImageFilter<ImageType> VTKToITKFilterType;
VTKToITKFilterType::Pointer vtkToItkFilter = VTKToITKFilterType::New();
vtkToItkFilter->SetInput(imageVTK);
vtkToItkFilter->Update();
ImageType::Pointer imageITK = vtkToItkFilter->GetOutput();
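As a quick sanity check, the geometry reported by the reader can be compared against what the vtkImageData itself carries and what comes out of the conversion; a small diagnostic sketch:
double* vtkOrigin  = imageVTK->GetOrigin();
double* vtkSpacing = imageVTK->GetSpacing();
float*  ipp        = DICOMreader->GetImagePositionPatient();   // Image Position (Patient) from the DICOM header
std::cout << "VTK origin:               " << vtkOrigin[0]  << " " << vtkOrigin[1]  << " " << vtkOrigin[2]  << std::endl;
std::cout << "VTK spacing:              " << vtkSpacing[0] << " " << vtkSpacing[1] << " " << vtkSpacing[2] << std::endl;
std::cout << "ImagePositionPatient:     " << ipp[0] << " " << ipp[1] << " " << ipp[2] << std::endl;
std::cout << "ITK origin after convert: " << imageITK->GetOrigin() << std::endl;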
Now I have the image as an itk::Image. If I look at the image data, it is flipped along Y and Z. I believe this is because of the difference in how VTK and ITK define the origin, but I am not 100% certain. Regardless, if I flip the ITK image as follows, the image data is no longer flipped:
itk::FixedArray<bool, 3> flipAxes;
flipAxes[0] = false;
flipAxes[1] = true;
flipAxes[2] = true;

typedef itk::FlipImageFilter<ImageType> FlipImageFilterType;
FlipImageFilterType::Pointer flipFilter = FlipImageFilterType::New();
flipFilter->SetInput(imageITK);
flipFilter->SetFlipAxes(flipAxes);
flipFilter->Update();            // without this the output buffer is never generated

imageITK = flipFilter->GetOutput();
imageITK->DisconnectPipeline();  // keep later metadata edits from being overwritten by a pipeline re-execution
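One thing worth noting: as I understand the itk::FlipImageFilter documentation, it also has a FlipAboutOrigin setting that determines whether the flip happens about the origin of the coordinate axes or about the image itself, so it affects where the flipped output lands in physical space. Toggling it is trivial:
// Affects how the output origin is computed for the flipped axes;
// worth comparing both settings if the flipped volume ends up shifted in physical space.
flipFilter->FlipAboutOriginOff();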
At this point I realize that the spacing information has been lost during the flip. Furthermore, the origin and direction cosines need to be set from the original VTK reader and image:
//set image spacing
imageITK->SetSpacing(imageVTK->GetSpacing());
//set image origin
imageITK->SetOrigin(DICOMreader->GetImagePositionPatient());
//Set the direction cosines.....
float* imageOrientationPatient = DICOMreader->GetImageOrientationPatient();
VectorType xVector;
xVector[0] = imageOrientationPatient[0];
xVector[1] = imageOrientationPatient[1];
xVector[2] = imageOrientationPatient[2];
VectorType yVector;
yVector[0] = imageOrientationPatient[3];
yVector[1] = imageOrientationPatient[4];
yVector[2] = imageOrientationPatient[5];
//The Z vector needs to be calculated. It is the cross product of the X and Y vectors.
VectorType zVector;
zVector[0] = (xVector[1] * yVector[2]) - (xVector[2] * yVector[1]);
zVector[1] = (xVector[2] * yVector[0]) - (xVector[0] * yVector[2]);
zVector[2] = (xVector[0] * yVector[1]) - (xVector[1] * yVector[0]);
ImageType::DirectionType imageDirection;
// Each column of the ITK direction matrix holds the direction cosines of one image axis,
// so the X and Y vectors go into columns 0 and 1 and the computed normal into column 2.
imageDirection[0][0] = xVector[0];
imageDirection[1][0] = xVector[1];
imageDirection[2][0] = xVector[2];
imageDirection[0][1] = yVector[0];
imageDirection[1][1] = yVector[1];
imageDirection[2][1] = yVector[2];
imageDirection[0][2] = zVector[0];
imageDirection[1][2] = zVector[1];
imageDirection[2][2] = zVector[2];
imageITK->SetDirection(imageDirection);
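As an aside, since VTK is already linked into the module, the slice normal could just as well come from vtkMath::Cross instead of the hand-written cross product above (purely a convenience):
#include "vtkMath.h"

double rowDir[3] = { xVector[0], xVector[1], xVector[2] };
double colDir[3] = { yVector[0], yVector[1], yVector[2] };
double normal[3];
vtkMath::Cross(rowDir, colDir, normal);   // normal = rowDir x colDir, equivalent to zVector above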
Now that everything is set, I write the image out to disk in NRRD format and load it into Slicer, but the origin does not match my original image. If I call imageITK->GetOrigin() the origin is correct, yet when I write the image out as an NRRD file the origin gets changed. I believe there is something I am not fully grasping about the differences between VTK and ITK: how they handle image data, how they handle coordinate systems, and how they define the origin. I really want to make sure I fully understand these nuances. Can anyone help me out?
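For completeness, the write step is sketched below (assuming the standard itk::ImageFileWriter route; the filename is just a placeholder, and the .nrrd extension is what selects the NRRD IO):
#include "itkImageFileWriter.h"

typedef itk::ImageFileWriter<ImageType> WriterType;
WriterType::Pointer writer = WriterType::New();
writer->SetFileName("output.nrrd");   // placeholder path
writer->SetInput(imageITK);
try
{
  writer->Update();
}
catch (itk::ExceptionObject & err)
{
  std::cerr << "Write failed: " << err << std::endl;
}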
I do realize that I could just load the images in ITK and never have the images in VTK format, but for this specific module there is a reason why I need to do it this way…