I am working on writing a command-line module for 3D Slicer in C++. I am trying to convert a VTK image to an ITK image while maintaining the proper coordinate systems, so the images line up in 3D space. I am having a slight issue that I was hoping someone could help me with. Let me outline it below:
First I read in a DICOM set using vtkDICOMImageReader as follows:
std::string dicomFolder = "D:/images";
vtkSmartPointer<vtkImageData> imageVTK = vtkSmartPointer<vtkImageData>::New();
vtkSmartPointer<vtkDICOMImageReader> DICOMreader = vtkSmartPointer<vtkDICOMImageReader>::New();
DICOMreader->SetDirectoryName(dicomFolder.c_str());
DICOMreader->Update();
imageVTK = DICOMreader->GetOutput();
Now that I have the VTK image data, I use itk::VTKImageToImageFilter to convert it to ITK as follows:
typedef itk::VTKImageToImageFilter< ImageType > itkToVTKFilterType;
itkToVTKFilterType::Pointer itkToVtkFilter = itkToVTKFilterType::New();
itkToVtkFilter->SetInput(imageVTK);
itkToVtkFilter->Update();
ImageType::Pointer imageITK = itkToVtkFilter->GetOutput();
Now I have the image as an itk::Image. If I look at the image data, the image is flipped along the Y and Z axes. I believe this is because of the difference between how VTK and ITK define the origin, but I am not 100% certain. Regardless, if I flip the ITK image as follows, the image data is no longer flipped:
itk::FixedArray<bool, 3> flipAxes;
flipAxes[0] = false;
flipAxes[1] = true;
flipAxes[2] = true;
typedef itk::FlipImageFilter<ImageType> FlipImageFilterType;
FlipImageFilterType::Pointer flipFilter = FlipImageFilterType::New();
flipFilter->SetInput(imageITK);
flipFilter->SetFlipAxes(flipAxes);
flipFilter->Update();
imageITK = flipFilter->GetOutput();
At this point I realized that the spacing information had been lost during the flip. Furthermore, the origin and direction cosines need to be set from the original VTK image:
//set image spacing
imageITK->SetSpacing(imageVTK->GetSpacing());
//set image origin
imageITK->SetOrigin(DICOMreader->GetImagePositionPatient());
//set the direction cosines
float* imageOrientationPatient = DICOMreader->GetImageOrientationPatient();
VectorType xVector;
xVector[0] = imageOrientationPatient[0];
xVector[1] = imageOrientationPatient[1];
xVector[2] = imageOrientationPatient[2];
VectorType yVector;
yVector[0] = imageOrientationPatient[3];
yVector[1] = imageOrientationPatient[4];
yVector[2] = imageOrientationPatient[5];
//The Z vector needs to be calculated. It is the cross product of the X and Y vectors.
VectorType zVector;
zVector[0] = (xVector[1] * yVector[2]) - (xVector[2] * yVector[1]);
zVector[1] = (xVector[2] * yVector[0]) - (xVector[0] * yVector[2]);
zVector[2] = (xVector[0] * yVector[1]) - (xVector[1] * yVector[0]);
ImageType::DirectionType imageDirection;
imageDirection[0][0] = xVector[0];
imageDirection[1][0] = xVector[1];
imageDirection[2][0] = xVector[2];
imageDirection[0][1] = yVector[0];
imageDirection[1][1] = yVector[1];
imageDirection[2][1] = yVector[2];
imageDirection[0][2] = zVector[0];
imageDirection[1][2] = zVector[1];
imageDirection[2][2] = zVector[2];
imageITK->SetDirection(imageDirection);
Now that everything is set, I write this image out to disk in NRRD format and load it into Slicer. The origin does not match my original image. If I call imageITK->GetOrigin() the origin is correct, but when I write it out as an NRRD file the origin gets changed. I believe there is something I am not fully grasping about the differences between VTK and ITK: how they handle image data, how they handle coordinate systems, and how they define the origin. I really want to make sure I fully understand these nuances. Can anyone help me out?
I do realize that I could just load the images in ITK and never have the images in VTK format, but for this specific module there is a reason why I need to do it this way…