Anyone know how to read Analyze ObjectMap files?

@Ron Alkalay and I are trying to work with some segmentations generated by Analyze. He was able to export them in .hdr/.img format, but the pixel-to-world (IJKToRAS) matrix is identity, so the data doesn't match the CT as shown below.

Anonymized data is here: object-map-debug - Google Drive in case anyone wants to give it a try or knows any conventions about how it's encoded.

I also tried this code: objparser/objparser.py at master · pkostandy/objparser · GitHub

but the headers don’t seem to have any geometry information either.
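For anyone else poking at these files: the object-map header does start with a handful of big-endian integers (version, dimensions, object count), and notably none of them carry spacing or orientation. A minimal sketch of pulling those fields out with `struct`, assuming the layout used by the objparser code (verify against your own files):

```python
import struct

def read_objectmap_header(raw: bytes):
    """Parse the leading fields of an Analyze object-map header.

    Assumes the layout used by objparser: five big-endian ints
    (version, width, height, depth, number of objects). None of
    these fields carry spacing or orientation - that must come
    from the .hdr of the paired source image.
    """
    version, width, height, depth, nobjects = struct.unpack(">5i", raw[:20])
    return {"version": version, "width": width, "height": height,
            "depth": depth, "nobjects": nobjects}

# Round-trip on synthetic bytes with the same assumed layout
fake = struct.pack(">5i", 880102, 512, 512, 120, 5)
hdr = read_objectmap_header(fake)
```

This just confirms the point above: the header tells you the grid size and how many labeled objects there are, but nothing about geometry.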

He has over a hundred of these, and worst case would be to just line it up manually somehow, or maybe it could be semi-automated, but we’d rather figure out how to either read this data or export it another way from Analyze where it looks like this:

image

image

Maybe you could use the CT data exported from Analyze (e.g. as NRRD, if possible) to run an Elastix registration (or any other image registration algorithm) against the same CT image imported into Slicer from the DICOM data.

Hope it helps

I haven’t touched Analyze since I switched to Slicer a decade ago, but from what I recall, during import there was an option to specify the slice orientation (or ordering, perhaps), with options like transverse, coronal, and sagittal. So basically you can import the same image stack in different orientations, and I believe this wasn’t written to the file header. At least this was the case with the microCT scan sequences we had; I don’t know about DICOM or clinical images.

Try flipping in all cardinal directions, and see if it lines up. From the picture, the segmentation seems upside down (IS axis) and facing opposite direction (AP) with respect to the volume. So perhaps it is an easy flip of axes.
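If you want to be systematic about the trial-and-error, there are only eight sign combinations across the three axes. A quick way to enumerate the candidate flip matrices in plain Python (no Slicer API assumed):

```python
from itertools import product

def flip_matrices():
    """Yield the eight 4x4 diagonal sign matrices (as nested lists)
    that flip a volume along any combination of the three cardinal
    axes; the identity comes out first."""
    for sx, sy, sz in product((1, -1), repeat=3):
        yield [[sx, 0, 0, 0],
               [0, sy, 0, 0],
               [0, 0, sz, 0],
               [0, 0, 0, 1]]

flips = list(flip_matrices())
```

The idea would be to multiply each candidate into the label volume's IJKToRAS and eyeball which one lines up with the CT.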

Hey @pieper !

Analyze Object Maps don’t contain any orientation nor spacing information, they are designed to be paired up with an original volume. I believe they are stored in RPI orientation, but using Analyze’s left-handed coordinate system. If you flip the object map in Y or do the coordinate system conversion to map from a left-handed, RPI orientation into (what looks to be) right-handed LPS coordinate system, it should line up correctly.
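In case it helps, the Y-flip described above is just a reversal of the second index of the voxel array (with numpy that would be `vol[:, ::-1, :]`). A tiny sketch on a nested-list volume, just to make the operation concrete:

```python
def flip_y(volume):
    """Reverse the Y (row) axis of a volume stored as
    volume[z][y][x]. The left-handed-RPI to LPS conversion
    described above amounts to this single axis reversal."""
    return [list(reversed(slab)) for slab in volume]

# 1 slice, 2 rows, 2 columns
vol = [[[1, 2],
        [3, 4]]]
flipped = flip_y(vol)  # rows swapped within the slice
```

Whether Y is really the axis to flip depends on the pairing convention of your particular export, so this is the mechanism, not a guarantee.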

Once upon a time, ITK could read the ObjectMap format (I couldn’t attach the code :frowning_face: ). The ITK Doxygen site still references the code (see here, here, and here), though I’m not sure if that is just Doxygen cruft or not.

Edit: link rot got to Hans’ ITK Journal article, not sure where the original code might be found. I have a copy if you’d like to DM me.

This is a short snippet of how to read ObjectMap files:

        // Try to load the objectmap
        typedef signed short PixelType;
        typedef itk::Image<PixelType, 3> ImageType;
        typedef itk::Image<itk::RGBPixel<PixelType>, 3> RGBImageType;
        typedef itk::ImageFileReader<ImageType> ReaderType;
        typedef itk::AnalyzeObjectMap<ImageType, RGBImageType> ObjectMapType;

        itk::AnalyzeObjectLabelMapImageIOFactory::Pointer metaFactory = itk::AnalyzeObjectLabelMapImageIOFactory::New();
        itk::ObjectFactoryBase::RegisterFactory(metaFactory);

        logger->debug("Starting to read");
        ReaderType::Pointer Reader = ReaderType::New();
        Reader->SetFileName(this->mFilename.c_str());


...

        ObjectMapType::Pointer itkObjectMap = ObjectMapType::New();
        itkObjectMap->ImageToObjectMap(Reader->GetOutput());

...
        // get info from the first object
        itk::AnalyzeObjectEntry::Pointer original = itkObjectMap->GetObjectEntry(0);

...

        // Loop over the ITK objects and copy out each name and color
        for (int i = 0; i < itkObjectMap->GetNumberOfObjects(); i++) {
            itk::AnalyzeObjectEntry::Pointer entry = itkObjectMap->GetObjectEntry(i);
            ObjectEntry e;
            e.Name = entry->GetName();
            e.Red = entry->GetEndRed();
            e.Green = entry->GetEndGreen();
            e.Blue = entry->GetEndBlue();
            e.Alpha = entry->GetBlendFactor();
        }

Super helpful @blezek - let me chew on this a bit!

There are some options in ITK’s NIfTI IO to control the Analyze75Flavor, specifically the axis orientation (AFAIK).
This might be useful in addition to the post above (perhaps the behavior of the NIfTI IO has changed since 2009).

#include <itkImage.h>
#include <itkImageFileReader.h>
#include <itkImageFileWriter.h>
#include <itkNiftiImageIO.h>
#include <sstream>
#include <iostream>

int main(int argc, char ** argv)
{
	if (argc < 4)
	{
		std::cout <<
			"3 arguments are required\n"
			"    input file name (Analyze),\n"
			"    output file name (e.g. out.nrrd)\n"
			"    Analyze75Flavor (0 - 4)"
				<< std::endl;
		return 0;
	}

	using ImageTypeUC = itk::Image<unsigned char, 3>;

	std::stringstream ss(argv[3]);
	int argv3;
	ss >> argv3;

	itk::NiftiImageIO::Pointer io = itk::NiftiImageIO::New();
	itk::ImageFileReader<ImageTypeUC>::Pointer reader =	itk::ImageFileReader<ImageTypeUC>::New();
	io->SetLegacyAnalyze75Mode(static_cast<itk::NiftiImageIOEnums::Analyze75Flavor>(argv3));
	const int analyse75Flavor = static_cast<int>(io->GetLegacyAnalyze75Mode());
	switch (analyse75Flavor)
	{
	case 0:
		std::cout << "Analyze75Flavor::AnalyzeReject" << std::endl;
		break;
	case 1:
		std::cout << "Analyze75Flavor::AnalyzeITK4Warning" << std::endl;
		break;
	case 2:
		std::cout << "Analyze75Flavor::AnalyzeSPM" << std::endl;
		break;
	case 3:
		std::cout << "Analyze75Flavor::AnalyzeFSL" << std::endl;
		break;
	case 4:
		std::cout << "Analyze75Flavor::AnalyzeITK4" << std::endl;
		break;
	default:
		std::cout << "Analyze75Flavor unknown" << std::endl;
		break;
	}
	reader->SetImageIO(io);
	reader->SetFileName(argv[1]);
	try
	{
		reader->Update();
	}
	catch (const itk::ExceptionObject & e)
	{
		std::cout << e.GetDescription() << std::endl;
		return 1;
	}
	
	itk::ImageFileWriter<ImageTypeUC>::Pointer writer =	itk::ImageFileWriter<ImageTypeUC>::New();
	writer->SetFileName(argv[2]);
	try
	{
		writer->SetInput(reader->GetOutput());
		writer->Update();
	}
	catch (const itk::ExceptionObject & e)
	{
		std::cout << e.GetDescription() << std::endl;
		return 1;
	}

	return 0;
}

Thanks very much for the advice everyone. :+1:

Based on the clues given here, we were able to go back to Analyze and export both the CT image and the object map as Analyze 7.5 .hdr/.img files with the same pixel dimensions. Loading those in Slicer, the A/P direction was flipped and the label volume had an identity IJKToRAS transform. Flipping in A/P and copying over the matrix with the Python code below gave this image:

# CT and object map, both exported from Analyze as 7.5 .hdr/.img
ctPath = "/opt/data/SlicerSpine/Muscle data/Muscle data/Muscle data/E15558S301.hdr"
objPath = "/opt/data/SlicerSpine/Muscle data/Muscle data/15558_ObjectMap.hdr"

ctNode = slicer.util.loadVolume(ctPath)
labelNode = slicer.util.loadLabelVolume(objPath)

# flip the A/P (second) axis of the CT geometry, then copy the matrix
# onto the label volume, which was loaded with an identity IJKToRAS
ijkToRAS = vtk.vtkMatrix4x4()
ctNode.GetIJKToRASMatrix(ijkToRAS)
ijkToRAS.SetElement(1, 1, -1 * ijkToRAS.GetElement(1, 1))
ctNode.SetIJKToRASMatrix(ijkToRAS)
labelNode.SetIJKToRASMatrix(ijkToRAS)

Converting to a segmentation and using Fill Between Slices gave this result, which looks promising. We have over 100 of these, so we hope to be able to train a MONAI Label model to segment these muscles.

I’ll also note that while the .hdr/.img files worked in the end, it looked like the same pixel data could be extracted using the objparser Python code. However, that header doesn’t include the pixel geometry, so you need the .hdr of the source image for the object map to be of use (the ITK code could probably also be used to get the pixels, but I didn’t have a chance to try). The object map file does include all the label identifiers, so ultimately I will be using that information to assign the proper names to the muscles.
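For the name-assignment step, pulling the label identifiers out of the object map can be done with plain `struct`-style byte slicing. This is only a sketch: the 20-byte header offset, the per-entry stride (`ENTRY_BYTES`), and the 32-byte name field are assumptions taken from reading the objparser code, so verify them against your own files:

```python
NAME_BYTES = 32    # assumed size of the name field at the start of each entry
ENTRY_BYTES = 152  # assumed total size of each object entry - verify!

def read_object_names(raw: bytes, nobjects: int, offset: int = 20):
    """Pull the null-terminated name out of each object entry.

    Layout assumptions (check against objparser / your files): a
    20-byte header is followed by nobjects fixed-size entries, each
    beginning with a NAME_BYTES name field.
    """
    names = []
    for i in range(nobjects):
        start = offset + i * ENTRY_BYTES
        field = raw[start:start + NAME_BYTES]
        names.append(field.split(b"\x00", 1)[0].decode("ascii"))
    return names

# Synthetic round-trip using the same assumed layout
entries = b"".join(name.ljust(ENTRY_BYTES, b"\x00")
                   for name in (b"Background", b"Psoas_L"))
names = read_object_names(b"\x00" * 20 + entries, 2)
```

The recovered names could then be applied to the corresponding segments in Slicer, e.g. via `segmentation.GetNthSegment(i).SetName(...)`.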
