Loading VivoQuant .rmha images into Slicer (some rough code provided)

Hello,

I adapted some nifty MATLAB code to read .rmha images into 3D Slicer. As I understand it, these are VivoQuant labelled ROI images.

The original code was written by Joseph Dalton Brook, who made it available in his Ph.D. thesis. Thanks mate! Full disclosure, ChatGPT did the grunt work as I don’t speak MATLAB very well. I’m amazed the code works at all but here we are. :smiley:

And here’s the metadata I was able to extract from one of our .rmha files:

print(dic)

{'ObjectType': 'ROI',
 'NDims': 3,
 'BinaryData': True,
 'BinaryDataByteOrderMSB': False,
 'CompressedData': 'RLE',
 'TransformMatrix': [-1, 0, 0, 0, -1, 0, 0, 0, -1],
 'Offset': [0, 0, 0],
 'CenterOfRotation': [0, 0, 0],
 'AnatomicalOrientation': 'LPS',
 'ElementSpacing': [0.020446, 0.020446, 0.020446],
 'DimSize': [490, 490, 587],
 'ElementType': <class 'numpy.uint8'>,
 'ROI[1]': 'Vis:red:1:0:127:255',
 'ROI[2]': 'Upper_Bone:magenta:1:0:127:255',
 'ROI[3]': 'Lower_Bone:green:1:0:127:255',
 'ROI[4]': 'Implant:blue:1:0:127:255',
 'PatientsName': '',
 'PatientID': '',
 'StudyDescription': '',
 'StudyDate': '',
 'StudyTime': '',
 'SeriesDescription': '',
 'SeriesDate': '',
 'SeriesTime': '',
 'ReferenceUID': '',
 'ElementDataFile': 'LOCAL'}
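
By the way, the ROI[n] values look like colon-separated fields where the first two are the segment name and a colour name; I don’t know what the remaining four fields mean, so this little helper (my own, not from the thesis code) just keeps them around:

def parse_roi_entries(header):
    """Extract (label_value, name, colour_name, extra_fields) from ROI[n] keys.

    Only the first two colon-separated fields are assumed to be the segment
    name and a colour name; the meaning of the rest is unconfirmed.
    """
    entries = []
    for key, value in header.items():
        if key.startswith('ROI[') and key.endswith(']'):
            label_value = int(key[4:-1])
            fields = value.split(':')
            entries.append((label_value, fields[0], fields[1], fields[2:]))
    return entries

# e.g. parse_roi_entries(dic)[0] -> (1, 'Vis', 'red', ['1', '0', '127', '255'])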

The 3D Slicer extension is very basic and simply loads the .rmha file as a volume. It works, but there are a few things I still need to look at:

  • I haven’t yet applied the named colours from the metadata to the different labels, but I’m pretty sure I can use the webcolors module for that. I’ll have a look at how other modules apply a discrete look-up table to volumetric data (see the colour-table sketch after this list), but any guidance here would be welcome.

  • I’m also discovering that whereas the original image (.mha or .mhd) may well have an origin offset, the .rmha’s offset is always 0, so I still need to think about what to do. Should I check whether the .rmha file’s directory contains a .mha/.mhd file from which I can copy the offset?

  • I’ve been reading up on past discussions of the volume orientation issues here and here… I’m still digesting the info, but I understand this is something I need to take into account to match the orientation of the .rmha to that of the supporting .mha/.mhd volume. I have a 'TransformMatrix': [-1, 0, 0, 0, -1, 0, 0, 0, -1] and an 'AnatomicalOrientation': 'LPS' (see the orientation sketch below).
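
For the colour idea in the first bullet, this is the kind of thing I have in mind: build a vtkMRMLColorTableNode from the parsed ROI entries, with webcolors translating colour names to RGB. This assumes the VivoQuant colour names match the CSS ones, which I still need to verify:

import slicer
import webcolors  # installable with slicer.util.pip_install("webcolors")

def create_rmha_color_node(roi_entries):
    """Build a discrete colour table from (label_value, name, colour_name, extra) tuples."""
    color_node = slicer.mrmlScene.AddNewNodeByClass(
        "vtkMRMLColorTableNode", "RMHA ROI colours")
    color_node.SetTypeToUser()
    color_node.SetNumberOfColors(max(e[0] for e in roi_entries) + 1)
    color_node.SetColor(0, "background", 0.0, 0.0, 0.0, 0.0)  # label 0 is transparent
    for label_value, name, colour_name, _extra in roi_entries:
        r, g, b = webcolors.name_to_rgb(colour_name)  # may raise if a name is not CSS
        color_node.SetColor(label_value, name, r / 255.0, g / 255.0, b / 255.0, 1.0)
    return color_node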
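
And for the last bullet, my current understanding (to be verified) is that the MetaImage TransformMatrix and Offset are expressed in LPS while Slicer works in RAS internally, so the first two axes need a sign flip before building the IJK-to-RAS matrix:

import numpy as np
import vtk

def ijk_to_ras_matrix(transform_matrix, spacing, offset):
    # Direction cosines from the header, in LPS. For a diagonal matrix like
    # [-1, 0, 0, 0, -1, 0, 0, 0, -1], row- vs column-major makes no difference.
    direction_lps = np.array(transform_matrix, dtype=float).reshape(3, 3)
    lps_to_ras = np.diag([-1.0, -1.0, 1.0])  # flip L->R and P->A
    direction_ras = lps_to_ras @ direction_lps
    origin_ras = lps_to_ras @ np.array(offset, dtype=float)
    m = vtk.vtkMatrix4x4()  # starts out as identity, so m[3][3] == 1
    for row in range(3):
        for col in range(3):
            m.SetElement(row, col, direction_ras[row, col] * spacing[col])
        m.SetElement(row, 3, origin_ras[row])
    return m

# volume_node.SetIJKToRASMatrix(ijk_to_ras_matrix(
#     dic['TransformMatrix'], dic['ElementSpacing'], dic['Offset']))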

So that’s where I’m at, and I have some very rough code I can share if anyone is interested.

I’ll check some other extensions for code I can borrow, and if I make any progress I’ll report back! :slight_smile:

Kind regards,
Egor


Nice work, thank you for your contribution!

Is the .rmha file a standard .mha file with some additional custom fields? In that case, you may use the ITK reader for loading the image data itself and you don’t need to worry about image orientation. You may be able to get the custom fields from the ITK reader as well.
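
For a quick test, something like this should tell you (the file name is a placeholder; if MetaIO does not understand the RLE-compressed payload then the read will fail, which already answers the question):

import SimpleITK as sitk

reader = sitk.ImageFileReader()
reader.SetFileName("example.rmha")  # placeholder
reader.ReadImageInformation()  # header only, no pixel data yet
for key in reader.GetMetaDataKeys():
    print(key, "=", reader.GetMetaData(key))  # check whether ROI[...] fields survive
image = reader.Execute()  # fails here if the compression is unsupported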

It would be important to provide at least one pair of image and corresponding segmentation for testing.

Webcolors would probably work, but the safest would be to check the .rmha specification for a complete list of colors (and maybe their RGB values).

Dear Andras,

I don’t think the metadata is all that different from that of a .mhd or .mha file (comparing the fields handled in vtkMetaImageReader.cxx with the dictionary I provided), but the RLE compression of the data is unique to the .rmha files and is not handled by the VTK reader.

All in all, there isn’t a lot of information available on the .rmha format. It’s mentioned in a couple of places in the VivoQuant manual and referred to as “VQ 3D ROI (.rmha)”.

The only source code I was able to track down was that of Joseph Dalton Brook in his Ph.D. thesis (page 133 onwards, with RLE decompression code on page 197).
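
In case it helps anyone, the run-length expansion itself is simple. This is only a sketch that assumes the common (count, value) byte-pair layout; the exact scheme VivoQuant uses should be checked against the thesis code on page 197:

import numpy as np

def rle_decode_uint8(raw, expected_voxels):
    # Assumed layout: alternating (run length, label value) byte pairs.
    pairs = np.frombuffer(raw, dtype=np.uint8).reshape(-1, 2)
    counts = pairs[:, 0].astype(np.intp)
    values = pairs[:, 1]
    decoded = np.repeat(values, counts)
    if decoded.size != expected_voxels:
        raise ValueError("decoded %d voxels, expected %d"
                         % (decoded.size, expected_voxels))
    return decoded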

I’ll check with my colleagues whether we can make a set of micro-CT images available (.mhd or .mha plus the corresponding VivoQuant .rmha file). While I’m at it, I’ll also check whether the colours in the webcolors module match the ones VivoQuant uses.

Cheers,
Egor

It seems that .rmha is probably a MetaImage volume with their custom tags for ROIs, which Slicer calls segmentations. It is similar to .seg.nrrd, which is a NRRD volume with some custom tags describing a Slicer segmentation object. So a file reader for .rmha should probably be implemented similarly to the .seg.nrrd file reader.


Hey @jamesobutler,

thanks for the clarification. Yes, segmentations!

The closest extension I found (in Python) that does something similar to what I want is ImportOsirixROI (Import Osirix ROI: Load Osirix ROI files as segmentation).

I will write a new extension based on it and see where it takes me.

Just wanted to add that I’m not disregarding your advice about the .seg.nrrd files, but so far I’ve only found slicerio, which doesn’t seem to be a loader the way ImportOsirixROI is.

Cheers,
Egor

@jamesobutler I see why you suggested looking at .seg.nrrd; the documentation on this topic is quite thorough.

3D volumes in NRRD (.nrrd or .nhdr) and Nifti (.nii or .nii.gz) file formats can be directly loaded as segmentation:

  • Drag-and-drop the volume file to the application window (or use menu: File / Add Data, then select the file)
  • In Description column choose Segmentation
  • Optional: if a color table (specifying name and color for each label value) is available then load that first into the application and then select it as Color node in the Options section. Specification of the color table file format is available here.
  • Click OK

Tip: To avoid the need to always manually select Segmentation, save the .nrrd file using the .seg.nrrd file extension. It makes Slicer load the image as a segmentation by default.

And further down, I see

Other image file formats can be loaded as labelmap volume and then converted to segmentation

OK, so the .rmha files are in fact labelmap volumes, with some specific metadata to indicate names and colours. Sorry, I’m still wrapping my head around the specific terminology and the corresponding code! :blush:
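
For my own notes, the scripted counterpart of those GUI steps seems to be roughly this (file names are placeholders; colorNodeID is one of the documented load properties):

import slicer

# Load the colour table first, then the image as a labelmap with it applied.
color_node = slicer.util.loadColorTable("rmha_rois.ctbl")  # placeholder
labelmap_node = slicer.util.loadLabelVolume(
    "labels.nrrd",  # placeholder
    properties={"colorNodeID": color_node.GetID()})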

Cheers,
Egor


Yes, the .rmha reader should work similarly to the .seg.nrrd reader in the sense that it should read labelmap volume files into a segmentation node, because a segmentation can carry more metadata (segment names and colors), can be visualized more nicely, and can be edited directly in Segment Editor. The implementation can import the labelmap node into a segmentation after the reading is completed. You can create a color node that specifies segment names and colors and use it during the labelmap-to-segmentation import, or you can import without a color node and fix up segment names and colors afterwards; a sketch follows below.
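
A minimal sketch of that flow, assuming a labelmap_node and a color_node have already been created (for example as in the earlier sketches in this thread):

import slicer

labelmap_node.CreateDefaultDisplayNodes()
# Attach the color node so the import picks up segment names and colors.
labelmap_node.GetDisplayNode().SetAndObserveColorNodeID(color_node.GetID())

segmentation_node = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLSegmentationNode", "RMHA segmentation")
slicer.modules.segmentations.logic().ImportLabelmapToSegmentationNode(
    labelmap_node, segmentation_node)
segmentation_node.CreateClosedSurfaceRepresentation()  # optional, for the 3D view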
