Offline US reconstruction with tracked frames and transformation matrix

I am working on a spatial calibration method for freehand ultrasound and would like to verify it by reconstructing the volume offline from the frames + tracked pose data + my derived calibration matrix.

I went through tutorial U-33 (Ultrasound volume reconstruction – offline) and realized I need a config file and a .mha file of tracked frames.

My question is: how do I create my own config file containing my 4×4 transformation matrix, and how do I create the .mha file from the US frames (.png) and the corresponding tracking data (.csv)?

I am still looking for relevant info in the community posts and the documentation, but any experience with creating the required XML and MHA files would help speed up my progress. Thanks in advance!

Here is a complete example, with a set of input files and corresponding command-line arguments.

You can create an mhd file by loading the png stack into 3D Slicer and saving it as a .mhd file. I would recommend using mhd instead of mha because it is easier to add metadata (the mhd text header and the binary pixel data are stored in two separate files).
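If you prefer scripting the conversion instead of going through 3D Slicer, here is a minimal sketch that writes the MetaImage text header (.mhd) and the raw pixel file directly. It assumes the grayscale frames are already loaded as uint8 NumPy arrays (reading the PNGs, e.g. with Pillow, is left out), and the function name is my own:

```python
import numpy as np

def write_mhd(frames, mhd_path, raw_path):
    """Write a stack of 2D grayscale frames as a MetaImage header (.mhd)
    plus a separate raw pixel file, per the MetaIO format."""
    vol = np.stack(frames).astype(np.uint8)   # shape: (slices, rows, cols)
    nslices, rows, cols = vol.shape
    header_lines = [
        "ObjectType = Image",
        "NDims = 3",
        "BinaryData = True",
        "BinaryDataByteOrderMSB = False",
        f"DimSize = {cols} {rows} {nslices}",  # MetaIO order: x y z
        "ElementType = MET_UCHAR",
        "ElementSpacing = 1 1 1",
        f"ElementDataFile = {raw_path}",       # must be the last header field
    ]
    with open(mhd_path, "w") as f:
        f.write("\n".join(header_lines) + "\n")
    vol.tofile(raw_path)                       # raw pixel data, z-major
```

Because the header is plain text, you can then open the .mhd in any editor and append the per-frame metadata fields.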


Thank you for the reply! I am trying to create the .mhd file. But do you have a link to the complete example you mentioned? I believe that would be very helpful!

The example is in the Volume reconstruction tool’s documentation page: Plus applications user manual: Volume reconstructor application (VolumeReconstructor)
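For orientation, a config might look something like the fragment below. This is a sketch, not a verified Plus config: the coordinate frame names, spacing, and compounding settings are placeholders to replace with your own, and the Matrix attribute takes the 16 values of your 4×4 calibration matrix in row-major order.

```xml
<PlusConfiguration version="2.0">
  <CoordinateDefinitions>
    <!-- Your 4x4 calibration matrix, 16 row-major values (placeholder numbers) -->
    <Transform From="Image" To="Probe"
               Matrix="0.1 0   0   0
                       0   0.1 0   0
                       0   0   0.1 0
                       0   0   0   1" />
  </CoordinateDefinitions>
  <VolumeReconstruction
    ImageCoordinateFrame="Image"
    ReferenceCoordinateFrame="Tracker"
    Interpolation="LINEAR"
    CompoundingMode="MEAN"
    FillHoles="OFF"
    OutputSpacing="0.5 0.5 0.5" />
</PlusConfiguration>
```

Compare against the example config shipped with the VolumeReconstructor application to get the exact attribute set for your Plus version.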

I learnt that the .mhd file stores the text header, while the binary pixel data is saved in a separate .raw file. But what kind of information should be written into the text header?

For example, if I have 10 successive frames, should I just write a 10-by-6 array of position data, one row per frame? Or should I add additional info, such as the transformation matrix, or something else?

See the specification in Plus Toolkit User Manual. You can find lots of example files here.
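To give a concrete picture of what goes in the header: a Plus-style tracked sequence combines the standard MetaIO image fields with per-frame transform fields. The sketch below is illustrative; the field values are placeholders, and the transform name (here ProbeToTracker) must match the coordinate frames used in your config:

```
ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
DimSize = 640 480 10
ElementType = MET_UCHAR
ElementSpacing = 1 1 1
UltrasoundImageOrientation = MF
Seq_Frame0000_ProbeToTrackerTransform = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1
Seq_Frame0000_ProbeToTrackerTransformStatus = OK
Seq_Frame0000_Timestamp = 0.000
Seq_Frame0000_ImageStatus = OK
...
ElementDataFile = TrackedSequence.raw
```

Each Seq_FrameNNNN_...Transform line holds the 16 values of that frame's 4×4 pose matrix in row-major order, so a 6-DOF pose would first be converted to a 4×4 matrix rather than stored as 6 numbers.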


I am trying to write the .mha file with the provided MATLAB code. I am stuck at the "element spacing" field. What does it mean? I found a similar attribute in the sample data called PixelDimensions. Do both of these mean the physical distance between adjacent pixels? If so, how can it be a 1-by-3 vector instead of 1-by-2?

See the documentation page that I linked above:

ElementSpacing: Used by the MetaIO image file format to store the pixel spacing. This field is not used by Plus and is kept only for compatibility with other software, as the spacing is defined per frame. Typical value is 1 1 1.

An image is always a 3D object, defined in 3D space, even if it contains just a single slice.

If someone defined the image in 2D, then the transformation from world to image space would become a projection matrix. Projection matrices are non-invertible, therefore you would lose the ability to automatically compute transforms between any two coordinate systems.
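A quick NumPy illustration of that point (my own example, not from the thread): a 4×4 homogeneous transform has full rank and can be inverted, whereas dropping the image's third axis leaves a rank-deficient projection with no inverse.

```python
import numpy as np

# A rigid 4x4 transform in 3D homogeneous coordinates: full rank, invertible.
rigid = np.eye(4)
rigid[:3, 3] = [10.0, 5.0, 2.0]          # arbitrary translation
assert np.linalg.matrix_rank(rigid) == 4
inverse = np.linalg.inv(rigid)           # exists, so chains resolve both ways

# Treating the image as 2D drops the z row, giving a 3x4 projection.
projection = np.delete(np.eye(4), 2, axis=0)
assert np.linalg.matrix_rank(projection) == 3  # rank-deficient: not invertible
```

This is why keeping even a single slice in 3D space preserves the ability to compute transforms between any two coordinate systems automatically.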


I was able to load the sequence of PNG files as a 3D volume into 3D Slicer following the example video on YouTube. But how do I add the motion sequence corresponding to the image sequence, and the 4-by-4 calibration matrix, as metadata? I have the motion sequence in Excel and the calibration matrix in a .txt file.

Once the metadata and the images are saved in the .mha file, I believe I can finish the volume reconstruction and rendering following the complete example for the Volume reconstructor application (VolumeReconstructor). Thanks!

Best regards,

You can convert the PNG stack to grayscale, save it in the .mhd or .nhdr file format (MetaImage or NRRD, with a separate text file for the header and a raw file for the pixel data), and then add the transforms from your Excel file to the header file in this format. You can find lots of example files here.
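As a sketch of that last step, assuming the Excel poses have already been exported as 4×4 matrices (e.g. NumPy arrays), the per-frame fields can be spliced into an existing .mhd header like this. The function name, the ProbeToTracker transform name, and the fixed 0.1 s timestamp step are my placeholders; use the names and timestamps from your own setup:

```python
import numpy as np

def add_transforms_to_mhd(mhd_in, mhd_out, transforms, name="ProbeToTracker"):
    """Insert Plus-style per-frame transform fields into a MetaImage header.
    Each Seq_FrameNNNN_<name>Transform field holds 16 row-major matrix values;
    all fields must precede ElementDataFile, which ends the header."""
    lines = open(mhd_in).read().splitlines()
    body = [l for l in lines if not l.startswith("ElementDataFile")]
    tail = [l for l in lines if l.startswith("ElementDataFile")]
    for i, m in enumerate(transforms):
        vals = " ".join(f"{v:g}" for v in np.asarray(m).ravel())
        body.append(f"Seq_Frame{i:04d}_{name}Transform = {vals}")
        body.append(f"Seq_Frame{i:04d}_{name}TransformStatus = OK")
        body.append(f"Seq_Frame{i:04d}_Timestamp = {i * 0.1:.3f}")  # placeholder
        body.append(f"Seq_Frame{i:04d}_ImageStatus = OK")
    with open(mhd_out, "w") as f:
        f.write("\n".join(body + tail) + "\n")
```

Keeping ElementDataFile as the final line is what makes the result a valid MetaImage header; everything before it is free-form key = value metadata that Plus can read back per frame.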