Creating .mha files

Hi,

I want to reconstruct the volume of a kidney, in real time, using a sequence of 2D ultrasound images.
The 2D images are obtained by a probe, and the position of the probe is given by the kinematics of the da Vinci robot. How could I write the .mha files that are needed for the reconstruction? Should I use Python or Matlab, or can Slicer do it more easily?

Thank you for your time,
Harris

Slicer with the SlicerIGT extension and the Plus toolkit is very well suited for this job. They provide not just offline volume reconstruction but also real-time tracked ultrasound visualization and live volume reconstruction. The SlicerIGT tutorial page is probably a good starting point.

Specification of sequence metafiles that can be used by Plus toolkit’s ultrasound volume reconstructor is available here. There are also many tracked ultrasound sequence files that you can use as examples.
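To give a feel for the format: a sequence metafile is an ordinary MetaIO image with extra per-frame custom fields. Below is a minimal sketch of a header; the ProbeToTracker transform name, the sizes, and the values are only illustrative, the exact fields are in the specification linked above.

```
ObjectType = Image
NDims = 3
DimSize = 640 480 100
ElementType = MET_UCHAR
BinaryData = True
BinaryDataByteOrderMSB = False
UltrasoundImageOrientation = MF
Seq_Frame0000_ProbeToTrackerTransform = 1 0 0 10.5 0 1 0 -4.2 0 0 1 80.3 0 0 0 1
Seq_Frame0000_ProbeToTrackerTransformStatus = OK
Seq_Frame0000_Timestamp = 0.0000
Seq_Frame0000_ImageStatus = OK
ElementDataFile = LOCAL
```

With `ElementDataFile = LOCAL` (the .mha variant), the raw pixel data of all frames follows the header in the same file; the per-frame fields repeat for Seq_Frame0001, Seq_Frame0002, and so on.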

If you want to take full advantage of the platform's real-time visualization and processing capabilities, then you first need to get the end-effector positions from the da Vinci system as a data stream synchronized with the ultrasound images. We are helping the VISE team at Vanderbilt develop a robust, high-performance solution for this. We'll probably revive the da Vinci interface in the Plus toolkit, which will allow acquisition of tracked ultrasound data, recording to file, live volume reconstruction, and streaming to 3D Slicer for interactive visualization. I think the plan is to make all this openly available.

The next step is spatial calibration of the tracked ultrasound (determining the transformation between the image coordinate system and the robot end effector's coordinate system). This can be done using Plus toolkit's fCal calibration application.
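To make the transform roles concrete: for every frame, the reconstructor needs a transform from image pixels to a fixed reference frame, composed from the fixed calibration matrix and the time-varying kinematics. A minimal numpy sketch (the matrix values are placeholders, and the ImageToProbe/ProbeToReference names just follow the usual Plus naming convention):

```python
import numpy as np

# Fixed 4x4 result of fCal spatial calibration: maps image (pixel) coordinates,
# including the pixel-to-mm scaling, into the end-effector ("probe") frame.
image_to_probe = np.eye(4)  # placeholder value

# Time-varying 4x4 pose of the end effector from the da Vinci kinematics,
# one per ultrasound frame.
probe_to_reference = np.eye(4)  # placeholder value

# Transform used to paste the 2D frame's pixels into the reconstructed volume:
image_to_reference = probe_to_reference @ image_to_probe

# A pixel at column u, row v of the 2D frame lands at this 3D position (mm):
u, v = 120, 45
position = image_to_reference @ np.array([u, v, 0.0, 1.0])
print(position[:3])
```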

Inaccuracy of tool position estimation from the robot kinematics may be significant, especially when the ultrasound probe shaft bends as it comes into contact with tissue. To compensate for this error, you might want to use an external electromagnetic or optical tracker (Plus toolkit already supports these), or use endoscopic camera-based tracking (Plus toolkit already provides 2D barcode-based tracking using the ArUco library, which might be applicable).

If you could arrange a visit to Vanderbilt or attend the project week in Boston, then you could probably learn quite quickly what's available already and how to use it.


Thank you for your help and your guidance.
As I am in London, I cannot attend the project week in Boston. I will send them an email and I hope they will help me.
Otherwise, do you think getting the US images and the tracked data and creating the .mha or .mhd files for the volume reconstruction could be done in real time?

If you write all the frames to a file, then the Plus volume reconstructor will need to read the file, paste all slices into a volume, then save the result to a file. This will probably take at least 5-10 seconds (depending a lot on how many frames you acquire and the resolution of the output volume). If you acquire ultrasound data in Plus (or stream tracked ultrasound data to Plus in real time), then you don't need to write the input images to file and Plus does not have to read them back; reconstruction can be done while data is acquired, so you only need to wait for writing/sending the reconstructed volume (which typically takes no more than a few seconds).

Regarding the second option, how can I acquire ultrasound data in Plus? As I read, in the config file I have to set up two devices: one for the US images and one for the tracking data. For the device with the tracking data I will only have a sequence of positions of the da Vinci end effector. Is there any guidance on how to write this part of the config file?

Plus will support da Vinci joint-encoder-based tracking in about a month. If you need a solution earlier, then contact the Vanderbilt group.
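In the meantime, for the overall shape of such a configuration: Plus device sets typically define one video device, one tracker device, and a virtual mixer that fuses the two streams. The sketch below is only an outline under those assumptions; the actual device types and attributes for your hardware are documented in the Plus device configuration pages.

```xml
<PlusConfiguration version="2.1">
  <DataCollection StartupDelaySec="1.0">
    <DeviceSet Name="Tracked BK ultrasound (sketch)" Description="Example outline only"/>
    <!-- Ultrasound image source, e.g. a Media Foundation framegrabber. -->
    <Device Id="VideoDevice" Type="MmfVideo">
      <DataSources>
        <DataSource Type="Video" Id="Video" PortUsImageOrientation="MF"/>
      </DataSources>
      <OutputChannels>
        <OutputChannel Id="VideoStream" VideoDataSourceId="Video"/>
      </OutputChannels>
    </Device>
    <!-- Position source; fill in the da Vinci device type once it is available. -->
    <Device Id="TrackerDevice" Type="...">
      <DataSources>
        <DataSource Type="Tool" Id="Probe"/>
      </DataSources>
      <OutputChannels>
        <OutputChannel Id="TrackerStream"/>
      </OutputChannels>
    </Device>
    <!-- Fuses images and transforms into one tracked-ultrasound stream. -->
    <Device Id="TrackedVideoDevice" Type="VirtualMixer">
      <InputChannels>
        <InputChannel Id="VideoStream"/>
        <InputChannel Id="TrackerStream"/>
      </InputChannels>
      <OutputChannels>
        <OutputChannel Id="TrackedVideoStream"/>
      </OutputChannels>
    </Device>
  </DataCollection>
</PlusConfiguration>
```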

Thank you again for all your help.

First, I would like to make the reconstruction offline. I have one sequence of pictures (.png format) for the ultrasound images and I want to convert them to an .mha file to provide as input to the Plus toolkit. What do you think would be the best way?
Would it be better to have them in another format? I collect them via a BK5000 ultrasound cart.


You can use Plus toolkit to acquire images directly in mha format.

If you have a license to use the OEM interface of your BK5000 system, then you may be able to acquire images directly from the BK software (see instructions here). If your system does not support this, then you can use a framegrabber (all devices that are compatible with Microsoft Media Foundation should work; see more information here).

I have collected some data, US images (png format) and positions of the end effector, and I would like to reconstruct the 3D volume offline from these data. When I collected the data I didn't know that the appropriate format is .mha, and now it is not easy to collect new data again. My question is: could I convert the image and position data into an .mha file to run a simulation in the Plus toolkit?

(I tried to convert a sequence of images to .mha via ImageJ, but now I can't combine it with the position data.)

That should be no problem at all. You can write the position data to a text file and copy it into the metaimage header using a text editor. If your text editor has trouble editing a file that contains binary data, then save the metaimage in mhd format (header and binary data in two separate files).

Alternatively, you can use the EditSequenceFile tool in the Plus toolkit (included in the installation package) to add transforms from a file to an existing sequence file, using the MIX operation.
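If you would rather script it than edit by hand, here is a rough Python sketch of the manual approach, assuming your positions are already available as 4x4 homogeneous matrices. The `frames` and `poses` variable names, the ProbeToTracker transform name, and the 30 fps timestamps are only assumptions; adjust them to your setup.

```python
import numpy as np

# Assumed inputs (hypothetical names): `frames` holds the grayscale images as
# one (numFrames, height, width) uint8 array, `poses` the matching 4x4 poses.
frames = np.zeros((100, 480, 640), dtype=np.uint8)  # replace with your images
poses = [np.eye(4) for _ in range(len(frames))]     # replace with your positions

with open("tracked.mhd", "w") as header:
    header.write("ObjectType = Image\n")
    header.write("NDims = 3\n")
    header.write(f"DimSize = {frames.shape[2]} {frames.shape[1]} {frames.shape[0]}\n")
    header.write("ElementType = MET_UCHAR\n")
    header.write("BinaryData = True\n")
    header.write("BinaryDataByteOrderMSB = False\n")
    header.write("UltrasoundImageOrientation = MF\n")
    for i, pose in enumerate(poses):
        values = " ".join(f"{v:g}" for v in pose.flatten())  # row-major, 16 values
        header.write(f"Seq_Frame{i:04d}_ProbeToTrackerTransform = {values}\n")
        header.write(f"Seq_Frame{i:04d}_ProbeToTrackerTransformStatus = OK\n")
        header.write(f"Seq_Frame{i:04d}_Timestamp = {i / 30.0:.4f}\n")  # assumed 30 fps
        header.write(f"Seq_Frame{i:04d}_ImageStatus = OK\n")
    # MetaIO expects ElementDataFile to be the last header entry.
    header.write("ElementDataFile = tracked.raw\n")

# Write the pixel data of all frames, one after another, to the .raw file.
frames.tofile("tracked.raw")
```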

Hi,

For the last 20 days I have been trying to synchronize the data from the da Vinci and the ultrasound images, and to compute the calibration matrix between the end effector and the image. Now I would like to make the 3D reconstruction. After our previous conversation, I tried writing the .mhd file manually and making the .raw file from a sequence of images, and it works.
How could I write the .mhd and .raw files using Matlab so that these files can be read by 3D Slicer to make the reconstruction?
I read that there is a Matlab function (write_mhd) and that MatlabBridge can read files (in nrrd format). Is this a correct way to get the 3D reconstruction, or would it be better to use the source code of 3D Slicer and not the toolkit?

Thank you for your time.

You can easily create mhd files in any programming language (the header is just a text file), and there is an nrrd writer in MatlabBridge that you can start from.
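Once the .mhd and .raw pair is written, you don't necessarily need MatlabBridge just to get the data in: you can load the file from 3D Slicer's Python console as a plain image volume (the path below is of course only an example):

```python
# Run in 3D Slicer's Python console: loads the mhd/raw pair as a volume node.
volumeNode = slicer.util.loadVolume("C:/data/tracked.mhd")
```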

If you used only Plus and Slicer for data collection and volume reconstruction, things should be much simpler and faster. The da Vinci interface was fixed up just this week and it should work well now. Please discuss the details with the Vanderbilt team directly.

I emailed the Vanderbilt team but they did not reply to me.

I created the .mhd file and the corresponding .raw file, and I followed the Ultrasound Volume Reconstruction in 3D Slicer tutorial using their configuration file. Despite the fact that I can see my US images moving in the scene (with Volume Reslice Driver), when I push the scout scan button in Plus Remote it does not create the volume. It records frames but it does not show the volume. What could be the problem?

Thank you.

Hi,

I have two questions:

  1. In Plus Remote, when I push the scout scan button it creates a volume that is saved in an .mha file. When I import this file in the Volume Rendering module it looks like parallel slices and the volume is repeated (maybe because I scanned more than once). How can I visualise the correct part of the volume?

  2. I read your manuscript about the volume reconstruction and I would like to know if there is a library that you used for the reconstruction algorithm or if you implemented it from scratch.

Thank you for your time,
Harris

@Sunderlandkyl can you advise?

@xaris_komninos have you tried using live volume reconstruction?
The scout scan is mainly used to quickly identify the region of interest for use in the live volume reconstruction, which performs a more comprehensive volume reconstruction.

The documentation on the volume reconstructor is here: http://perk-software.cs.queensu.ca/plus/doc/nightly/user/AlgorithmVolumeReconstruction.html

Recently the volume reconstruction code was migrated from Plus to the IGSIO library so that it can be used in other applications (https://github.com/IGSIO/IGSIO/)

I tried to use live volume reconstruction (after the scout scan) but it doesn't show the volume. I followed exactly the same steps as the tutorial.

I see that you have a ‘VolumeReconstructionTest’ project. Is this a way to set up my configuration file and my tracked ultrasound images in order to get the volume reconstruction? Will this volume be visualised or saved?

If you have a recorded sequence from Plus, you can reconstruct a volume from it with VolumeReconstructor.exe in Plus (essentially the same as VolumeReconstructionTest).
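The invocation looks roughly like this (option names may differ slightly between Plus versions; `VolumeReconstructor --help` lists the exact ones, and the file names here are just examples):

```
VolumeReconstructor --config-file=VolumeReconstruction.xml --source-seq-file=TrackedUltrasound.mha --output-volume-file=ReconstructedVolume.mha --image-to-reference-transform=ImageToReference
```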

Can you upload your Plus log file showing the volume reconstruction?

I have only used 3D Slicer. Do you mean I have to use fCal to record the sequence and try to reconstruct the volume?

I tried again in Slicer and it works.

I can see the rendered volume, but how can I export it? The only file that I have is the .mha file from the scout scan, which is not the volume that I want to show. Can I export the volume to use in other applications, or can I only visualise it?

Thanks a lot for your help!!
