3D Reconstructed Object Proportions and Element Spacing

Hello everybody! I have encountered a problem with volume proportions after 3D reconstruction. I acquire images with a Telemed US probe and track positions with a robot arm. I scan a handmade phantom with a small 3D-printed pyramid inside. The element spacing of the Telemed US probe is (0.2, 0.2, 0.2). When I load the .mhd file into 3D Slicer with ElementSpacing = 0.2 0.2 0.2, I get a very strange 3D reconstruction result.

When I load the same .mhd file and change ElementSpacing to 1 1 1, the 3D reconstruction works, but I get a very squeezed pyramid.

The 3D Reconstruction module also has a similar parameter called output spacing, but changing it doesn’t solve the problem.
P.S. My US probe is uncalibrated, but I hope to at least get the correct proportions without calibration.

Hi, it’s difficult to comment without seeing your complete transform hierarchy. Image sequences in Slicer are usually stored without any transformation or spacing, so the units are pixels. The ultrasound tracking calibration in your case would be something like Image-to-RobotEnd. This transform will have a scaling of 0.2, converting from pixels to mm. It would also position and rotate the image relative to the most distal part (RobotEnd) of the robot.
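To make this concrete, here is a minimal sketch of what such an Image-to-RobotEnd matrix could look like. The function name and the example rotation/offset are made up for illustration; only the idea (rotation columns scaled by 0.2 mm/px, translation in mm) comes from the description above.

```python
# Hypothetical sketch of an Image-to-RobotEnd calibration matrix:
# a 0.2 mm/px scale combined with the rigid pose of the image plane
# relative to the robot's end effector.
import numpy as np

def make_image_to_robotend(rotation, translation_mm, spacing_mm=0.2):
    """Build a 4x4 homogeneous matrix whose direction columns are the
    rotation columns scaled by the pixel spacing (pixels -> mm)."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(rotation, dtype=float) * spacing_mm
    T[:3, 3] = translation_mm  # image origin relative to the end effector, in mm
    return T

# Example: image plane aligned with end-effector axes, offset 50 mm along Z
M = make_image_to_robotend(np.eye(3), [0.0, 0.0, 50.0])
print(np.linalg.norm(M[:3, 0]))  # each of the first three columns has norm 0.2
```

With such a matrix, pixel coordinates are mapped into millimeters in the RobotEnd frame in one step, so the image itself can stay at spacing 1 1 1.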

Thank you for the response! I still haven’t been able to solve the problem. My TELEMED probe acquires images with 0.2 spacing. I have tried to do the 3D reconstruction in two ways, with different Element Spacings.

  1. I load the .mhd sequence file with Element Spacing = 0.2 0.2 0.2 into 3D Slicer:
    I get the following 2D view, where the US frame fills only a small part of the window:
    The region of interest has correct proportions.
    But after running 3D reconstruction, the volume isn’t rendered properly for some reason:

  2. When I load the .mhd file with Element Spacing = 1 1 1:
    The US frame fills up all the space of the 2D Red window.
    The region of interest is squeezed, and 3D reconstruction works (see the result in my first post); however, the volume has incorrect proportions.
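The numbers involved can be illustrated with a bit of arithmetic (this is an illustration of how spacing and transform scale interact, not a diagnosis of my setup): the physical size of a structure is pixel extent times image spacing times any scaling in the transform, so applying the 0.2 mm/px factor in two places shrinks everything by an extra factor of 0.2.

```python
# Rough arithmetic: physical size = pixels * image_spacing * transform_scale.
# The pyramid width in pixels is a made-up example value.
pixels = 100                      # hypothetical pyramid width in pixels
image_spacing = 0.2               # mm/px stored in the .mhd header
transform_scale = 0.2             # scale also baked into the transform

applied_once = pixels * image_spacing * 1.0               # scale applied once
applied_twice = pixels * image_spacing * transform_scale  # scale applied twice
print(applied_once, "mm vs", applied_twice, "mm")         # 5x size difference
```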

My transformation hierarchy looks like this:

Could you please check if ImageToReference is changing over time when you move the sequence browser time slider, and if the values of the ImageToReference matrix make sense? You can see the matrix in the Transforms module if you select ImageToReference. The first three columns should have a norm of 0.2, and they should be orthogonal. Also check that Image does not have any extra transform applied to it. You can do that by going to the Volumes module, selecting Image (h_l_man_pyr-Image), and expanding the “Volume Information” section. The third value of dimensions should be 1, spacing should be 1 everywhere, origin should be (0,0,0), and the IJK to RAS matrix should be the identity.
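The column-norm and orthogonality checks above are easy to automate; here is a small sketch (the example matrices are made up, and the function name is my own):

```python
# Sanity check for a 4x4 ImageToReference matrix: the first three columns
# should be mutually orthogonal and each have norm 0.2 (the pixel spacing).
import numpy as np

def check_image_to_reference(M, expected_norm=0.2, tol=1e-6):
    M = np.asarray(M, dtype=float)
    cols = M[:3, :3].T  # the three direction columns
    norms_ok = np.allclose(np.linalg.norm(cols, axis=1), expected_norm, atol=tol)
    # Orthogonality: pairwise dot products of the columns should be ~0.
    dots = [abs(np.dot(cols[i], cols[j]))
            for i in range(3) for j in range(i + 1, 3)]
    return bool(norms_ok and max(dots) < tol)

good = np.diag([0.2, 0.2, 0.2, 1.0])  # pure 0.2 scaling, identity rotation
bad = np.eye(4)                       # unit scaling: wrong for a 0.2 mm/px probe
print(check_image_to_reference(good), check_image_to_reference(bad))  # True False
```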

If you check all these and they all look correct, then something might be wrong with the recorded data or the ImageToProbe calibration (should be part of ImageToReference). If you are recording data with PLUS, could you attach or copy the contents of your PLUS config file here?

I checked all the points you mentioned; everything seems to be fine. I think the problem is with the maths, as I generate the transformations manually in my Python code instead of obtaining them from PLUS. You mentioned that the first three columns of the transformation matrix should have a norm of 0.2. Have I understood correctly that the transformation matrix should look like the equation below?
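That is, something of this general form, with \(R = (r_{ij})\) an orthonormal rotation matrix and \(t = (t_x, t_y, t_z)\) a translation in mm (this is my reading of your advice, not a formula from PLUS):

```latex
\mathrm{ImageToReference} =
\begin{pmatrix}
0.2\,r_{11} & 0.2\,r_{12} & 0.2\,r_{13} & t_x \\
0.2\,r_{21} & 0.2\,r_{22} & 0.2\,r_{23} & t_y \\
0.2\,r_{31} & 0.2\,r_{32} & 0.2\,r_{33} & t_z \\
0 & 0 & 0 & 1
\end{pmatrix}
```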
And the content of my PLUS config:

<PlusConfiguration version="2.1">
  <DataCollection StartupDelaySec="1.0">
    <DeviceSet Name="PlusServer: Telemed ultrasound device (CUSTOM)" Description="Broadcasting acquired video through OpenIGTLink"/>
    <!-- Self-closing tag; this is what the user sees in the PlusLauncher drop-down menu -->
    <Device Id="VideoDevice" Type="TelemedVideo">
      <!-- AcquisitionRate="15" FrameSize="1920 1080" VideoFormat="YUY2" CaptureDeviceId="0" -->
      <DataSource Type="Video" Id="Video" PortUsImageOrientation="UN"/>
      <!-- <DataSource Type="Video" Id="Video" PortUsImageOrientation="US_IMG_ORIENT_MF" ImageType="BRIGHTNESS"/> -->
      <!-- ImageType="RGB_COLOR" -->
      <OutputChannel Id="VideoStream" VideoDataSourceId="Video"/>
    </Device>
    <Device Id="CaptureDevice" Type="VirtualCapture" BaseFilename="RecordingTest.igs.mha" EnableCapturingOnStart="FALSE">
      <InputChannel Id="VideoStream"/>
      <Transform From="Image" To="Reference" Matrix="0.5 0 0 0  0 0.5 0 0  0 0 0.5 0  0 0 0 1"/>
    </Device>
  </DataCollection>
  <PlusOpenIGTLinkServer MaxNumberOfIgtlMessagesToSend="1" MaxTimeSpentWithProcessingMs="50" ListeningPort="18944" SendValidTransformsOnly="true" OutputChannelId="VideoStream">
    <Message Type="IMAGE"/>
    <Image Name="Image" EmbeddedTransformToFrame="Reference"/>
  </PlusOpenIGTLinkServer>
</PlusConfiguration>

In the PLUS config file, we never define ImageToReference. Image is rigidly linked to an ultrasound probe. But Reference is typically fixed to the patient. ImageToReference represents the motion of the image relative to the patient. It is always changing, so you cannot create a constant for it in the config file. That section is for constant transforms like ImageToProbe (because the ultrasound image is not moving relative to the ultrasound probe).

You say that you generate the transforms in your Python code and do not get them from PLUS. I’m not sure I understand how that is possible, but in that case you should just send the Image from PLUS without any transforms. Does your Python code obtain the tracking transforms directly from a tracker?
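If the robot poses do come straight from the robot controller, ImageToReference would be built by composing the chain in Python, something along these lines. All the matrix names and example values below are assumptions about this particular setup, not PLUS-defined transforms:

```python
# Hypothetical composition of the transform chain when the robot pose is
# read directly from the robot controller rather than from PLUS.
import numpy as np

def compose(*mats):
    """Multiply 4x4 homogeneous transforms left to right."""
    out = np.eye(4)
    for m in mats:
        out = out @ m
    return out

image_to_robotend = np.diag([0.2, 0.2, 0.2, 1.0])  # calibration: 0.2 mm/px scale
robotend_to_base = np.eye(4)                       # pose reported by the robot
robotend_to_base[:3, 3] = [100.0, 0.0, 50.0]       # e.g. end effector at (100, 0, 50) mm
base_to_reference = np.eye(4)                      # choice of Reference frame

image_to_reference = compose(base_to_reference, robotend_to_base, image_to_robotend)
print(image_to_reference[:3, 3])  # translation part, in mm
```

Note the order: the scale belongs only in the calibration matrix; the tracked poses should be rigid (rotation plus translation), so the composed matrix keeps a column norm of 0.2.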