Creating .mha files

I did not build Slicer from source. Do I have to build it in order to get the updated Slicer3dIGT module?

No, you do not need to download it.
I made the commit last night, so the fix should be available if you update your SlicerOpenIGTLink extension. (https://www.slicer.org/wiki/Documentation/Nightly/SlicerApplication/ExtensionsManager#Updating_installed_extensions)


It works. Thank you so much for your help!

Could I ask you some more questions?
I tried the IGSIO library and the VolumeReconstructorTest project, and it reconstructs the volume.

1) How can I render this volume in the same project? I tried VTK's renderer and it works, but I was wondering whether you use another way to render the volume instead of reading the .mha file, or whether there is a way, as in 3D Slicer, to render the volume at the same time as the reconstruction process.

I’m not sure I understand your question. Could you elaborate?

Sorry!

I used the IGSIO library to reconstruct a volume because I want to build an application in C++. Now I would like to render this volume (the .mha file). I found a way to do it using a VTK class that reads the .mha file and renders it, but I would like to avoid the reading step. Is there a way to render the "reconstructor object" (class vtkIGSIOVolumeReconstructor) directly?

I want to avoid saving the reconstructed volume and reading it back. After the reconstruction, I want to render it directly.

You could use: vtkIGSIOVolumeReconstructor::GetReconstructedVolume(vtkImageData* volume).

This should return the vtkImageData without writing it to file, so you can perform the volume rendering on that.

The SlicerIGSIO extension does this to reconstruct volume sequences.
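As a sketch, the in-memory rendering could look like the following, assuming a reconstructor that has already finished reconstruction and a standard VTK volume rendering pipeline (the function name and the opacity transfer function values are illustrative and depend on your image's scalar range):

```cpp
#include <vtkIGSIOVolumeReconstructor.h>
#include <vtkImageData.h>
#include <vtkNew.h>
#include <vtkPiecewiseFunction.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkRenderer.h>
#include <vtkSmartVolumeMapper.h>
#include <vtkVolume.h>
#include <vtkVolumeProperty.h>

// Sketch: render the reconstructed volume without any file I/O.
void RenderReconstructedVolume(vtkIGSIOVolumeReconstructor* reconstructor)
{
  // Grab the volume directly from the reconstructor (no .mha file).
  vtkNew<vtkImageData> volumeData;
  reconstructor->GetReconstructedVolume(volumeData);

  // Standard VTK volume rendering pipeline.
  vtkNew<vtkSmartVolumeMapper> mapper;
  mapper->SetInputData(volumeData);

  // Illustrative opacity ramp; adjust to your scalar range.
  vtkNew<vtkPiecewiseFunction> opacity;
  opacity->AddPoint(0.0, 0.0);
  opacity->AddPoint(255.0, 1.0);

  vtkNew<vtkVolumeProperty> property;
  property->SetScalarOpacity(opacity);
  property->SetInterpolationTypeToLinear();

  vtkNew<vtkVolume> volume;
  volume->SetMapper(mapper);
  volume->SetProperty(property);

  vtkNew<vtkRenderer> renderer;
  renderer->AddVolume(volume);

  vtkNew<vtkRenderWindow> renderWindow;
  renderWindow->AddRenderer(renderer);

  vtkNew<vtkRenderWindowInteractor> interactor;
  interactor->SetRenderWindow(renderWindow);

  renderWindow->Render();
  interactor->Start();
}
```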


Thank you very much for your help and your time!

Cheers!

Hi,

I used ArUco markers to track the probe. I printed some from http://chev.me/arucogen/
with the 4x4 dictionary. I did the data collection and tried to detect them using the ArUco library that I found in the PLUS GitHub repository, but it cannot detect them. I realized that it only detects classic ArUco markers and some other types. Do you know what I can do to detect ArUco markers from the 4x4 dictionary?

Thank you

The ArUco version in Plus supports 12 different dictionaries (see details and how to choose a dictionary here). I’m not sure what kind of markers that online generator creates, but you can easily create a variety of markers using the aruco_print_marker tool bundled with Plus, or just use the marker sheet that you can download from the link above.


Do you know if the ArUco library handles stereo cameras? We have a stereo camera from the da Vinci, and we would like to test whether we can improve the tracking using stereo instead of a single camera.

The main limitation of ArUco’s tracking is inaccuracy in estimating how far the marker is from the camera, as it relies entirely on measuring the marker size, so a single-pixel error in marker corner estimation can lead to millimeters or centimeters of error. With a stereo setup, you can estimate distance by triangulation instead, which should be much more accurate. Therefore, yes, I would expect that with a stereo camera you can significantly improve on ArUco’s single-camera tracking.

Which team at Vanderbilt?

Bob Webster’s group works on developing ultrasound guidance for the da Vinci robot using Slicer/SlicerIGT.

Hi,

I have 2 questions after working with IGSIO and 3D Slicer.

  1. Does 3D Slicer do any automatic scaling? From the tracker we obtain the position of the probe in meters, and although I multiply by 1000 to convert it to mm (to render the volume in Slicer), the volume appears in another part of the scene without being scaled.

  2. I tried to set the identity matrix as the ImageToProbe matrix, but I am getting an error. I realized that the error occurs because it cannot compute the output extent. Does Slicer have any limit on this?

  1. Where are you setting the dimensions and spacing of the image?
  2. What error are you seeing?
  1. In VolumeReconstructorTest.cxx, I set the “ProbeToReference” matrix for each frame, and “ImageToProbe” is set by reading the configuration file. I compute ProbeToReference from the tracker, in meters, so I multiply only the translation part of this matrix by 1000.
    The dimensions and spacing of the image are read from the configuration file.

  2. I am not getting an error message. It crashes after the output: “Set Volume Output extent”.
    As I cannot use the identity matrix for ImageToProbe, I use one of the matrices from your examples and it works, but it is not the transformation that I want.
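On the unit conversion: since only the translation column of a homogeneous transform carries length units, the meters-to-millimeters conversion touches just three elements, while the 3x3 rotation block stays untouched. A minimal sketch on a row-major `double[16]` (the layout vtkMatrix4x4 uses), with an illustrative function name:

```cpp
#include <cassert>
#include <cmath>

// Convert the translation part of a row-major 4x4 homogeneous
// transform (e.g. ProbeToReference) from meters to millimeters,
// in place. The 3x3 rotation block is unitless and must not be scaled.
void TranslationMetersToMm(double m[16])
{
  m[3]  *= 1000.0;  // x translation (row 0, column 3)
  m[7]  *= 1000.0;  // y translation (row 1, column 3)
  m[11] *= 1000.0;  // z translation (row 2, column 3)
}
```

For example, a translation of (0.05, 0.10, 0.20) m becomes (50, 100, 200) mm while the rotation stays unchanged.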

This is the error.

I think I solved the 2nd problem. I changed the “Output Spacing” from “0.5 0.5 0.5” to “1 1 1” and it is working.
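That fits with how spacing drives the output size: halving the spacing doubles the extent along each axis, i.e. 8x the voxels and memory. A rough back-of-the-envelope sketch (the real extent computation in IGSIO also accounts for the clip rectangle and the image-to-reference transforms):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Approximate voxel count along one axis for a given physical
// size (mm) and voxel spacing (mm/voxel).
int64_t AxisVoxels(double sizeMm, double spacingMm)
{
  return static_cast<int64_t>(std::ceil(sizeMm / spacingMm));
}

// Approximate total voxel count for an isotropic spacing.
int64_t TotalVoxels(double xMm, double yMm, double zMm, double spacingMm)
{
  return AxisVoxels(xMm, spacingMm)
       * AxisVoxels(yMm, spacingMm)
       * AxisVoxels(zMm, spacingMm);
}
```

For a 100 x 100 x 100 mm region, spacing “1 1 1” gives 1,000,000 voxels, while “0.5 0.5 0.5” gives 8,000,000.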

Could you help me with question 1?

Do you know the relation between mm and pixels in 3D Slicer?

https://www.slicer.org/wiki/Coordinate_systems
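In short, the image spacing is the mm-per-pixel factor: along an axis-aligned volume, position_mm = origin_mm + index * spacing_mm (Slicer’s IJK-to-RAS matrix generalizes this with direction cosines for rotated volumes). A minimal sketch of the one-axis case:

```cpp
#include <cassert>
#include <cmath>

// Map a voxel index to a physical position (mm) along one axis,
// assuming axis-aligned orientation (no direction matrix):
//   position_mm = origin_mm + index * spacing_mm
double IndexToMm(double originMm, double spacingMm, int index)
{
  return originMm + index * spacingMm;
}
```

With an origin of -50 mm and a spacing of 0.5 mm/pixel, voxel index 100 lands at 0 mm.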