Creating .mha files

Yes, I tried this. It is only a 4 kB file, and when I open it in a text editor, the image part of the file is full of NULL bytes. I tried loading it in Slicer and viewing it with volume rendering, but it does not work.

Did you enable volume rendering in Slicer for the loaded volume?

  • Open “Volume Rendering” module
  • Select loaded volume
  • Click on eye icon

Yes, I did exactly this and I cannot see the volume. Maybe it is a problem with the .mha file? It is very small.

Could you send me your Plus log?

I am trying to upload it but it says: Sorry, the file you are trying to upload is not authorized (authorized extensions: jpg, jpeg, png, gif). My file is .txt. Can I send it to you somewhere else?

I took a look at the files you sent me. Besides one minor comment on the config file (Compound=On should be CompoundMode=“MEAN”/“LINEAR”/etc.), there does appear to be a problem with the volume reconstruction.

I noticed that Plus sends/saves the correctly reconstructed volume, and then immediately overwrites it with an empty one.

I’ll investigate more and report back.

Thank you a lot for your help.

Cheers.

OK, I committed a fix to SlicerOpenIGTLink. Could you update and try again?

The fix is available in both the Stable and Nightly versions.

Do you mean I should download the module from https://github.com/openigtlink/SlicerOpenIGTLink and somehow use it from 3D Slicer?

I did not build Slicer from source. Do I have to, in order to install the updated Slicer3dIGT module?

No, you do not need to download it.
I made the commit last night, so the fix should be available if you update your SlicerOpenIGTLink extension. (https://www.slicer.org/wiki/Documentation/Nightly/SlicerApplication/ExtensionsManager#Updating_installed_extensions)


It works. Thank you very much for your help!!!

Could I ask you some more questions?
I tried the IGSIO library and the VolumeReconstructorTest project, and it reconstructs the volume.

1) How can I render this volume in the same project? I tried the VTK renderer and it works, but I was wondering if you use another way to render the volume instead of reading the .mha file, or whether, as in 3D Slicer, you render the volume at the same time as the reconstruction process.

I’m not sure I understand your question. Could you elaborate?

Sorry!

I used the IGSIO library to reconstruct a volume because I want to make an application in C++. Now I would like to render this volume (the .mha file). I found a way to do it using a VTK class that reads the .mha file and renders it, but I would like to avoid the “reading” step. Is there a way to render the “reconstructor object” (class vtkIGSIOVolumeReconstructor) directly?

I want to avoid saving the reconstructed volume and reading it back. After the reconstruction, I want to render it directly.

You could use: vtkIGSIOVolumeReconstructor::GetReconstructedVolume(vtkImageData* volume).

This should return the vtkImageData without writing it to file, so you can perform the volume rendering on that.

The SlicerIGSIO extension does this to reconstruct volume sequences.
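A minimal sketch of how that might look, assuming a vtkIGSIOVolumeReconstructor that has already been configured and has had all frames inserted (the function name `RenderReconstructedVolume` and the transfer-function values are illustrative; the rendering classes are standard VTK):

```cpp
// Sketch: render the reconstructed volume directly, without writing an .mha file.
// Assumes "reconstructor" is already configured and reconstruction has finished.
#include <vtkSmartPointer.h>
#include <vtkImageData.h>
#include <vtkSmartVolumeMapper.h>
#include <vtkVolume.h>
#include <vtkVolumeProperty.h>
#include <vtkColorTransferFunction.h>
#include <vtkPiecewiseFunction.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkIGSIOVolumeReconstructor.h>

void RenderReconstructedVolume(vtkIGSIOVolumeReconstructor* reconstructor)
{
  // Fetch the reconstructed image in memory (no file I/O).
  vtkSmartPointer<vtkImageData> volumeImage = vtkSmartPointer<vtkImageData>::New();
  reconstructor->GetReconstructedVolume(volumeImage);

  // Standard VTK volume-rendering pipeline.
  auto mapper = vtkSmartPointer<vtkSmartVolumeMapper>::New();
  mapper->SetInputData(volumeImage);

  // Illustrative grayscale transfer functions, assuming 8-bit scalar data.
  auto opacity = vtkSmartPointer<vtkPiecewiseFunction>::New();
  opacity->AddPoint(0.0, 0.0);    // transparent background
  opacity->AddPoint(255.0, 1.0);  // opaque bright voxels

  auto color = vtkSmartPointer<vtkColorTransferFunction>::New();
  color->AddRGBPoint(0.0, 0.0, 0.0, 0.0);
  color->AddRGBPoint(255.0, 1.0, 1.0, 1.0);

  auto property = vtkSmartPointer<vtkVolumeProperty>::New();
  property->SetScalarOpacity(opacity);
  property->SetColor(color);

  auto volume = vtkSmartPointer<vtkVolume>::New();
  volume->SetMapper(mapper);
  volume->SetProperty(property);

  auto renderer = vtkSmartPointer<vtkRenderer>::New();
  renderer->AddVolume(volume);

  auto window = vtkSmartPointer<vtkRenderWindow>::New();
  window->AddRenderer(renderer);

  auto interactor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
  interactor->SetRenderWindow(window);
  window->Render();
  interactor->Start();
}
```

If you call this periodically during reconstruction, you can also get live updates similar to Slicer, by refreshing the mapper input and re-rendering instead of starting a new interactor each time.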


Thank you very much for your help and your time!

Cheers!

Hi,

I used ArUco markers in order to track the probe. I printed some from http://chev.me/arucogen/
with the 4x4 dictionary. I did the data collection and tried to detect them using the ArUco library that I found in the Plus GitHub repository, but it cannot detect them. I realized that it only detects classic ArUco markers and some other types. Do you know what I can do to detect ArUco markers from the 4x4 dictionary?

Thank you

The ArUco version in Plus supports 12 different dictionaries (see details and how to choose a dictionary here). I’m not sure what kind of markers that online generator creates, but you can easily create a variety of markers using the aruco_print_marker tool bundled with Plus, or just use the marker sheet that you can download from the link above.
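For illustration, dictionary selection in the ArUco library looks roughly like this. This is a sketch, not Plus’s own tracking code; the dictionary name “ARUCO_MIP_36h12” and the `detect()` call follow the aruco 3.x API, and the input file name is hypothetical, so check against the ArUco version shipped with your Plus build:

```cpp
// Sketch: detect markers from an explicitly chosen ArUco dictionary.
#include <iostream>
#include <vector>
#include <aruco/aruco.h>
#include <opencv2/imgcodecs.hpp>

int main()
{
  aruco::MarkerDetector detector;
  // The detector only finds markers from the dictionary it is configured for;
  // markers printed from an unsupported dictionary are silently ignored.
  detector.setDictionary("ARUCO_MIP_36h12");

  cv::Mat image = cv::imread("frame.png");  // hypothetical input frame
  std::vector<aruco::Marker> markers = detector.detect(image);
  for (const aruco::Marker& m : markers)
  {
    std::cout << "Detected marker id " << m.id << std::endl;
  }
  return 0;
}
```

The important point is that the printed markers and the detector must come from the same dictionary, so print your markers with the tool bundled with Plus rather than a third-party generator.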


Do you know if the ArUco library handles stereo cameras? We have a stereo camera from the da Vinci, and we would like to test whether we can improve the tracking using stereo instead of a single camera.

The main limitation of ArUco’s tracking is inaccuracy in estimating how far the marker is from the camera: it relies entirely on measuring the apparent marker size, so a single pixel of error in marker corner estimation can lead to millimeters or centimeters of error. With a stereo setup, you can estimate distance by triangulation instead, which should be much more accurate. Therefore, yes, I would expect that with a stereo camera you can significantly improve on ArUco’s single-camera tracking.