3D Volume Reconstruction from 2D Ultrasound Slices

Operating system: Windows
Slicer version: 5.8.1
Expected behavior: volume reconstruction and rendering
Actual behavior: crash
Hi everyone,

I’m quite new to 3D Slicer and this community. I’m trying to reconstruct a 3D volume from a sequence of 2D ultrasound slices. For each slice, I have the corresponding position and orientation as a 4x4 transformation matrix.

I’m currently using the Volume Reconstruction module from the SlicerIGT extension. I started from the code example provided in this forum thread, replacing the synthetic rotations with my actual 4x4 transform matrices. However, my code keeps crashing, and I suspect I’m not setting up the nodes correctly.

At the moment, I’m using an .mha volume as input, but I also have the original stack as individual .png files.
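To make the question concrete, here is a minimal sketch of what each per-slice 4x4 pose encodes: it maps 2D pixel indices (i, j) of an ultrasound slice into 3D world coordinates. The matrix and the spacing values below are made up for illustration, not taken from my data.

```python
# Minimal sketch: apply a per-slice 4x4 homogeneous pose to a pixel index.
# The pose and pixel spacing here are illustrative values only.
import numpy as np

def pixel_to_world(pose_4x4, i, j, spacing_mm=(0.2, 0.2)):
    """Scale pixel (i, j) by the in-plane spacing, then apply the pose."""
    pixel_mm = np.array([i * spacing_mm[0], j * spacing_mm[1], 0.0, 1.0])
    return (np.asarray(pose_4x4, dtype=float) @ pixel_mm)[:3]

# Identity orientation, slice origin translated to (10, 20, 30) mm:
pose = np.array([[1, 0, 0, 10],
                 [0, 1, 0, 20],
                 [0, 0, 1, 30],
                 [0, 0, 0, 1]], dtype=float)
print(pixel_to_world(pose, 5, 5))  # -> [11. 21. 30.]
```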

My main questions:

Is it a good idea to start from the example linked above? Or would it be better to build everything from scratch?
Could someone help me understand how to correctly set up the transform and image nodes for volume reconstruction?

Any guidance or sample code would be greatly appreciated!

Thank you,

Volume reconstruction in SlicerIGT is very solid. The only known way it can crash is by running out of memory, which can happen if you set the output image spacing too small and/or the reconstructed region too large. To avoid this, you can increase the physical or virtual memory in your computer, increase the spacing, or reduce the reconstructed region.

I am looking for a user guide for 3D ultrasound volume reconstruction from pre-stored 2D NIfTI slices using SlicerIGT in 3D Slicer 5.10. I loaded the slices using Add Data and then invoked Volume Reconstruction from SlicerIGT, but the “Start” button for reconstruction remained grayed out.

The Volume Reconstruction module currently only works if the recorded data is stored in Slicer Sequences. To make it work with your data, you would need to write a Python script that creates a sequence browser node, adds sequences to the browser for the image node and for the tracking data (transform node), and loads your data into those containers.
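The steps above can be sketched roughly as follows. This is a sketch for Slicer's Python console, not a drop-in script: it assumes you have already loaded your slices as a list of volume nodes and your poses as 4x4 matrices, and the node names (`"UsBrowser"`, etc.) are illustrative. The `import` guard lets the helper run outside Slicer; the sequence-building function requires the Slicer application.

```python
# Sketch: package pre-recorded slices and poses into Slicer Sequences so the
# Volume Reconstruction module can consume them. Node names are illustrative.
import numpy as np

try:
    import slicer  # available only inside the 3D Slicer application
    import vtk
except ImportError:
    slicer = vtk = None

def pose_to_flat_list(matrix_4x4):
    """Flatten a 4x4 pose into the row-major 16-element list that
    vtkMatrix4x4.DeepCopy accepts."""
    m = np.asarray(matrix_4x4, dtype=float)
    assert m.shape == (4, 4), "expected a 4x4 homogeneous transform"
    return [float(v) for v in m.reshape(16)]

def build_reconstruction_inputs(slice_volume_nodes, pose_matrices):
    """Create a sequence browser with one image sequence and one transform
    sequence, paired item by item."""
    browser = slicer.mrmlScene.AddNewNodeByClass(
        "vtkMRMLSequenceBrowserNode", "UsBrowser")
    image_seq = slicer.mrmlScene.AddNewNodeByClass(
        "vtkMRMLSequenceNode", "ImageSeq")
    transform_seq = slicer.mrmlScene.AddNewNodeByClass(
        "vtkMRMLSequenceNode", "TransformSeq")
    for i, (slice_node, pose) in enumerate(zip(slice_volume_nodes, pose_matrices)):
        vtk_matrix = vtk.vtkMatrix4x4()
        vtk_matrix.DeepCopy(pose_to_flat_list(pose))
        transform_node = slicer.mrmlScene.AddNewNodeByClass(
            "vtkMRMLLinearTransformNode")
        transform_node.SetMatrixTransformToParent(vtk_matrix)
        image_seq.SetDataNodeAtValue(slice_node, str(i))       # index as string
        transform_seq.SetDataNodeAtValue(transform_node, str(i))
    # The browser synchronizes replay of both sequences for reconstruction.
    browser.AddSynchronizedSequenceNode(image_seq.GetID())
    browser.AddSynchronizedSequenceNode(transform_seq.GetID())
    return browser
```

Once the browser exists, selecting it as the input sequence in the Volume Reconstruction module should enable the Start button.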

Can you please elaborate or suggest links which could help me understand what it means and how to go about it? Thanks.

You just need to tell an LLM what format your data is currently in and what format you would like to convert it to. Most coding models, and even ChatGPT, almost always produce correct code for questions like “Write a python script for 3D Slicer that takes a 3D volume node and creates a sequence (time series) of 2D nodes from it.”
I would start with simple examples, perhaps handling only the images or only the transformations first.
If you are not familiar with Slicer coding terminology, you can start the conversation by asking an LLM to explain the relevant terms.
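For reference, a script answering the prompt quoted above might look roughly like this. It is a hedged sketch, not a definitive implementation: the Slicer-specific part runs only inside the application, and the sequence name is illustrative. The pure slicing helper is separated out so the logic is easy to follow.

```python
# Sketch: split a loaded 3D volume node into per-slice 2D frames and store
# them as items of a Slicer sequence node. Names are illustrative.
import numpy as np

try:
    import slicer  # only available inside 3D Slicer
except ImportError:
    slicer = None

def split_into_slices(volume_array):
    """Return the 2D frames of a (k, j, i) volume array along the first axis."""
    return [volume_array[k] for k in range(volume_array.shape[0])]

def volume_node_to_sequence(volume_node, sequence_name="SliceSeq"):
    """Create a sequence node whose items are single-slice volume nodes."""
    array = slicer.util.arrayFromVolume(volume_node)  # shape (k, j, i)
    seq = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSequenceNode", sequence_name)
    for k, frame in enumerate(split_into_slices(array)):
        slice_node = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode")
        # Re-add the depth axis so each item is a one-slice 3D volume.
        slicer.util.updateVolumeFromArray(slice_node, frame[np.newaxis, :, :])
        seq.SetDataNodeAtValue(slice_node, str(k))
    return seq
```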

I think it’s well worth doing the basic tutorials without AI help first, so you understand Slicer’s data structures (MRML) and how they are visualized: 3D Slicer Training Compendium | 3D Slicer

This is a good resource if you need script examples to learn from or provide them as context along with your question to your coding LLM: Script repository — 3D Slicer documentation