Issues with generating a model from ultrasound quasi-parallel slices

Hello all,

Apologies if this is something that has been answered before; I have been digging through old forum threads and scratching my head for quite some time now.
I am currently trying to render a 3D surface from a series of (unfortunately) relatively low-resolution deep ultrasound images.
I am currently able to:

  • import the image series as a volume, setting the spacing more or less correctly
  • go into volume rendering, display the volume, crop it properly

A side issue at this point is that with the normal “composite with shading” technique, all I see is what appears to be a completely opaque volume, regardless of what volume properties I set. However, switching to “maximum intensity projection” does allow me to see the elements of the volume in space (though in a particularly fuzzy way). Is this expected?

I then thought of using the Model Maker / Grayscale Model Maker modules, but neither of them lets me choose any volume as the “input volume”.

Thoughts on both questions?



Ultrasound images of real tissues are usually so noisy that 3D visualization requires segmentation. The only exception is when you image fluid-filled cavities (such as cardiac images) or phantom images.

See for example live 3D ultrasound volume reconstruction and visualization of a spine phantom in Slicer using the SlicerIGT extension.

Note that SlicerIGT provides real-time tracked 2D ultrasound display in Slicer and can reconstruct volumes from arbitrarily oriented and spaced images.

If you tell us more about your application (what anatomy you would like to visualize, for what purpose) then we can advise how to implement it in Slicer.

Thanks for the answer!
I had been looking at SlicerIGT, but as I do not have tracking equipment and only have access to recorded imagery, this makes it more difficult. In this case it is a liquid-filled cavity (intra-uterine trans-abdominal scans during early pregnancy). The purpose is to create an understandable surface from a convex-probe scan done from a single point with a constant-angle motion. As this is trans-abdominal but with lower-grade (older-generation) hardware, there is no tracking, only the ability to acquire image sequences.

Fetus in uterus should be possible to visualize without segmentation.

For $3000 you can get a good quality optical tracker (Optitrack Duo), but if you don’t want to invest that much then you can also use an orientation sensor (PhidgetSpatial, $150; provides orientation with sub-degree accuracy), or just attach a 2D barcode on your probe and track it using a webcam:

All these options are supported by SlicerIGT/Plus, no programming is needed.

Thank you very much for the detailed information. I will be looking into acquiring one of these trackers; the PhidgetSpatial actually does look like it would offer quite decent accuracy, along with proper SlicerIGT support.

Pending acquiring and testing this hardware, are there any workflows for going from a grayscale image volume to a point cloud to a surface (possibly marching cubes, but anything works)? The grayscale volume currently looks like it has decent enough spatial data, but I could be wrong.

Model generation from a grayscale volume: switch to the Segment Editor module, create an initial segmentation with the Threshold effect, then follow up with the Islands effect to remove speckle and the Smoothing effect to reduce noise. If you cannot find a threshold value in the Threshold effect that gives decent results, then use the Paint effect to paint a few strokes in the fluid with one segment and a few strokes in the fetus with another segment; then use Grow from seeds to create a complete segmentation.
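As a rough illustration of the grayscale volume → threshold → point cloud → surface idea outside of Slicer (a sketch, not the Segment Editor workflow described above): scikit-image's `marching_cubes` can mesh a thresholded volume directly. The synthetic sphere volume and the threshold value below are placeholders standing in for a real reconstructed ultrasound volume and a threshold you would tune by hand.

```python
import numpy as np
from skimage import measure

# Synthetic stand-in for a reconstructed ultrasound volume:
# a smooth blob of higher intensity in a 64^3 grid.
z, y, x = np.mgrid[:64, :64, :64]
volume = np.exp(-((x - 32)**2 + (y - 32)**2 + (z - 32)**2) / (2 * 12.0**2))

# Point cloud: voxel coordinates above an intensity threshold
# (the "Threshold effect" step, in its simplest form).
threshold = 0.5
points = np.argwhere(volume > threshold)  # (N, 3) array of voxel indices

# Surface: marching cubes at the same iso-level gives vertices and
# triangle faces that can be saved as a mesh (e.g. STL/PLY).
verts, faces, normals, values = measure.marching_cubes(volume, level=threshold)
print(points.shape, verts.shape, faces.shape)
```

On real ultrasound data the plain threshold would also pick up speckle, which is why the Slicer workflow follows thresholding with island removal and smoothing; the equivalent here would be morphological cleanup of the binary mask before meshing.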

A post was split to a new topic: Ultrasound slice is not aligned with reconstructed volume