Create a smooth segmented 3D model from PNG images


I have been using the Visible Human project TIFF images to segment out the spinal cord and DRG. The DRG is not clearly visible but I want to get an idea of the relative vertical levels of the DRG and the Spine, especially in the lumbosacral region.

However, the issue is that when I segment the images to create a 3D model, the model appears as a stack of discrete layers (one per segmented slice) rather than a continuous 3D surface. How do I get rid of this? Is it an issue with my segmentation, or something inherent in the quality of the images? I am attaching two pictures to demonstrate what I mean.

I have an additional question: the CT and MRI data from the VHP come with the extensions .fre and .t1/.t2 respectively. I am unable to load these in Slicer, neither via Add Data nor via the DICOM module. Are these formats not supported?

If you use the “Grow from seeds” effect, you need to click “Apply” once the previewed results are satisfactory.

Note that if you click the “Show 3D” button at the top of the effect list, it toggles the visibility of the current segments (the seeds that you painted). If you want to preview the grown segments in 3D, click the “Show 3D” button within the effect options section.


Sorry, I got confused.

I am segmenting the required slices and then clicking “Show 3D”.

I don’t see any other “Show 3D” option, and when I use Grow from seeds, there is no change in the visible 3D structure. (It doesn’t make any changes after I initialize.)

The “Show 3D” button for the result preview is next to the “Display” slider, on the right side. It becomes useful once you have painted seeds in the image that are used for computing the complete segmentation. See a very simple tutorial for segmenting a single object here:

By the way, the image appears to be distorted: the spacing along the I/S axis seems to be too low compared to the other axes. The image is also flipped along the I/S axis and the A/P axis.
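If you want to correct the flips on the voxel data outside of Slicer, here is a minimal NumPy sketch. It assumes the volume is already loaded as an array ordered (I/S, A/P, R/L); the axis order and which axes are flipped are assumptions that must be verified against your own data:

```python
import numpy as np

def fix_orientation(volume):
    """Un-flip a volume along its first two axes.

    Assumes axis 0 is I/S and axis 1 is A/P (check this against your data).
    """
    volume = np.flip(volume, axis=0)  # un-flip inferior/superior
    volume = np.flip(volume, axis=1)  # un-flip anterior/posterior
    return volume
```

Within Slicer itself, incorrect spacing is usually better corrected in the Volumes module (Image Spacing fields), since that also updates the image geometry metadata rather than touching the voxel array.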

You can load any uncompressed images into Slicer. For example, you can do that conveniently using the RawImageGuess extension.

However, the VHP server offers the data set in so many representations that I would recommend simply reading the images in a friendlier, more standard format.
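For headerless raw slices, reading directly with NumPy is also an option. A minimal sketch, where the width/height/channel values are illustrative placeholders (check the dimensions documented for your particular download):

```python
import numpy as np

# Assumed dimensions for a raw 24-bit RGB slice; verify against the
# documentation that accompanies your download before relying on them.
WIDTH, HEIGHT, CHANNELS = 2048, 1216, 3

def read_raw_slice(path, width=WIDTH, height=HEIGHT, channels=CHANNELS):
    """Read one headerless raw slice into a (height, width, channels) array."""
    data = np.fromfile(path, dtype=np.uint8)
    return data.reshape(height, width, channels)
```

Getting the width/height wrong produces a skewed or scrambled image, which is exactly the guessing game the RawImageGuess extension automates interactively.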

Once I paint the slices in all three views and click “Grow from seeds” and then “Initialize”, nothing happens. No 3D volume is created, and thus the “Show 3D” button next to the “Display” slider appears inactive.

I should also mention that I am painting across multiple slices.

But before I rectify any of those: could the incorrect orientation be the reason for this? If so, how do I reorient the images?

“Grow from seeds” (like most other effects) works in 3D, so it does not matter on which slices, or in what orientation, you draw the input seeds.

Make sure you have at least two segments: one segment for the structure you are interested in, and one “background” segment for everything else.
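To see why the background segment matters, here is a toy, pure-Python sketch of seeded region growing. This is a much-simplified stand-in for the GrowCut-style algorithm Slicer actually uses, not its real implementation: each voxel gets the label of the seed it can reach with the smallest accumulated intensity difference, so without a background seed nothing stops the foreground label from flooding the whole volume.

```python
import heapq
import numpy as np

def grow_from_seeds(image, seeds):
    """Assign every voxel the label of the 'cheapest' seed.

    Path cost accumulates absolute intensity differences between neighbors
    (a simplified, illustrative stand-in for competitive region growing).
    seeds: integer array, 0 = unlabeled, >0 = seed labels.
    """
    labels = seeds.copy()
    cost = np.full(image.shape, np.inf)
    heap = []
    for idx in zip(*np.nonzero(seeds)):
        cost[idx] = 0.0
        heapq.heappush(heap, (0.0, idx))
    while heap:  # Dijkstra-style expansion from all seeds at once
        c, idx = heapq.heappop(heap)
        if c > cost[idx]:
            continue  # stale heap entry
        for axis in range(image.ndim):
            for step in (-1, 1):
                n = list(idx)
                n[axis] += step
                if not 0 <= n[axis] < image.shape[axis]:
                    continue
                n = tuple(n)
                nc = c + abs(float(image[idx]) - float(image[n]))
                if nc < cost[n]:
                    cost[n] = nc
                    labels[n] = labels[idx]
                    heapq.heappush(heap, (nc, n))
    return labels
```

With one foreground seed and one background seed, each homogeneous region ends up with the label of the seed placed inside it; with only a foreground seed, every voxel would trivially get that single label.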

I tried the options suggested.

First, I re-oriented the images, just to have a proper visual. Then I selected two segments: the spinal cord, which I want to segment, and the rest of the body. It takes an enormous amount of time to compute and comes up with something like this, which is grossly incorrect.

I understand this might be due to the quality of the images, but I am curious to know how to rectify/improve this 3D model. The painted segments look a bit different from what I selected, so I am guessing the algorithm is picking up pixels I have not selected while growing from the seeds.

Grow from seeds may take up to a few tens of seconds for large images. If computation takes much longer for you, then either your computer does not have enough RAM (in general, it is recommended to have 10x more RAM than your image size) or something else is wrong.
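As a rough worked example of that rule of thumb (the volume dimensions below are illustrative assumptions, not the actual VHP data):

```python
# Rough memory estimate for Grow from seeds, using the ~10x rule of thumb.
# The dimensions are hypothetical, e.g. a volume cropped to the lumbosacral
# region; substitute your own volume's dimensions and scalar type.
voxels = 512 * 512 * 300            # assumed cropped volume
bytes_per_voxel = 2                 # 16-bit scalar volume
image_size_gb = voxels * bytes_per_voxel / 1024**3
recommended_ram_gb = 10 * image_size_gb
print(f"image: {image_size_gb:.2f} GB -> recommended RAM: ~{recommended_ram_gb:.1f} GB")
```

This also shows why cropping the volume to the region of interest (for example with Slicer's Crop Volume module) before segmenting pays off: memory use and computation time both scale with the voxel count.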

If you save the scene into an .mrb file before you hit “Apply” in “Grow from seeds”, upload the file somewhere, and post the link here, then I can have a look.

Thanks for your help! I appreciate it!

I think something else is wrong; I am using a workstation, so the machine shouldn’t be the issue.

Here is the link to the .mrb file:

Please let me know.

Is there also an effect of the number of seeds (either the number of seed ‘blobs’ or the number of seed voxels)?
I have tried 3D Slicer’s grow-from-seeds functionality before, in a previous version of 3D Slicer. It was indeed slower (despite running on a fast workstation with plenty of RAM), and I suspected this was because I kept adding more seeds to try to get a satisfactory result. (Sadly, I never got a satisfactory result with that scan!)