Hello there,
I am a new user of Slicer, but I have read through the wiki and other related help forums. I have been having issues rendering a volume from a stack of images, specifically using the SlicerMorph ImageStacks utility to convert a stack of png files (which were converted from full-color jpg files and then scaled down to 25%) to a volume. The data is found here. The views of the individual dimensions (X, Y, Z) operate as expected (though the interpolation could be smoother):
Is there a way to fix this? I have spent many hours trying to troubleshoot this myself and it has been quite fruitless. I would greatly appreciate any help.
First, your dataset has 3076 px in the X dimension. This probably exceeds your GPU's maximum 3D texture dimension (it did on my Intel GPU, which has a cap of 2048 px), so you get an empty box. If you resample the volume to a lower resolution, it renders.
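To illustrate the idea (outside of Slicer), here is a minimal numpy sketch of shrinking a volume by integer strides until every axis fits within an assumed 2048 px texture cap. This is plain nearest-neighbor subsampling for demonstration only; a proper resample (e.g. Slicer's Resample Scalar Volume module, or the downsample option in ImageStacks) with smoothing is preferable for real data:

```python
import numpy as np

def downsample_to_fit(volume, max_dim=2048):
    """Subsample a 3D array by integer strides so that no axis
    exceeds max_dim (a typical GPU 3D-texture limit, assumed here)."""
    # One stride factor per axis: 1 if the axis already fits,
    # larger if it must shrink (e.g. 3076 / 2048 -> factor 2).
    factors = [int(np.ceil(s / max_dim)) for s in volume.shape]
    return volume[::factors[0], ::factors[1], ::factors[2]]
```

For a 3076-voxel X axis this picks a factor of 2, giving 1538 voxels, which fits comfortably under the 2048 px cap.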
Your dataset is not aligned (there are shifts in object position from one slice to the next), and there is a huge difference in resolution between axes (about 50 times less data in the Z direction), so it renders very poorly.
This looks like one of the brain atlases; I don't think it is meant to be rendered in 3D (not with labels embedded in the image). Why are you trying to render this in 3D?
I would recommend converting this to a segmentation. It can then be nicely visualized in both 2D and 3D.
The simplest approach is to load the image as a segmentation by selecting “Segmentation” in the “Description” column when you load the nrrd file. You can then go to the Segmentations module and click “Show 3D” to see it in 3D:
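Conceptually, loading a labelmap as a segmentation treats each distinct voxel value as its own segment. A minimal numpy sketch of that idea (not Slicer's actual implementation, just an illustration):

```python
import numpy as np

def split_labels(labelmap):
    """Split an integer labelmap volume into one binary mask per
    label value; background (0) is skipped. Each mask corresponds
    to what would become one segment in a segmentation."""
    labels = [v for v in np.unique(labelmap) if v != 0]
    return {int(v): labelmap == v for v in labels}
```

Each binary mask can then be surfaced and displayed independently, which is what makes the 2D and 3D views of a segmentation so much cleaner than volume-rendering the raw labeled image.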
However, for problem 1 (misaligned slices) Slicer does not have a good solution. You can find image stack alignment tools in ImageJ, align the stack there, and then import the results into Slicer. Or, you can ask the authors of the atlas to provide the atlas as a 3D image with the slices already aligned.
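If you want to check how badly the slices are shifted before committing to a full ImageJ workflow, a simple FFT-based phase correlation between consecutive slices gives an estimate of the translation. This is a generic sketch of the technique (not what ImageJ's plugins do internally), using only numpy:

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer (row, col) translation of `mov`
    relative to `ref` via phase correlation."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(mov)
    # Normalized cross-power spectrum; the small epsilon avoids
    # division by zero at frequencies with no signal.
    cross = np.conj(F_ref) * F_mov
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the halfway point correspond to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))
```

Running this over each adjacent slice pair would show whether the misalignment is a consistent drift (fixable by translation) or, as suggested below, not really a shift at all.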
To my eye this data is not misaligned, it’s inconsistently segmented, since individual segments grow and shrink between slices rather than being consistently shifted one way or another. So probably you won’t get a good 3D reconstruction using this data, but you may be able to use it as a guide for doing a fresh fully 3D segmentation of a volumetric scan of the same species.