The volumes and segmentations I am working with provide a limited, localized perspective of the data when viewed one at a time. I need to be able to load multiple adjacent volumes with their corresponding segmentations, and then inspect the greyscale volume data underneath the coloured segmentations in the sagittal slice viewer.
Hopefully these images will clarify my goal:
Here I have 7 segmentation mask files, and one “active” volume (the greyscale underneath) in the sagittal slice viewer:
To effectively visualize the changes in the segmentation masks across a broader spatial context, I need to be able to see all 7 of the corresponding greyscale volumes at the same time as I scrub through the slice.
My understanding is that only one volume file can be actively displayed at a time due to the way the software manages rendering.
One slice view can display only two volumes (one foreground and one background). However, you can have different volumes in different viewers. You can switch to a 3x3 slice layout, load each volume (and segmentation) into a different slice view, set them all to the same plane, and then link them.
When I change the layout, link slices, and try to scrub through the data only one volume is shown. Am I missing something?
Perhaps I need to approach this in a completely different way?
The goal is to be able to compare the segmentations to the underlying ground-truth data over an increasingly larger area (9+ volumes), and to scrub through the changes.
I wonder if rendering multiple volumes into one would work…
If you want to browse many non-overlapping volumes then you can stitch them all into a single volume. You can use the Stitch Volume module in the Sandbox extension for this.
You can try to stitch just two volumes at a time. If it still does not work as expected then you could share a minimum set of images that can be used to reproduce the behavior and @mikebind may be able to have a look at them.
The current version of StitchVolumes assumes that the supplied images are arranged along a single axis and are stitched together along that axis. The primary imagined use case was medical imaging, where you might have multiple CT or MR images taken at different locations as the scanner bed slides along the bore of the machine along one axis. What's happened with your images is that the side-by-side images are competing for which one is shown, and each one wins for half the time, so the image data comes from one of them in the top half of the block and from the other in the bottom half.
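To make the single-axis assumption concrete, here is a minimal numpy sketch of that style of stitching: each sub-volume is placed at a known offset along one axis of a common output array, with later volumes overwriting any overlap. The function name and offset handling are illustrative only, not the actual StitchVolumes implementation, and it assumes all volumes already share the same in-plane shape and voxel grid.

```python
import numpy as np

def stitch_along_axis(volumes, offsets, axis=0):
    """Place each sub-volume into one output array at its voxel
    offset along `axis`; later volumes overwrite overlap regions."""
    # Output extent along the stitch axis is set by the farthest volume end.
    extent = max(off + v.shape[axis] for v, off in zip(volumes, offsets))
    out_shape = list(volumes[0].shape)
    out_shape[axis] = extent
    out = np.zeros(out_shape, dtype=volumes[0].dtype)
    for v, off in zip(volumes, offsets):
        sl = [slice(None)] * v.ndim
        sl[axis] = slice(off, off + v.shape[axis])
        out[tuple(sl)] = v
    return out

# Two 4x8x8 slabs stacked head-to-tail along axis 0 -> one 8x8x8 block.
a = np.ones((4, 8, 8), dtype=np.int16)
b = 2 * np.ones((4, 8, 8), dtype=np.int16)
stitched = stitch_along_axis([a, b], offsets=[0, 4], axis=0)
```

Volumes placed side by side (offset in a direction other than the stitch axis) cannot be represented this way, which is why side-by-side inputs end up competing for the same output voxels.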
After a few different use cases came up where stitching arbitrary spatial arrangements of image volumes would be helpful, I reworked my code to allow that and to improve the blending options for overlap regions. Since it was more mature, I had planned to release it as a separate Slicer extension and drop it from the Sandbox module. However, I didn't finish all the steps of the extension development checklist to get it properly registered before I needed to turn my attention to other projects.
Separately, looking at your images, how goes the Vesuvius Challenge? I tried to see if I could advance the scroll segmentation tasks about a year ago, but didn’t end up having enough free time to actually get anywhere. It’s a fascinating problem though!