Best data import practices for microscopy

We are testing whether we can use Slicer for a project that relies on confocal stacks, and I wanted to hear about people’s experiences.

The first step is figuring out data import. The original data comes in Leica’s proprietary format, which we can read into Fiji. From Fiji we can either export a multichannel TIFF (in this case two channels) or make a composite RGB image and export that as a TIFF.

The first option is more flexible (as we can potentially manipulate the volumes individually), but as far as I can tell, it doesn’t import correctly into Slicer: Slicer simply shows one slice from one channel followed by the slice from the second channel.

I am assuming we need to import each channel separately if we want to process them individually?

If I do import the RGB image, how do we control the rendering (e.g., I only want to see what’s in the G channel for a screenshot, followed by a joint view of all channels)?
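To make the question concrete, here is the kind of workflow I have in mind (just a rough sketch; the file path, axis order, and spacing are placeholders that would need to match the actual Fiji export):

```python
# Rough sketch: split a Fiji-exported two-channel TIFF into separate scalar
# volumes in Slicer's Python console and control which one is shown.
# Path, axis order, and spacing are placeholders.
import numpy as np
import slicer

slicer.util.pip_install("tifffile")  # one-time install into Slicer's Python
import tifffile

data = tifffile.imread("stack.tif")  # e.g. a hyperstack saved as (slices, channels, rows, cols)

channelNodes = []
for c in range(data.shape[1]):
    channelArray = np.ascontiguousarray(data[:, c, :, :])
    node = slicer.util.addVolumeFromArray(channelArray, name=f"Channel_{c}")
    node.SetSpacing(0.5, 0.5, 2.0)  # voxel spacing from the microscope metadata
    channelNodes.append(node)

# Screenshot of the second channel (green in our case) only...
slicer.util.setSliceViewerLayers(background=channelNodes[1], foreground=None)
# ...then both channels blended together.
slicer.util.setSliceViewerLayers(background=channelNodes[0],
                                 foreground=channelNodes[1],
                                 foregroundOpacity=0.5)
```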

I don’t have a lot of experience with microscopy in Slicer, but I did need to load some big images once: I had to install a Python TIFF library to open them and then do some manipulation to make them work cleanly with Slicer. I would say we either need to extend ImageStacks or create a new custom module that handles the various vendor-specific formats, much like SlicerHeart does for ultrasound.

Some of the popular microscopy file format libraries are GPL, so we could add hooks and have the user install the packages themselves; that shouldn’t be a big deal. As I understand it, there are dozens if not hundreds of weird proprietary variants to consider, so using an external converter tool is probably the way to go.
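As one example of the external-converter route, Bio-Formats ships a command-line tool (bfconvert) that can turn most vendor formats into OME-TIFF, and calling it as a separate process keeps the GPL code out of Slicer itself. A minimal sketch (paths are placeholders, and bfconvert has to be installed and on the PATH):

```python
# Sketch: convert a proprietary microscopy file to OME-TIFF with Bio-Formats'
# bfconvert, run as an external process.
import subprocess

input_path = "experiment.lif"        # e.g. a Leica file (placeholder)
output_path = "experiment.ome.tiff"  # converted output (placeholder)

subprocess.run(["bfconvert", input_path, output_path], check=True)
```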

OME-Zarr (NGFF) is expected to clean up the microscopy data format mess, and we plan to support it in Slicer as a general-purpose, Python- and web-friendly, modern biomedical image file format.
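In the meantime, OME-Zarr data can already be reached from the Slicer Python console with the zarr package. A rough sketch, assuming the common layout where scale level "0" of the image group holds a (channel, z, y, x) array; the path and axis order need to be checked against the file’s multiscales metadata:

```python
# Rough sketch: load one channel of an OME-Zarr image into Slicer.
# The path, group name, and axis order are assumptions.
import numpy as np
import slicer

slicer.util.pip_install("zarr")
import zarr

root = zarr.open("image.ome.zarr", mode="r")  # hypothetical path
level0 = np.asarray(root["0"])                # full-resolution scale level, e.g. (c, z, y, x)

channelIndex = 0
volumeNode = slicer.util.addVolumeFromArray(
    np.ascontiguousarray(level0[channelIndex]), name=f"OMEZarrChannel{channelIndex}")
```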

Some work will still be needed to properly visualize multi-channel images, because currently we mostly expect scalar volumes (there is some support for RGB/RGBA volumes). Sequences could be a potential data representation, as a sequence can expose any number of channels in the scene, each as a separate scalar volume (see the sketch below). Alternatively, we could create a new module that adds a new displayable manager capable of showing an arbitrary number of layers from a multichannel image, or we could make the current vtkMRMLSliceLogic smarter so that it supports any number of channels. All of this would probably require dedicated funding.
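To illustrate the Sequences option (just a sketch, not a committed design): each channel volume becomes one item of a sequence node, and a sequence browser then exposes one channel at a time as a proxy volume in the scene. Node names below are placeholders:

```python
# Sketch: pack per-channel scalar volumes into a sequence and browse them.
# The channel volumes are assumed to already exist in the scene.
import slicer

channelNodes = [slicer.util.getNode("Channel_0"), slicer.util.getNode("Channel_1")]

sequenceNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSequenceNode", "MicroscopyChannels")
sequenceNode.SetIndexName("channel")
for index, channelNode in enumerate(channelNodes):
    sequenceNode.SetDataNodeAtValue(channelNode, str(index))

# A sequence browser exposes one channel at a time as a proxy volume.
browserNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSequenceBrowserNode", "ChannelBrowser")
browserNode.AddSynchronizedSequenceNode(sequenceNode)
browserNode.SetSelectedItemNumber(0)  # show the first channel
```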


These all sound great. Looking forward to it.

At the moment, importing each channel as its own volume and using the multi-volume rendering works really well. I am pleased to see shading support in the multi-volume GPU rendering. Hopefully cropping support will come soon too!

My colleagues were quite impressed, particularly by the ease of 3D navigation.
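In case it is useful to others, something along these lines sets up per-channel volume rendering with the multi-volume method (node names and the transfer-function preset are placeholders):

```python
# Sketch: show each channel volume together using the "VTK Multi-Volume"
# rendering method. Node names and the preset are assumptions.
import slicer

vrLogic = slicer.modules.volumerendering.logic()
vrLogic.SetDefaultRenderingMethod("vtkMRMLMultiVolumeRenderingDisplayNode")

for name in ["Channel_0", "Channel_1"]:
    volumeNode = slicer.util.getNode(name)
    displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
    displayNode.GetVolumePropertyNode().Copy(vrLogic.GetPresetByName("MR-Default"))
    displayNode.SetVisibility(True)
```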


Has there been any progress on OME-Zarr support, beyond the gist @pieper shared a year ago? Some microscopy folks around here are interested in using Slicer, but their data is in Zarr.

There’s this new extension: SlicerBigImage (https://github.com/gaoyi/SlicerBigImage), for large (GB and above) scale microscopic image computing in 3D Slicer.
