Working on a Zeiss CZI microscopy image stack reader for 3D Slicer; any help or advice welcome

Some general info:

Operating system: Windows 10
Slicer version: 5.3.0 - 2023.06.17
Expected behavior: 3D stack(s) imported from a CZI scene with minimum hassle for the user…
Actual behavior: Work in progress, 3D view requires manual adjustments, not fully integrated with Slicer yet.

Hello,

this is my first post here. I am trying to import a multichannel CZI confocal stack into 3D Slicer, and I found some Python code on GitHub for a napari reader, which I am repurposing for Slicer.

I’m still very far from having a working reader, but the code I have put together is showing some promise:

As I’m not quite sure what I’m able to attach to my first post, the full code (czireader.py) is included below.

Called as:

In the console, I added my working path with a sys.path.append() and then:

import czireader
fn = 'an_image.czi'
czireader.reader(fn)

For some reason, I was not able to do this from the Jupyter plugin:

File C:\PortableApps\Slicer 5.3.0-2023-06-17\bin\Python\sitkUtils.py:21, in PushVolumeToSlicer(sitkimage, targetNode, name, className)
19 # Create new node if needed
20 if not targetNode:
---> 21 targetNode = slicer.mrmlScene.AddNewNodeByClass(className, slicer.mrmlScene.GetUniqueNameByString(name))
22 targetNode.CreateDefaultDisplayNodes()
24 myNodeFullITKAddress = GetSlicerITKReadWriteAddress(targetNode)

AttributeError: module 'slicer' has no attribute 'mrmlScene'

The code so far:

# Original code lifted almost entirely from:
# https://github.com/BodenmillerGroup/napari-czifile2/blob/main/napari_czifile2/_reader.py
# https://github.com/BodenmillerGroup/napari-czifile2/blob/main/napari_czifile2/io.py
# All credit goes to @jwindhager https://github.com/jwindhager

import SimpleITK as sitk
import sitkUtils
import slicer

from multiprocessing import cpu_count

from pathlib import Path
from typing import Iterable, List, Optional, Union
from xml.etree import ElementTree

import numpy as np
from czifile import CziFile, DimensionEntryDV1, DirectoryEntryDV
from tifffile import lazyattr


class CZISceneFile(CziFile):
  @staticmethod
  def get_num_scenes(path: Union[str, Path], *args, **kwargs) -> int:
      with CziFile(path, *args, **kwargs) as czi_file:
          if "S" in czi_file.axes:
              return czi_file.shape[czi_file.axes.index("S")]
          return 1

  def __init__(self, path: Union[str, Path], scene_index: int, *args, **kwargs):
      super(CZISceneFile, self).__init__(str(path), *args, **kwargs)
      self.scene_index = scene_index

  @lazyattr
  def pos_x_um(self) -> float:
      return self.scale_x_um * min(
          (dim_entry.start for dim_entry in self._iter_dim_entries("X")), default=0.0
      )

  @lazyattr
  def pos_y_um(self) -> float:
      return self.scale_y_um * min(
          (dim_entry.start for dim_entry in self._iter_dim_entries("Y")), default=0.0
      )

  @lazyattr
  def pos_z_um(self) -> float:
      return self.scale_z_um * min(
          (dim_entry.start for dim_entry in self._iter_dim_entries("Z")), default=0.0
      )

  @lazyattr
  def pos_t_seconds(self) -> float:
      return self.scale_t_seconds * min(
          (dim_entry.start for dim_entry in self._iter_dim_entries("T")), default=0.0
      )

  @lazyattr
  def scale_x_um(self) -> float:
      return self._get_scale("X", multiplier=10.0**6)

  @lazyattr
  def scale_y_um(self) -> float:
      return self._get_scale("Y", multiplier=10.0**6)

  @lazyattr
  def scale_z_um(self) -> float:
      return self._get_scale("Z", multiplier=10.0**6)

  @lazyattr
  def scale_t_seconds(self) -> float:
      return self._get_scale("T")

  @lazyattr
  def channel_names(self) -> Optional[List[str]]:
      if "C" in self.axes:
          channel_elements = self._metadata_xml.findall(
              ".//Metadata/Information/Image/Dimensions/Channels/Channel"
          )
          if len(channel_elements) == self.shape[self.axes.index("C")]:
              return [c.attrib.get("Name", c.attrib["Id"]) for c in channel_elements]
      return None

  @lazyattr
  def is_rgb(self) -> bool:
      return "0" in self.axes and self.shape[self.axes.index("0")] > 1

  def as_tzcyx0_array(self, *args, **kwargs) -> np.ndarray:
      data = self.asarray(*args, **kwargs)
      tzcyx0_axis_indices = []
      if "T" in self.axes:
          tzcyx0_axis_indices.append(self.axes.index("T"))
      else:
          tzcyx0_axis_indices.append(data.ndim)
          data = np.expand_dims(data, -1)
      if "Z" in self.axes:
          tzcyx0_axis_indices.append(self.axes.index("Z"))
      else:
          tzcyx0_axis_indices.append(data.ndim)
          data = np.expand_dims(data, -1)
      if "C" in self.axes:
          tzcyx0_axis_indices.append(self.axes.index("C"))
      else:
          tzcyx0_axis_indices.append(data.ndim)
          data = np.expand_dims(data, -1)
      tzcyx0_axis_indices.append(self.axes.index("Y"))
      tzcyx0_axis_indices.append(self.axes.index("X"))
      if "0" in self.axes:
          tzcyx0_axis_indices.append(self.axes.index("0"))
      else:
          tzcyx0_axis_indices.append(data.ndim)
          data = np.expand_dims(data, -1)
      for axis_index in range(len(self.axes)):
          if axis_index not in tzcyx0_axis_indices:
              tzcyx0_axis_indices.append(axis_index)
      data = data.transpose(tzcyx0_axis_indices)
      data.shape = data.shape[:6]
      return data

  def _iter_dim_entries(self, dimension: str) -> Iterable[DimensionEntryDV1]:
      for dir_entry in self.filtered_subblock_directory:
          for dim_entry in dir_entry.dimension_entries:
              if dim_entry.dimension == dimension:
                  yield dim_entry

  def _get_scale(self, dimension: str, multiplier: float = 1.0):
      scale_element = self._metadata_xml.find(
          f'.//Metadata/Scaling/Items/Distance[@Id="{dimension}"]/Value'
      )
      if scale_element is not None:
          scale = float(scale_element.text)
          if scale > 0:
              return scale * multiplier
      return 1.0

  @lazyattr
  def _metadata_xml(self) -> ElementTree.Element:
      return ElementTree.fromstring(self.metadata())

  @lazyattr
  def filtered_subblock_directory(self) -> List[DirectoryEntryDV]:
      dir_entries = super(CZISceneFile, self).filtered_subblock_directory
      return list(
          filter(
              lambda dir_entry: self._get_scene_index(dir_entry) == self.scene_index,
              dir_entries,
          )
      )

  @staticmethod
  def _get_scene_index(dir_entry: DirectoryEntryDV) -> int:
      scene_indices = {
          dim_entry.start
          for dim_entry in dir_entry.dimension_entries
          if dim_entry.dimension == "S"
      }
      if len(scene_indices) == 0:
          return 0
      assert len(scene_indices) == 1
      return scene_indices.pop()

def reader(paths):
  layer_data = []
  if not isinstance(paths, list):
      paths = [paths]
  for path in paths:
      num_scenes = CZISceneFile.get_num_scenes(path)
      for scene_index in range(num_scenes):
          with CZISceneFile(path, scene_index) as f:
              if f.is_rgb:
                  continue
                  
              data = f.as_tzcyx0_array(max_workers=cpu_count())
              # https://github.com/BodenmillerGroup/napari-czifile2/issues/5
              contrast_limits = None
              if data.dtype == np.uint16:
                  contrast_limits = (0, 65535)
              # https://github.com/napari/napari/issues/2348
              data = data[:, :, :, :, :, 0]
              shape = data.shape #t z c y x

              for t in range(shape[0]):
                  for c in range(shape[2]):
                      
                      #Here we load the volumes one at a time:
                      image_sitk = sitk.GetImageFromArray(data[t,:,c,:,:])
                      image_sitk.SetOrigin((f.pos_x_um, f.pos_y_um, f.pos_z_um))
                      image_sitk.SetSpacing((f.scale_x_um,f.scale_y_um,f.scale_z_um))
                      volumeNode = sitkUtils.PushVolumeToSlicer(image_sitk, None)
                      # channel_names can be None if the CZI metadata is incomplete
                      if f.channel_names is not None:
                          volumeNode.SetName(f.channel_names[c])

                      slicer.util.setSliceViewerLayers(background=volumeNode, fit=True)
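
The axis normalisation done by as_tzcyx0_array() can be sanity-checked outside Slicer with a small standalone numpy sketch (the "ZCYX" axes string and shapes below are made up for illustration): missing T, C and 0 axes are padded with singleton dimensions, then the array is transposed into the canonical (T, Z, C, Y, X, 0) order, mirroring the logic above for this example.

```python
import numpy as np

# Hypothetical input: a CZI scene whose axes string is "ZCYX"
# (no T and no 0 axis). Shapes are made up for illustration.
axes = "ZCYX"
data = np.zeros((5, 3, 64, 64))  # Z, C, Y, X

order = []
for dim in "TZC":
    if dim in axes:
        order.append(axes.index(dim))
    else:
        # pad a singleton axis at the end and point the order at it
        order.append(data.ndim)
        data = np.expand_dims(data, -1)
order.append(axes.index("Y"))
order.append(axes.index("X"))
if "0" in axes:
    order.append(axes.index("0"))
else:
    order.append(data.ndim)
    data = np.expand_dims(data, -1)

data = data.transpose(order)
print(data.shape)  # (1, 5, 3, 64, 64, 1) -> T, Z, C, Y, X, 0
```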

So, as you can see, I’m still struggling with really basic things such as:

  • reading colors for each channel in the CZI file, and setting them for each of the imported volumes
  • maybe some kind of GUI interface for the user to select which scene to import (there may be multiple scenes and timepoints, not sure how to handle those nicely yet)
  • units: The code I lifted from @jwindhager conveniently provides a pixel size in microns, but they’re set as mm in Slicer (check whether the units are always mm or if it is possible to set the units)
  • Follow the guidance in the Slicer documentation to turn the code into a reader plugin
  • A few manual operations required to actually display the volume, maybe I can set more things in the reader to avoid the manual steps.
  • How big can the images be? For now I’m not sure…
  • RGB images? For now I’m only interested in confocal stacks, but maybe histology slides could also be handled through a potential future reader. For now I’m not going to worry too much about this.

And what would be the end goal?

After I finish writing this plugin, the two main image processing tasks I will need to perform are:

  • 3-D registration of multiple acquired stacks using the DAPI nuclei as landmarks
  • Segmentation of each cell in 3-D if possible (cellpose?)

And in terms of data output:

  • Size / shape of cells and
  • Mean intensity in all the different channels (up to 6) for each cell.

That’s it, thank you for reading and any advice you might have will be greatly appreciated!

Kind regards,
Egor

It is awesome that you work on this and the volume rendered image looks very nice (you can make it even more impressive by adding some colors to the color transfer function in Volume rendering module).

Napari io plugins are nice in that they don’t depend on napari itself, so it is really easy to import them and use in Slicer:

# CZI file reading using napari-czifile2 plugin in 3D Slicer
# Tested with: https://downloads.openmicroscopy.org/images/Zeiss-CZI/idr0011/Plate1-Blue-A_TS-Stinger/

filepath = r'c:\Users\andra\Downloads\Plate1-Blue-A-12-Scene-3-P3-F2-03.czi'
timePoint = 0
component = 0  # set to `None` to read as vector volume, set to 0, 1, 2, ... to read a single component as scalar volume

# Install napari-czifile2
try:
    import napari_czifile2
except ImportError as e:
    pip_install('napari-czifile2')
    import napari_czifile2

import numpy as np

reader = napari_czifile2.napari_get_reader(filepath)
result = reader(filepath)

voxelsZYXC = np.moveaxis(result[0][0][timePoint], [1], [3]) # z c y x -> z y x c

if component is None:
    # Read RGB volume    
    volumeNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLVectorVolumeNode')
    voxels = voxelsZYXC
else:
    # Read scalar volume from a single component
    volumeNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLScalarVolumeNode')
    voxels = voxelsZYXC[:, :, :, component]

scaleZYX = result[0][1]['scale'][1:]
volumeNode.SetSpacing(scaleZYX[2], scaleZYX[1], scaleZYX[0])

To show the image in slice/3D views:

slicer.util.updateVolumeFromArray(volumeNode, voxels)
slicer.util.setSliceViewerLayers(volumeNode, fit=True)

if component is not None:
    vrLogic = slicer.modules.volumerendering.logic()
    vrDisplayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
    vrDisplayNode.SetVisibility(True)
    # Use the same window/level and colormap settings for volume rendering as for slice display
    vrDisplayNode.SetFollowVolumeDisplayNode(True)
    # Recenter 3D view
    slicer.util.resetThreeDViews()

The advantage of using the napari reader as-is is that we would automatically benefit from all fixes and improvements to the reader. It would also serve as a good example of using other napari reader plugins in Slicer.

  • reading colors for each channel in the CZI file, and setting them for each of the imported volumes

See my example above

  • maybe some kind of GUI interface for the user to select which scene to import (there may be multiple scenes and timepoints, not sure how to handle those nicely yet)

You could add an IO options widget (one that is displayed in the “Add data” window when you check “Show options”), but I’m not sure if this would be convenient/dynamic enough.

  • units: The code I lifted from @jwindhager conveniently provides a pixel size in microns, but they’re set as mm in Slicer (check whether the units are always mm or if it is possible to set the units)

When you read the image, the data must be converted to the current length unit, which is millimeter by default. In the application settings you can set the displayed length unit to micrometer (values are still stored as millimeters, but the GUI shows the value x 1000 with a µm suffix instead of mm).

selectionNode = slicer.mrmlScene.GetNodeByID('vtkMRMLSelectionNodeSingleton')
lengthUnitNode = slicer.mrmlScene.GetNodeByID(selectionNode.GetUnitNodeID('length'))
lengthUnitNode.GetSuffix()  # returns 'mm' by default
lengthUnitNode.GetDisplayCoefficient()  # returns 1.0 by default

# After you set displayed length unit to micrometer in the application settings:
lengthUnitNode.GetSuffix()  # returns 'µm'
lengthUnitNode.GetDisplayCoefficient()  # returns 1000.0

If you work with microscopy images then it may make sense to change the actual unit to micrometer by setting Suffix to µm and Coefficient to 1.0, because you could avoid potential numerical instabilities and GUI inconveniences due to using very small numbers. After setting these, you would then get:

lengthUnitNode.GetSuffix()  # returns 'µm'
lengthUnitNode.GetDisplayCoefficient()  # returns 1.0

In the reader you could just handle these 3 cases (mm suffix with 1.0 coefficient; µm with 1000.0; µm with 1.0) and refuse to load the images if a different length unit is set.
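
These three cases can be captured in a small helper; this is only a sketch (um_to_scene_length is a made-up name), with the suffix and display coefficient taken from the length unit node via GetSuffix() and GetDisplayCoefficient() as shown earlier:

```python
def um_to_scene_length(value_um, suffix, display_coefficient):
    """Convert a micrometer value to the scene's stored length unit.

    `um_to_scene_length` is a made-up helper name; `suffix` and
    `display_coefficient` would come from the length unit node
    (GetSuffix() / GetDisplayCoefficient()).
    """
    if suffix == 'mm' and display_coefficient == 1.0:
        return value_um / 1000.0  # stored as millimeters (default)
    if suffix == 'µm' and display_coefficient == 1000.0:
        return value_um / 1000.0  # still stored as mm, only displayed as µm
    if suffix == 'µm' and display_coefficient == 1.0:
        return value_um           # stored as micrometers
    raise ValueError(f'Unsupported length unit: {suffix} (x{display_coefficient})')
```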

This should be fairly straightforward - see an example in Slicer core or this example in Sandbox.

My code snippet above should help with this, but you can ask for more details here.

Whatever fits into your virtual memory space. If you work with multiscale images then it may be useful to load the image at the current display resolution, and only those tiles that are visible. You can find some experimentation with that here.

The example above shows how to load a multi-component image and if it has 3 components then it is interpreted as RGB by default. You need to cast the values to unsigned char if you want to use the GPU volume renderer for displaying it in 3D views.
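
That cast can be sketched with a simple min-max rescale in numpy (to_uint8_rgb is a made-up helper name; inside Slicer you could equally use the Cast Scalar Volume module):

```python
import numpy as np

def to_uint8_rgb(voxels):
    """Min-max rescale a volume to unsigned char (0-255) for GPU rendering.

    `to_uint8_rgb` is a made-up helper name; this is a plain numpy sketch.
    """
    voxels = voxels.astype(np.float64)
    vmin, vmax = voxels.min(), voxels.max()
    if vmax > vmin:
        voxels = (voxels - vmin) / (vmax - vmin) * 255.0
    else:
        voxels = np.zeros_like(voxels)  # constant image maps to black
    return voxels.astype(np.uint8)
```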


Thank you @lassoan for your awesome reply!

Lots to digest, so it may take me some time to come back with some significant progress, but I’ll let you know how this progresses. I’m happy to develop any plugins in the open.

In any case, 3D Slicer seems well suited to working with these types of microscopy images. I really like the fine tuning of the renderer look-up tables (both colour and opacity), which is not something I’m used to from the usual microscopy-oriented tools I use (e.g. Imaris). And you’re absolutely right, adding colours makes the image way more interesting :wink: :
[image: colour volume rendering of the confocal stack]

Kind regards,
Egor


Nice job with the coloring! Keep us updated about your progress.


Thanks @lassoan,

I now have a very early but working reader! I still need to implement sequences when time series are detected (possibly taking inspiration from the .npy reader). Also colours, but I will first need to read them from the napari reader, which may mean improving that module.

So, I’ll make a pull request for my ImportCZI reader and let you decide whether the code is worth including into Slicer at some point (or not :slight_smile: ).

Is it better to direct the pull request at the Slicer repository or at the SlicerSandbox repository?

Cheers,
Egor


Since the reader will be under development for a while, depends on external Python packages, and handles a quite specialized file format, it makes sense to maintain it in an extension. The Sandbox extension would be suitable, or it could be added to the BigImage extension. If more microscopy-specific modules are developed then it could make sense to add a microscopy extension.


Hello @lassoan,

I’ve been working on a pull request for hex colors in napari image layers. Once that is merged, I will try to get channel colours added to the napari-czifile2 module.

It may take some time before all the pieces are in place for the Slicer CZI reader, but I’ll push my code to a fork of the Sandbox and keep this thread updated with any progress.

I’ll adapt your pip commands to install napari-czifile2 from source from my own GitHub branch while this is going on.

If more microscopy-specific modules will be developed then it could make sense to add a microscopy extension.

Actually, this may not be a bad idea, as a colleague of mine is interested in an ND2 (Nikon) reader for some of his confocal stacks. His are mostly 5-D (3D + channels + time).

… So that’s at least two new microscopy related extensions! :slight_smile:

Cheers,
Egor


It’s great to see this progress, @EgorZindy. I think there’s a lot of potential to work together on this.

The SlicerMicro GitHub organization was a start on something like this and could be repurposed and expanded if it meets the needs. We can add new members if we want to use it.

Also here are summary slides of some previous work on Slicer for Microscopy and thoughts about what more is needed.


Hi EgorZindy,

This is great! I am also working with light-sheet and confocal data in 3D, and I am looking forward to seeing the progress. 3D Slicer is a really promising platform for such data. I believe we need a subcategory to gather all the microscopists and create plugins depending on the needs.

Commercial software (such as Imaris) is really ripping off universities and research institutions for licenses.


It would be great if people could start listing any missing features that would be helpful for these applications. Maybe start adding ideas in this thread; they can then be migrated to an appropriate issue tracker as they are formalized.

One wish-list item might be to modernize this IASEM extension to work with the latest Slicer features, if that hasn’t already been done. It had a nice feature: a low-res proxy volume that you could interactively sample at full resolution using a box ROI.


Hi everyone,

definitely interested in any microscopy-related use of 3D Slicer!

I made some (slow) progress this week. My pull request for hex colors in napari was accepted, which means I can have a look at the Slicer reader again. Eventually, instead of pulling the napari-czifile2 module from my GitHub, my CZI reader should be able to use the PyPI one (since napari itself will be able to import channel colours).

I’ll update you when I get round to pushing the reader code to my own Sandbox branch.

Cheers,
Egor


Hi everyone,

I got a bit further! I noticed that the czitools repository had an interesting reader that can read data into dask arrays, so I decided to redesign my Slicer importer around czitools instead of the napari reader I used before.

For now, I install czitools directly from my own GitHub czitools fork (one that isn’t dependent on napari, pandas, …), but in the long term I hope I can just install the official czitools instead.

Since I install my fork via pip from a git URL, a message is displayed if the git tool is not accessible from 3D Slicer.
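
That check can be sketched roughly like this (install_from_git and the URL are placeholders; inside Slicer you would typically call slicer.util.pip_install rather than invoking pip directly):

```python
import shutil
import subprocess
import sys

def install_from_git(repo_url):
    """Install a package from a git URL, with a clear error if git is missing.

    `install_from_git` is a made-up helper name and `repo_url` is a
    placeholder, e.g. 'git+https://github.com/user/czitools.git'.
    """
    if shutil.which('git') is None:
        raise RuntimeError(
            'The git tool was not found; install git and make sure it is on '
            'the PATH before installing packages from a git URL.')
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', repo_url])
```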

The two repositories are:

and ImportCZI in

I’m going to try and make a little dialog box (ImportOsirixROI has one) to

  • choose whether to import a numpy or a dask array
  • choose which scene to load (for now only scene 0 is imported)
  • choose the colours for each channel (preloaded with the colours found in the metadata).

I’m not super confident with Qt / Designer, so I’ll let you know how I get on! :slight_smile:

Cheers for now,
Egor
