New extension: Synchronised Navigation with Registration

I created a new extension for viewing registrations. It implements what is sometimes called "synchronised navigation": simultaneous visualization of the same spatial location across multiple image series.

The basic idea is that we load the fixed and moving images and the corresponding registration; when we move the cursor in one image, a markup follows in the second image (the position of the markup is calculated from the registration).

Here is a short video demo:

Some PACS providers, like Sectra, implement this feature (either via slices manually linked by the user or by a commercial registration algorithm).

I have already been using this with a few radiologists, and they seem to like it (when the registration is accurate).

Is this interesting to this community?
If yes:

  • Should it have more features?
  • Should it be a standalone extension, or does it make sense to integrate it somewhere?

This is somewhat similar to the concept in Views synchronization after registration; however, the main difference is that my extension shows the original moving image. As far as I understand, radiologists don't always want to see a deformed moving image.

The code for anyone interested is available here:

How did I implement this (in short)?

  • create six vtkMRMLMarkupsFiducialNode instances (one for each view)
  • show the nodes in all views except for the one where the cursor is
  • use SetSliceOffset() to set the offset of all views to the position of the corresponding fiducial node (except for the one where the cursor is)
    • for the views of the second image, first transform the corresponding fiducial nodes with the registration and then set the offsets (see the sketch below)
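To make that concrete, here is a minimal Python-console sketch, reduced to one fixed view (Red) and one moving view (Red+). The node name FixedToMovingTransform, the assumption that the transform maps fixed to moving coordinates, and the default axial Red+ orientation are illustrative assumptions, not the extension's actual code:

```python
import slicer

# Illustrative node name; in practice the user selects the transform.
# Assumed to map fixed -> moving coordinates; otherwise use GetTransformFromParent().
transformNode = slicer.util.getNode("FixedToMovingTransform")
layoutManager = slicer.app.layoutManager()

# Fiducial marking the corresponding point; shown only in the moving view.
# Assumes a layout with a "Red+" compare view is already active.
cursorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsFiducialNode", "SyncCursor")
cursorNode.CreateDefaultDisplayNodes()
cursorNode.AddControlPoint([0.0, 0.0, 0.0])
plusViewNode = layoutManager.sliceWidget("Red+").mrmlSliceNode()
cursorNode.GetDisplayNode().AddViewNodeID(plusViewNode.GetID())

def onCursorMoved(caller, event):
    # Current cursor position (RAS) in the fixed image.
    fixedRAS = [0.0, 0.0, 0.0]
    caller.GetCursorPositionRAS(fixedRAS)
    # Map through the registration into the moving image and move the markup.
    movingRAS = transformNode.GetTransformToParent().TransformPoint(fixedRAS)
    cursorNode.SetNthControlPointPosition(0, movingRAS)
    # For a default axial view, the slice offset is the S coordinate.
    layoutManager.sliceWidget("Red+").sliceLogic().SetSliceOffset(movingRAS[2])

crosshairNode = slicer.util.getNode("Crosshair")
crosshairNode.AddObserver(slicer.vtkMRMLCrosshairNode.CursorPositionModifiedEvent, onCursorMoved)
```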

One small feature that I included is that when the user double-clicks on a view, e.g. the Red one, a compare view opens with the Red and Red+ views side by side.


This is superb. Thanks a lot for your efforts.


Hi @koeglfryderyk I agree this is useful functionality.

But did you look at the CompareVolumes module that already exists in Slicer? As far as I know, it already has the features you described, except the part where you show the correspondence but not the transformed volume. That sounds nice.

Plus CompareVolumes offers several other features, like an arbitrary number of volumes (selectable from all the loaded volumes, with the option to drag-and-drop to specify the order of the display), hot-linking pan/zoom/scroll operations, some animation modes for cross-fading between volumes, and a ‘compare cursor’ that provides an optionally magnified checkerboard inspection tool.

CompareVolumes was created to support some earlier research and it could certainly benefit from improvements, so maybe we could join forces?


I’d love to join forces on the Compare Volumes module!

I could fork your repo and implement the synchronised navigation feature, but I think we have to discuss a few things first:

Synchronised navigation feature

Initially, I thought I could implement it with the built-in crosshair, but this feature would need two decoupled crosshairs - one for the fixed and one for the moving image. As far as I understand, you can only have one crosshair; creating another one would involve significant changes to the MRML library in Slicer (I think in vtkMRMLCrosshairDisplayableManager).

Therefore, I implemented it with markups for each view, which can then be transformed with the registration. It works well but seems a bit hacky - do you have a better idea of how this could be done?

UI

  1. How should the user choose the transformation?
  2. How should the user choose the fixed and moving images?
  3. How should the user enable the synchronisation? (I would also like a keyboard shortcut in addition to a button - a sketch of the shortcut wiring is below.)
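For the keyboard shortcut, I was thinking of the usual Slicer pattern of attaching a QShortcut to the main window - roughly like this, where the key sequence and the toggleSynchronisation handler are placeholders:

```python
import qt
import slicer

def toggleSynchronisation():
    # Placeholder: enable/disable the synchronised navigation here.
    print("synchronisation toggled")

shortcut = qt.QShortcut(slicer.util.mainWindow())
shortcut.setKey(qt.QKeySequence("Ctrl+Shift+S"))
shortcut.connect("activated()", toggleSynchronisation)
```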

For example, like this:

Hi @koeglfryderyk -

What's the nature of your transformations, and what is the area of interest for your research (e.g. mainly torso CT)? That is, what is your target use case? Things like UI options and keyboard shortcuts are best considered in the context of a concrete clinical application.

I agree it's not good to show clinicians distorted scans, as that could lead to misinterpretation. And yes, showing the corresponding points using two crosshairs would make sense, but using a markup as a fallback is also not a bad idea. If you look at the LandmarkRegistration module (code here), which uses the CompareVolumes logic under the hood, there is some code for managing per-volume markup lists of corresponding points. This way they can be moved interactively in either view, and the other views are updated through the transform to show the corresponding point. It's probably very similar to the code you have, and maybe we can factor it out for reusability.

Another thing I've been playing with related to this is using segmentation results (e.g. from TotalSegmentator) to define correspondence between scans even when the patient's body position is different. E.g. if you are looking at a tumor that's a few centimeters from the femur, then registering the two femurs with a rigid transform puts the two tumors in close alignment even if the hip angles are very different in the two scans. There's a prototype implementation here, where you can pick the anatomy you want to use for the local registration. Of course the registration is very good close to the rigid structures but can be very bad far from the selected structures. One feature I considered is that when you pick a point to compare, either the single closest structure is used, or maybe the nearest few, to define a blended rigid transform (rough sketch below).
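To make the blending idea concrete, here is a rough numpy sketch of one way it could work: map the picked point through the k nearest per-structure rigid transforms and average the results with inverse-distance weights. All names and the weighting scheme are illustrative assumptions, not the prototype's code:

```python
import numpy as np

def blended_correspondence(point, transforms, centroids, k=3):
    """Map a fixed-image point into the moving image by blending the k
    nearest per-structure rigid transforms (illustrative sketch).
    transforms: list of 4x4 arrays (fixed -> moving), one per structure.
    centroids: (N, 3) array of structure centroids in the fixed image."""
    point = np.asarray(point, dtype=float)
    distances = np.linalg.norm(centroids - point, axis=1)
    nearest = np.argsort(distances)[:k]
    weights = 1.0 / np.maximum(distances[nearest], 1e-6)  # inverse-distance weights
    weights /= weights.sum()
    homogeneous = np.append(point, 1.0)
    # Apply each nearby rigid transform to the point, then blend the results.
    mapped = np.array([transforms[i] @ homogeneous for i in nearest])
    return (weights[:, None] * mapped[:, :3]).sum(axis=0)
```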

It would be great if we could develop layers of functionality, from general utilities through very dedicated task-specific interfaces.

So far, I have used my extension for rigid and deformable transformations - I was hoping to make a general tool for any transformation type.

I guess you're right; let's not focus on the UI options and shortcuts for now. I can always create them for my specific use case.

I’m currently evaluating registration algorithms with this tool in a reader study with a head & neck CT dataset (the example from my video was just from the Learn2Reg LungCT dataset as I can’t share my head & neck data).

The local registration seems really interesting; my radiology professor has already mentioned multiple times that he's usually only interested in registration at specific sites of interest. Could you update the link to the prototype implementation of the local registration? I get a 404 error - maybe the repo is private?

Then my question is: what should our (or my) next steps be?

I could take a look at the LandmarkRegistration code and see if there is any common code to factor out.

Ah, right - the repo was private since it’s a work in progress, but I made it public now.

In terms of next steps, it would be great if you could try CompareVolumes on some of your data and registration results and see if the features and library code are a good fit for what you need. For example, LandmarkRegistration sets up three rows of axial/sagittal/coronal views with the fixed, moving, and blended fixed/moving volumes, which I find helpful for reviewing registrations. You can do similar things with CompareVolumes by setting one of the volumes as the common background and unselecting it from the checklist (sketch below). But I think the workflow is probably not so intuitive, and maybe we can repackage it into a dedicated one-to-one registration review module, like your current version but with more features.
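In script form, that common-background setup looks roughly like this - I'm writing the parameter names from memory, so check the CompareVolumes source, and the volume variables are placeholders:

```python
import CompareVolumes
import slicer

fixedNode = slicer.util.getNode("FixedVolume")    # placeholder node names
movingNode = slicer.util.getNode("MovingVolume")

logic = CompareVolumes.CompareVolumesLogic()
# One viewer per compared volume, with the fixed volume as common background.
sliceNodesByViewName = logic.viewerPerVolume(
    volumeNodes=[movingNode],
    background=fixedNode)
```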


Thanks!

I used CompareVolumes before and liked the functionality. I think we could use most of the code and then, e.g., just integrate my code into it.

The only thing that was not intuitive for me was that I had to click 'Compare Checked Volumes' every time I made a change, but this can easily be explained or done automatically when the user makes a change (e.g., checks a different box).

We could, for now, limit the module to two images only (Fixed and Moving), and in the future, think of integrating more images and deformations if we have, e.g., a sequence of registered images from multiple time points.

I think we could have two modes:

  • Mode 1:
    • Use the deformed moving image (as it is now in CompareVolumes and Landmark Registration)
  • Mode 2:
    • Use the original moving image and link the images with the registration

In terms of features, I would keep all CompareVolumes features (common background/label, hot link cursor, visualization options, and the layer reveal cursor), and maybe add two more (rough sketch below):

  • Difference image between the fixed and the deformed moving image
  • Jacobian determinant of the displacement field, as this is often used in papers to evaluate the quality of the displacement field (folding, volume growth, shrinkage)
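Both would be only a few lines, e.g. with SimpleITK (placeholder file names; assumes the warped moving image and the displacement field are already on the fixed image's grid):

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)                 # placeholder paths
warpedMoving = sitk.ReadImage("warped_moving.nii.gz", sitk.sitkFloat32)
dispField = sitk.ReadImage("displacement_field.nii.gz", sitk.sitkVectorFloat64)

# Difference image between the fixed and deformed moving image.
difference = sitk.Subtract(fixed, warpedMoving)

# Jacobian determinant of the displacement field: values < 0 indicate folding,
# < 1 local shrinkage, > 1 local growth.
jacobian = sitk.DisplacementFieldJacobianDeterminant(dispField)
```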

Sounds good @koeglfryderyk - it makes sense to me that we have generic functionality in the CompareVolumesLogic and also make generic widgets that can be reused in various special purpose modules like the one you want with only two volumes and registration comparison features. We can still have the CompareVolumes module itself expose the more generic features for selections from all the volumes.

Regarding the determinant of the Jacobian, maybe that should be added to the Transforms module visualization, and then the registration inspection module could have streamlined ways of enabling it or other transform visualization modes.

Hi @pieper,

I incorporated the orientation widget, hot linking, and view management from CompareVolumes, while trying to reuse as much code as I could (from CompareVolumes and RegistrationLib).

The code is available here, and some sample data is available here.

I left out all the functionality concerning the visualisation of overlays (common background/label, cursor reveal, flicker/rock, etc.), as to me these functionalities don't make sense when working with the original moving image (rather than the warped one).

From the CompareVolumes module, I imported the CompareVolumesLogic class - that was pretty straightforward. The only thing that could be changed to help with my module is for viewerPerVolume and viewersPerVolume to return sliceNodesByViewName grouped by volume node.

However, I had to reimplement the orientation widget, so maybe this could be a candidate for a generic widget.

This is the current UI of my extension

If you can find a way to do this in a backwards-compatible way, that would be great. If not, maybe helper methods that reorganize the results could be added.

Yes, that could make sense.

I don't know how much time/effort you have available, but if you could contribute back to improve the Slicer code base, that would be great.

I created a pull request to CompareVolumes where I implemented this in a backwards-compatible way by using an argument that defaults to false, through which an additional dict can be returned (sketch below).
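For reference, the pattern looks roughly like this - a simplified stub, not the actual PR code, and the flag name is illustrative:

```python
def viewerPerVolume(self, volumeNodes=None, background=None,
                    returnSliceNodesByVolume=False):
    """Simplified stub: lay out one viewer per volume (layout code omitted)."""
    sliceNodesByViewName = {}
    sliceNodesByVolumeNode = {}
    # ... the existing layout logic populates both dicts here ...
    if returnSliceNodesByVolume:
        # Opt-in extra return value; existing callers are unaffected.
        return sliceNodesByViewName, sliceNodesByVolumeNode
    return sliceNodesByViewName
```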

And I just realised that LandmarkRegistration already has the orientation panel integrated into its Visualization widget, so I just reused it (by hiding the other elements, as was done in CompareVolumes).

If you want, I can also integrate the orientation panel from LandmarkRegistration into CompareVolumes - currently, the orientation widget is built from scratch in CompareVolumes.

Thanks again for working on this :+1:

Yes, if we can make the code cleaner and more reusable that’s great.