CT to optical image registration

Hi Andras,

I tried the latest version and this time did not center the images, and it
worked (previously the images were centered before registration). I got good
alignment when I set the fixed and moving images as follows:
Fixed image: optical (0.05, 0.05, 3 millimeter spacing)
Moving image: ex vivo CT (0.15, 0.15, 0.15 millimeter spacing)

But the problem now is to do the reverse registration. In other words, I
would like to perform the registration so that my optical images are moving
and my ex vivo CT images are fixed (with the same spacing as indicated
above). When I do that it shows good overlap, but I can't save the hardened
image. Please see the attached error message; I also attached a few
screenshots from before and after the alignment, as well as a screenshot
taken after hardening. I don't have an image after reloading the hardened
image because, as I explained, the hardened image couldn't be saved, but I
included the message generated when trying to save it. Based on your
knowledge, how would I be able to perform such a registration in Slicer?

I also noticed that in our last email conversation you suggested that Slicer
can handle most of the analysis and visualization. So my question is: can I
calculate the registration error based on measures such as Mutual
Information and Target Registration Error using this software? If yes, could
you please provide some help on how this is done?

Thank you,
Niousha

Never center images if you need proper alignment. If centering is enabled, it forces Slicer to ignore the image origin information stored in the image header.

Slicer can compute the inverse of any transformation (linear, bspline, thin-plate spline, displacement field, or any combination of these in any order, with/without inversion in each component) with a single click. Go to the Transforms module and click the Invert button in the Edit section. You can then align the fixed image to the moving image by applying the inverse transform to the fixed image.
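
If you prefer scripting, here is a minimal Python console sketch of the same steps (the node names 'FixedVolume' and 'RegistrationTransform' are placeholders for your own nodes):

    # Get the nodes (replace the names with your own volume and transform node names)
    fixedVolume = slicer.util.getNode('FixedVolume')
    registrationTransform = slicer.util.getNode('RegistrationTransform')

    # Invert the transform in place (equivalent to clicking Invert in the Transforms module)
    registrationTransform.Inverse()

    # Apply the (now inverted) transform to the fixed volume
    fixedVolume.SetAndObserveTransformNodeID(registrationTransform.GetID())

    # Optionally resample the volume so the transform is baked into the voxels
    slicer.vtkSlicerTransformLogic().hardenTransform(fixedVolume)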

In Slicer you can choose from a wide range of image registration methods: intensity-based, landmark-based, surface-based; with various metrics and transformations. For intensity-based registration with a warping transform, I would recommend trying these:

Compute TRE: mark landmarks in the fixed and moving volumes using the Markups module. Apply the computed transform to the landmarks of the moving volume. Write a short script (2-3 lines of Python code) that computes the distance between the fixed landmarks and the transformed moving landmarks.
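
For reference, a sketch of such a script (assuming two markups fiducial lists named 'F' and 'M', with the computed transform already hardened on 'M' so that its points are in the fixed image coordinate system; the node names are placeholders):

    import numpy as np

    f = slicer.util.getNode('F')  # fixed landmarks
    m = slicer.util.getNode('M')  # transformed (hardened) moving landmarks

    errors = []
    for i in range(f.GetNumberOfFiducials()):
        pf = [0.0, 0.0, 0.0]
        pm = [0.0, 0.0, 0.0]
        f.GetNthFiducialPosition(i, pf)
        m.GetNthFiducialPosition(i, pm)
        errors.append(np.linalg.norm(np.array(pf) - np.array(pm)))

    print('TRE per landmark (mm):', errors)
    print('Mean TRE (mm):', np.mean(errors))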

Hi Andras,

I tried the inverse transform the way you explained and I can see the change
in the image, but I still have the same issue as before: the hardened image
cannot be saved and the same error message appears. I couldn't figure out
what the issue is. Any thoughts on that?

As for the intensity-based registration, I'm more interested in knowing
whether there is any way the registration can be quantified (not performed)
based on Mutual Information, which is an intensity measure.

Thank you,
Niousha

Saving should not be a problem, but there may be issues if it's a color image. Could you check whether saving works if you convert the image to a scalar (grayscale) volume (using the Vector to Scalar Volume module)?

Mutual information registration minimizes the entropy of the joint histogram. You can find the final value in the application log (the metric value should decrease in each iteration), but it is not a metric of registration quality.
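
For reference, the standard relationship between mutual information and the joint histogram entropy (this is the textbook definition, not anything Slicer-specific):

    MI(F, M) = H(F) + H(M) - H(F, M)

where H(F) and H(M) are the entropies of the individual image histograms and H(F, M) is the entropy of the joint histogram. Better alignment increases MI; the optimizer works with the negative of this value, which is why the logged metric value decreases during registration.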

Registration is typically assessed using spatial error metrics. If you have ground truth landmark points, then compute TRE. You may also compute the Hausdorff distance using the Segment Comparison module (between a ground truth segmentation in the fixed image and a segmentation of the moving image transformed into the fixed image coordinate system).
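
If you would rather script the surface comparison than use the Segment Comparison GUI, here is a rough sketch using VTK's Hausdorff filter (it assumes both segmentations have already been exported to model nodes; the node names 'GroundTruthModel' and 'RegisteredModel' are placeholders):

    import vtk

    gt = slicer.util.getNode('GroundTruthModel')
    reg = slicer.util.getNode('RegisteredModel')

    # Compute the symmetric Hausdorff distance between the two surfaces
    hausdorff = vtk.vtkHausdorffDistancePointSetFilter()
    hausdorff.SetInputData(0, gt.GetPolyData())
    hausdorff.SetInputData(1, reg.GetPolyData())
    hausdorff.Update()
    print('Hausdorff distance (mm):', hausdorff.GetHausdorffDistance())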

The only intensity-based metric that I’m aware of that might be somewhat applicable for assessing registration (or, more accurately, the effect of remaining registration error) is the module for comparison of dose volumes (in SlicerRT extension).

Hi Andras,

Thanks for the info. After many trials, I realized that the images should be
cropped and converted to grayscale in 3D Slicer; if those steps are done in
MATLAB first, the images won't work in 3D Slicer.

I also found a feature in 3D Slicer called Metric Test that seems to let us
calculate MMI and MSE. However, I could not find any documentation for this
feature. I was wondering whether this is something I can reliably use for
calculating the MI? I used two identical images with exact overlap and tried
different histogram bin numbers ranging from 50 to 1000. The MI values I got
ranged from -0.925 to -1.79812. I was wondering how one can interpret these
numbers, and what histogram bin number is suggested?

Thanks,
Niousha

What do you mean? What do you expect to happen and what happens?

I think the only help is in the tooltips. Let us know if anything is unclear.

Yes.

It is the metric that is minimized during registration. Read about the metric in journal papers, such as https://www.researchgate.net/publication/222899257_BRAINSFIT_Mutual_information_registrations_of_whole-brain_3D_Images_using_the_insight_toolkit.

Usually registration is not very sensitive to the number of bins. You can use the default value or change it slightly and see if it improves the results. See more details about the mutual information metric in journal papers.

Hi Andras,

Thanks for your previous answers. I have some questions about using the
Metric Test to calculate MI. I'm interested in obtaining the MI value for a
specific region of the images rather than the entire image sets. I
understand that this is normally done using masking, but I couldn't find a
place to input the masked images in addition to the fixed and moving images.
Is this something that can be done when running the Metric Test for MI? If
not, what are the alternatives?

Thanks,
Niousha

You have two options:

  • Option A: Extend the Metric Test module to accept mask images. You need to build Slicer from source code and be somewhat familiar with C++ programming (so that you can find out where to add the masking option).
  • Option B: Create a simple scripted module that computes the metric at different positions. You can check out SimpleITK examples for registration here (for example, a registration initialization example); a minimal sketch of this approach is shown below.
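
For Option B, here is a rough sketch of how a masked mutual information value could be evaluated with SimpleITK (the file names, the mask, and the number of bins are placeholders/assumptions; this only evaluates the metric for the current transform, it does not run a registration):

    import SimpleITK as sitk

    # Placeholder file names; the mask must be defined on the fixed image grid
    fixed = sitk.ReadImage('fixed.nrrd', sitk.sitkFloat32)
    moving = sitk.ReadImage('moving.nrrd', sitk.sitkFloat32)
    fixedMask = sitk.ReadImage('fixed_mask.nrrd', sitk.sitkUInt8)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricFixedMask(fixedMask)  # restrict the metric to the masked region
    reg.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()))  # identity

    # MetricEvaluate returns the (negative) Mattes MI value for the current transform
    print('Masked MI metric value:', reg.MetricEvaluate(fixed, moving))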

Hi Andras,

Can you provide some instructions on how to access the source code? I
followed the instructions provided here:
https://www.slicer.org/wiki/Documentation/Nightly/Developers/Build_Instructions#CHECKOUT_slicer_source_files
but it wasn't clear to me what should be done next and how to access the
Metric Test's source code from there.

I'm really new to this, so your help is really appreciated.

Thanks,
Niousha

After the build is complete, you'll see the source files (in c:\D\S4D\BRAINSTools\BRAINSFit\PerformMetricTest.cxx).

Hi Andras,

Do we need to do any pre-processing steps before we import images that have
been modified in MATLAB?

The problem is that any time I make some modification to my .jpg images in
MATLAB (i.e. rotation, grayscale conversion, cropping), those images change
in strange and unexpected ways when loaded back into Slicer.

Thanks,
Niousha

Do you work with color images? 2D or 3D?

Hi Andras,

Yes, the images are in color and 2D initially. What I do is align them in
MATLAB using control point registration, then save each individual modified
2D image, and then stack those in 3D Slicer. I then convert the stack to
grayscale in 3D Slicer.

Niousha

"align them in MATLAB using control point registration, then save each
individual modified 2D image, and then stack those in 3D Slicer"

This should work. What problem do you see? If you just browse the aligned images in an image viewer, do they appear correctly, nicely aligned?

Hi Andras,

I kind of figured out what the problem was: I should have used imwrite to
save the images properly to avoid this issue. Thanks for following up.
I was wondering whether Slicer supports combining multiple transforms that
can be applied to an image. For example, consider having 4 images (image 1
to image 4), with the ultimate goal of mapping image 4 so that it aligns
with image 1. Can we find a final transformation matrix in 3D Slicer such
that when we apply it to image 4, image 4 matches image 1? Please note that
we have already obtained the transformation matrices between image 2 and 1,
image 3 and 2, and image 4 and 3.

Thanks,
Niousha

Yes, Slicer can concatenate any number and types of transforms (both linear and warping). You can build a transform tree by drag-and-dropping transforms and volumes in the Data module / Transform hierarchy tab. You'll probably need something like this:

[screenshot: example transform hierarchy in the Data module]
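
If you prefer to set this up from the Python console instead of drag-and-drop, here is a sketch (the node names are placeholders for your own transform and volume nodes):

    t21 = slicer.util.getNode('Transform_2_to_1')  # maps image 2 -> image 1
    t32 = slicer.util.getNode('Transform_3_to_2')  # maps image 3 -> image 2
    t43 = slicer.util.getNode('Transform_4_to_3')  # maps image 4 -> image 3
    image4 = slicer.util.getNode('Image4')

    # Build the transform chain: t43 under t32 under t21
    t32.SetAndObserveTransformNodeID(t21.GetID())
    t43.SetAndObserveTransformNodeID(t32.GetID())

    # Put image 4 under the deepest transform; it is now mapped into image 1 space
    image4.SetAndObserveTransformNodeID(t43.GetID())

    # Optionally bake the combined transform into the volume
    slicer.vtkSlicerTransformLogic().hardenTransform(image4)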


Hi Andras,

You mentioned briefly here (CT to optical image registration - #11 by lassoan) that the only help on the Metric Test is in "tooltips". I can't find them. Where are they?

Tooltips are the small windows that are displayed when you keep your mouse steady over a widget:

[screenshot: a tooltip displayed while hovering over a widget]
