Is there a better solution for head registration?

We now have different registration tools for head registration, like Elastix, ANTs, and BRAINSFit.
We also have different series to register, like T1, T2, FLAIR, PCA, and CT.
If we choose T1 as the fixed volume and the other series as moving volumes, what is the best approach to get the best registration result?

It sounds like you have a good solution. Can you list any problems you have with the current approach?

I don't have a good solution, pieper; I'm just confused.
I have several series to register, like T1, T2, CT, and FLAIR.
I found that some commercial software like Brainlab doesn't expose registration options like [use rigid] or [use affine], so I wonder how to implement that kind of function in Slicer.

Check out the SlicerANTs extension; it offers options like rigid, rigid+scaling, and deformable registration via SyN.

Yes, ANTs is an awesome extension, as are Elastix and BRAINSFit.
In a hospital environment, many doctors in small or mid-sized hospitals don't know the difference between rigid and affine, so some commercial software only provides one registration button without options.
I want to implement such a button in Slicer, and I currently see two directions:
1. Find a way to score the result of a registration, run several registrations, and choose the one with the highest score.
2. Find good parameters for each modality pair (like T1 to CT) and record them in a database; when the same modality pair comes up later, choose the parameters from the database.
Both directions have many problems. Is there a better solution?

Here are my thoughts… but you have thought about this more than me, so don't take me too seriously:

(1) will potentially take a long time, making registration take several times longer than any single run. If that's not a problem for your application, then maybe (1) sounds good. You could use normalized cross correlation for the score, for example. But taking so long to register is not great for an end-user sort of thing.
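A minimal sketch of what direction (1) could look like; `run_registration` and `score` here are placeholders for whatever tooling you actually use (e.g. SlicerElastix/SlicerANTs/BRAINSFit plus an NCC score), not real APIs:

```python
# Hypothetical sketch of direction (1): try several presets, keep the best score.
# `run_registration(fixed, moving, preset)` and `score(fixed, result)` are
# stand-ins you would replace with calls to your registration tool of choice.
def pick_best_registration(fixed, moving, presets, run_registration, score):
    best_preset, best_score, best_result = None, float("-inf"), None
    for preset in presets:
        result = run_registration(fixed, moving, preset)
        s = score(fixed, result)
        if s > best_score:
            best_preset, best_score, best_result = preset, s, result
    return best_preset, best_score, best_result
```

Note the cost is explicit here: one full registration per preset, which is exactly the runtime concern above.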

(2) sounds interesting but I would start by experimenting with different modalities first. Maybe you can discover what approach is best for each pair of modalities, and then hard-code the choice of registration algorithm for each pair. If while doing this you feel that the process of discovery can be automated, then automate it once for yourself and hard-code the choice into your module. Having a system “learn” the best parameters while it is being used in a hospital environment sounds to me like it introduces too many unnecessary failure points as well as makes it a less consistent and more difficult to understand system.
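The hard-coding idea in (2) can be as simple as a lookup table. The preset names and modality labels below are illustrative assumptions, not values from any particular tool or study:

```python
# Hypothetical mapping from (fixed, moving) modality pair to a registration
# preset, discovered offline by experimenting as described above.
REGISTRATION_PRESETS = {
    ("T1", "T2"): "rigid",
    ("T1", "FLAIR"): "rigid",
    ("T1", "CT"): "rigid",
}

def choose_preset(fixed_modality, moving_modality, default="affine"):
    """Return the hard-coded preset for a pair, or a conservative default."""
    return REGISTRATION_PRESETS.get((fixed_modality, moving_modality), default)

print(choose_preset("T1", "CT"))  # -> rigid
print(choose_preset("T2", "CT"))  # -> affine (falls back to the default)
```

This keeps the hospital-facing behavior deterministic: nothing "learns" in production, the table is fixed at release time.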

Thanks, ebrahim, thank you for your reply.
Your ideas are important to me; I will think about them seriously.
You mentioned [normalized cross correlation] above; is there a VTK filter or extension that implements this algorithm in Slicer?

Maybe someone else can comment on whether there is a VTK filter; I don't know.

But it’s pretty quick to implement in python using numpy for example:

import numpy as np

# create some random 3D image arrays with a bit of correlation to them
im1 = np.random.randn(50,50,50)
im2 = im1+np.random.randn(50,50,50)

# compute their NCC
mu1 = im1.mean() # means
mu2 = im2.mean()
alpha1 = (im1**2).mean() # second moments
alpha2 = (im2**2).mean()
alpha12 = (im1*im2).mean() # cross term
numerator = alpha12 - mu1*mu2
denominator = np.sqrt((alpha1 - mu1**2) * (alpha2-mu2**2))
ncc = numerator / denominator # be careful of division by zero!

Thanks, you've helped me a lot.
Your code is awesome.


Dear ebrahim:
I have integrated your code into the Slicer environment and found a problem: the value of [denominator] is NaN because the value inside [np.sqrt] is negative. Should I force the result to be positive?
The code I wrote is pasted below.

Hmm it should not be possible for the thing under the square root to be negative. This is because both alpha1-mu1**2 and alpha2-mu2**2 are necessarily nonnegative:


Here, for example, x_1, ..., x_N stand for the image pixel/voxel values of im1, and mu and alpha stand for mu1 and alpha1:

alpha - mu**2 = (1/N) * sum(x_i**2) - mu**2 = (1/N) * sum((x_i - mu)**2) >= 0

As long as all image values are real (i.e. have no imaginary component), it should not be possible to get something negative.
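A quick numpy sanity check of that identity (just a verification, not part of the NCC code):

```python
import numpy as np

x = np.random.randn(1000)   # any real-valued array
mu = x.mean()
alpha = (x**2).mean()

# alpha - mu**2 equals the mean squared deviation, so it cannot be negative
lhs = alpha - mu**2
rhs = ((x - mu)**2).mean()
print(np.isclose(lhs, rhs), lhs >= 0)  # -> True True
```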

The denominator contains the variances of the image values, so it can be zero (when an image is all one value, constant over space). Then you'd get a division by zero, but maybe there's some floating point error that pushes it negative sometimes? If that is what's happening, you can add a small smoothing factor to the denominator. It takes you slightly away from true NCC but makes your calculation robust to images with very small variance. So instead of ncc = numerator / denominator you can try ncc = numerator / (denominator + 0.001) or something like that.

Please verify first that when you get denominator<0 it’s a very small negative number. If it’s not small then something is wrong and please let me know! I use this NCC calculation for a few things and would like to fix it if something is wrong. Thanks!


Thank you so much for your reply.
I pasted below the result I calculated from the code above.

The value [alpha2 - mu2**2] is negative.

Hmm this is puzzling

It should not be possible that (im2**2).mean() is 545 while im2.mean() is 40.

The math above shows that alpha2-mu2**2 equals ((im2-im2.mean())**2).mean() and hence should not be negative.

Can you inspect ((im2-im2.mean())**2).mean() and ((im2-mu2)**2).mean() and alpha2-mu2**2 … these should all be the same number.

Can you print out the image shapes; maybe there is some broadcasting that we are missing.
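For context, this is the kind of silent broadcasting being asked about: if the shapes differ, numpy may combine the arrays without raising an error, and every derived statistic changes meaning. A toy example (the shapes are made up for illustration):

```python
import numpy as np

a = np.random.randn(208, 256, 240)
b = np.random.randn(208, 256, 1)  # trailing dimension of 1 broadcasts silently

# no error is raised: b is repeated along the last axis to match a
print((a * b).shape)  # -> (208, 256, 240)
```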

Of course, I pasted the result below.

And the image shapes? im1.shape, im2.shape

It's so strange that ((im2 - im2.mean())**2).mean() is coming out different from (im2**2).mean() - im2.mean()**2

Yes, it's different. The shape information is pasted below.

When I try on a random array it does work out:

import numpy as np
im2 = np.random.rand(208, 256, 240)

# these two come out the same modulo float arithmetic errors
print((im2**2).mean() - im2.mean()**2)
print(((im2 - im2.mean())**2).mean())

Maybe there is a problem in your definition of alpha2? Can you try printing those two quantities exactly?

print((im2**2).mean() - im2.mean()**2)
print(((im2 - im2.mean())**2).mean())

If they are different, then there's something we are fundamentally missing about how the arrays are being handled. If they are the same, then your definition of alpha2 or mu2 was messed up.

The result is different.

But with a random array like I showed in the code snippet above, do those two numbers come out the same for you?

How about im1? Are those numbers the same for you when you try with im1?

The result is different for im1, too.