Hi everyone, I’m working on an image registration project. I want to compare 3–4 final registration results (fused images) and evaluate their performance in terms of readability. The approach I’m taking is to:
1. segment the fused images
2. calculate the Dice and Hausdorff distance between segmentation_fused_1 and segmentation_fused_2 (see the sketch below)
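To make step 2 concrete, here’s roughly what I have in mind — a minimal Python sketch using SimpleITK (the file names are placeholders, and it assumes both segmentations are binary label maps on the same grid):

```python
import SimpleITK as sitk

# Placeholder file names for segmentations of two fused results
seg1 = sitk.ReadImage("segmentation_fused_1.nii.gz", sitk.sitkUInt8)
seg2 = sitk.ReadImage("segmentation_fused_2.nii.gz", sitk.sitkUInt8)

# Dice overlap between the two binary segmentations
overlap = sitk.LabelOverlapMeasuresImageFilter()
overlap.Execute(seg1, seg2)
print("Dice:", overlap.GetDiceCoefficient())

# Hausdorff distance, in physical units (e.g. mm)
hausdorff = sitk.HausdorffDistanceImageFilter()
hausdorff.Execute(seg1, seg2)
print("Hausdorff:", hausdorff.GetHausdorffDistance())
```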
In general, evaluation of registration algorithms requires some sort of independent metric. For example, if you have manual segmentations of the same structure in two images, you can register the images, apply the registration to one segmentation, and then calculate how similar the registered segmentation is to the manual one (Dice coefficient, like you said).
You can also do this with landmarks or distance measurements…
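For example, something along these lines (a rough SimpleITK sketch; the file names and saved transform are placeholders, and it assumes the usual ITK convention that the resampling transform maps points from the fixed image space to the moving image space):

```python
import SimpleITK as sitk

# Placeholder inputs: a manual segmentation in the fixed image space,
# a manual segmentation in the moving image space, and the transform
# produced by the registration under evaluation.
fixed_seg = sitk.ReadImage("fixed_manual_seg.nii.gz", sitk.sitkUInt8)
moving_seg = sitk.ReadImage("moving_manual_seg.nii.gz", sitk.sitkUInt8)
transform = sitk.ReadTransform("registration.tfm")

# Apply the registration to the moving segmentation; nearest-neighbor
# interpolation keeps the labels intact.
registered_seg = sitk.Resample(moving_seg, fixed_seg, transform,
                               sitk.sitkNearestNeighbor, 0,
                               moving_seg.GetPixelID())

# How similar is the registered segmentation to the manual one?
overlap = sitk.LabelOverlapMeasuresImageFilter()
overlap.Execute(fixed_seg, registered_seg)
print("Dice:", overlap.GetDiceCoefficient())
```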
@muratmaga sorry, I should have been more specific. I’m working on co-registration of two image modalities, CT and OCT. The CT has a bigger FOV and the OCT has a smaller FOV.
And the fused image here refers to a composite image of OCT + CT.
OK. But the idea still applies. You need some kind of independent way of assessing the outcome of the registration. You can’t do that by looking at the Dice or Hausdorff distance between two sets of fused images.
Maybe landmark a few points that exist in both the OCT and CT images independently (in the original images), and then see what the error in landmark placement is in the fused image. Then you can choose the registration protocol that minimizes this error. That might be sufficient for a first-order pass.
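As a sketch of that landmark-error idea (the coordinates and transform file are made up; note that an ITK resampling transform maps points from the fixed/CT space into the moving/OCT space, so the CT landmarks are the ones being mapped here):

```python
import numpy as np
import SimpleITK as sitk

# Placeholder corresponding landmarks, placed independently in the
# original CT and OCT images (physical coordinates, e.g. mm).
ct_points = [(10.0, 12.5, 3.0), (22.1, 8.4, 5.5)]
oct_points = [(9.8, 12.9, 3.2), (21.7, 8.1, 5.9)]

# Transform produced by the registration protocol under test
# (assumed to map CT/fixed points to OCT/moving points).
transform = sitk.ReadTransform("registration.tfm")

# Map each CT landmark through the transform and measure the
# residual distance to its OCT counterpart.
errors = [np.linalg.norm(np.array(transform.TransformPoint(ct_pt)) -
                         np.array(oct_pt))
          for ct_pt, oct_pt in zip(ct_points, oct_points)]
print("Mean landmark error:", np.mean(errors))
```

Run this once per registration protocol and pick the one with the smallest mean (or max) error.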