Final value of registration cost function metric

Hi Slicer Forum,

Is there a way to get the final value of the cost function (default being MattesMutualInformation, I believe) when registering volumes from Jupyter?

For a set of CTs, we are looping through a ton of MRIs to see which of those MRIs, when registered, best fits each CT. Obtaining the final value that the registration algorithm optimized for each combination would be the best solution.

Here is the core registration code:
Is the final value of the cost function stored in any of the objects in this script?

    time_1_node = slicer.util.loadVolume(filename1)
    time_2_node = slicer.util.loadVolume(filename2)
    output_node = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode")
    transform_node = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLTransformNode")
    
    parameters = {}
    parameters["fixedVolume"] = time_1_node
    parameters["movingVolume"] = time_2_node
    parameters["outputVolume"] = output_node
    parameters["outputTransform"] = transform_node
    parameters["initializeTransformMode"] = "useCenterOfHeadAlign"
    parameters["useRigid"] = True
        
    cliNode = slicer.cli.runSync(slicer.modules.brainsfit, None, parameters)
    
    myStorageNode2 = transform_node.CreateDefaultStorageNode()
    transform_filename = "output_transform.h5"
    myStorageNode2.SetFileName(transform_filename)
    myStorageNode2.WriteData(transform_node)
    
    myStorageNode = output_node.CreateDefaultStorageNode()
    output_filename = newFileString
    myStorageNode.SetFileName(output_filename)
    myStorageNode.WriteData(output_node)
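For context, our outer loop looks roughly like the sketch below. `register_and_get_metric` is a hypothetical placeholder for the registration code above plus whatever mechanism ends up exposing the final cost-function value, which is exactly the part we are missing:

```python
# Hypothetical sketch: register each moving MRI to the fixed CT and keep the
# run with the best (lowest) final metric value. register_and_get_metric is a
# placeholder callable, not a real Slicer API.
def pick_best_moving(fixed, moving_candidates, register_and_get_metric):
    results = []
    for moving in moving_candidates:
        metric = register_and_get_metric(fixed, moving)
        results.append((moving, metric))
    # MattesMutualInformation is minimized by the optimizer, so lower is better
    best_moving, best_metric = min(results, key=lambda pair: pair[1])
    return best_moving, best_metric, results
```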

A similarity metric value is usually not comparable between different images. A better metric value for the same image pair should indicate that the images are better aligned. However, comparing metric values across two different image pairs (for example moving1&fixed and moving2&fixed) does not tell you whether the fixed image is more similar to, or better aligned with, the moving1 or moving2 image.

Anyway, if you want to get the final metric value then you can specify a log file name for BRAINSfit, which will contain the final metric value:

    # Specify a filename for the metric table
    logFile = slicer.app.temporaryPath + "/brainsfitlog.csv"
    cliNode.SetParameterAsString("logFileReport", logFile)

    # ...run the registration

    # Load the final metric value from the last row of the metric table
    metricTable = slicer.util.loadTable(logFile)
    finalMetricValue = float(metricTable.GetCellText(
        metricTable.GetNumberOfRows() - 1,
        metricTable.GetColumnIndex("MetricValue")))
    slicer.mrmlScene.RemoveNode(metricTable)  # we don't need the metric table anymore
    print(finalMetricValue)
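If you prefer not to round-trip the log through a MRML table node, the last-row value can also be read with plain Python's csv module. This is a minimal sketch, assuming the log is a CSV file with a `MetricValue` column as in the snippet above:

```python
import csv

def final_metric_from_log(log_path):
    """Return the MetricValue from the last row of a metric log CSV file."""
    with open(log_path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        raise ValueError("metric log is empty: " + log_path)
    return float(rows[-1]["MetricValue"])
```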

However, the BRAINSFit module always returns a hardcoded placeholder value (123.456789) instead of the actual metric value - see implementation here.

@hjmjohnson do you know why this feature got disabled? Could you re-enable it?

Hey @lassoan, thanks for educating us re: metrics! In that case, is there a better value/calculation to determine which of [<moving1>, <moving2>, <moving3>, ..., <movingN>] is best aligned with <fixed>?

Thanks as well for the sample code - we’ve implemented it and see what you mean by the hardcoded placeholder value. We can try to code a patch but let us know if there is a source update for that feature!

There are some similarity metrics used for intensity-based image registration that might be usable for this. You just need to be careful which metric you choose, because both intensity differences and spatial misalignment can worsen the metric value. You may also want to use a smaller region than was used for registration (for registration, a larger region may make the method more robust at the expense of somewhat reduced accuracy). Normalizing the intensity of the images in that region before computing the metric may also help make the evaluation more consistent across moving images.
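For illustration, here is a minimal NumPy sketch of one such metric, normalized cross-correlation, which is fairly robust to global linear intensity scaling. It assumes the two volumes have already been resampled onto the same voxel grid (e.g. exported from Slicer with `slicer.util.arrayFromVolume`), and the optional `mask` argument restricts evaluation to a smaller region as suggested above:

```python
import numpy as np

def normalized_cross_correlation(fixed, moving, mask=None):
    """NCC between two same-shape arrays. Returns 1.0 when the images are
    identical up to a linear intensity transform, values near 0 when there
    is no correlation."""
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    if mask is not None:
        fixed = fixed[mask]    # evaluate only inside the region of interest
        moving = moving[mask]
    f = fixed - fixed.mean()   # remove mean so the metric ignores intensity offset
    m = moving - moving.mean()
    denom = np.sqrt((f * f).sum() * (m * m).sum())
    if denom == 0:
        return 0.0             # constant image: correlation is undefined
    return float((f * m).sum() / denom)
```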

If you want to evaluate best fit only in terms of minimal residual error in spatial alignment then you may use metrics that are not commonly used for automatic intensity-based image registration (because they are hard to implement). For example, you could implement a method that detects landmarks and computes the distance between corresponding landmarks in the fixed and moving images. Or you can segment relevant structures and compute the Hausdorff distance between them.
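As a sketch of the second idea, the symmetric Hausdorff distance between two segmented structures, represented here as point sets (e.g. arrays of surface voxel coordinates), can be computed with NumPy alone. This brute-force version is only practical for modest point counts; a KD-tree based implementation would scale better:

```python
import numpy as np

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (NxD and MxD)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # pairwise Euclidean distance matrix, shape (N, M)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    # farthest point of one set from the nearest point of the other, both ways
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```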

You may also ask advice from registration experts at the ITK forum.