How to find the default colorbar unit or calibrate it in mm

Hi,

I have been able to color-map the surface difference with a range from -5 to 0, while the differences from model 1 to model 2 range from 3 mm to 9 mm, so I am wondering whether the color bar ranges or values are in mm or in whatever unit is set by the user.

It would be nice to know the units, or how to calibrate them.

Thanks,

The length unit is millimeters by default. So, unless you have changed it in the application settings (Units section), which is very unlikely, the values that you see are in millimeters.
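
If you want to double-check this programmatically, here is a minimal sketch for the Python console (it assumes the standard application selection and unit nodes that are present in a default Slicer session):

# Look up the application's length unit node and print its suffix
selectionNode = slicer.app.applicationLogic().GetSelectionNode()
unitNode = slicer.mrmlScene.GetNodeByID(selectionNode.GetUnitNodeID("length"))
print(unitNode.GetSuffix())  # prints "mm" with default settings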

Thanks!
If so, why is it not depicting the difference between -9 and 0, since my samples vary from 0 to 9 mm?
It only seems to show the colors if I set the range to -5 to 0, not -9 to 0.
Do you have any suggestions on how to set the range?
Best,

The sign indicates whether the closest point on one model is inside or outside the other model. If you are not interested in the sign then you can choose absolute_closest_point in the Model to Model Distance module.
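
If you want to keep the signed output but also see the unsigned magnitudes, a small sketch like this should work in the Python console (it assumes the output model is named 'VTK Output File', as in the code later in this thread, and that it has a "PointToPointVector" array):

import numpy as np
modelNode = getNode('VTK Output File')  # adjust to your output model name
# The magnitude of each point-to-point vector is the unsigned closest distance
vectors = slicer.util.arrayFromModelPointData(modelNode, 'PointToPointVector')
absoluteDistances = np.linalg.norm(vectors, axis=1)
print(f"absolute distances: {absoluteDistances.min():.1f} to {absoluteDistances.max():.1f} mm")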

The changes are inside, so the sign helps, but the signed range is -5 to 0.9 while the real changes range from -9 to 0. The colorbar values only make up for the difference (change) if I double them. Is there a way to make it a 1:1 ratio? Or does that mean that anything deeper than -5 is not picked up by the colormap? Is there a way to change the setting?
Thanks

By default the color legend is set to the data range, so if it is set to -5 to 0.9 then it means that the maximum distance found was 5 mm. Note that closest-distance measurement is not symmetric, so if you get an unexpected result it might be because you switched the source and target models.
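
You can verify the data range yourself with a couple of lines in the Python console (a sketch; the node name is the one used later in this thread):

modelNode = getNode('VTK Output File')  # your Model to Model Distance output
arrayName = modelNode.GetDisplayNode().GetActiveScalarName()  # scalar shown in the legend
print(modelNode.GetPolyData().GetPointData().GetArray(arrayName).GetRange())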

Make sure you use the latest Slicer Preview Release. It should look something like this:

If you want to change the displayed range then change Data scalar range (auto) to Manual and set the range that you prefer.
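
The same change can also be scripted; a sketch using the model display node API (node name assumed as in the code later in this thread):

displayNode = getNode('VTK Output File').GetDisplayNode()
# Equivalent to switching Scalar Range Mode from auto to Manual in the Models module
displayNode.SetScalarRangeFlag(slicer.vtkMRMLDisplayNode.UseManualScalarRange)
displayNode.SetScalarRange(-9.0, 0.0)  # whatever range you prefer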

I downloaded the Preview Release, then changed the displayed range to go to -8, which is the end of the range, then ran the pre-defect model as target and the registered models of both pre and post as source. I still get the same result. Also, ShapePopulationViewer was unstable. What else can I try now? Thanks

[Screenshot: Screen Shot 2022-03-15 at 10.44.12 PM]

The displayed range should run from -9 to 0, but the data suggests -6 to 0.9, so I don’t know what I need to do. If I leave it on auto, it ranges from -5 to 0.9.

Also, ShapePopulationViewer cannot be found to install as an extension on the newest Preview version.

Thanks,

If you set Scalar Range Mode to Data scalar range (auto) then the displayed range should be the same as the data range. If you find that the selected scalar’s range is not the same as the displayed data range then let us know.
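
A quick consistency check, sketched for the Python console (node name assumed as in the code below):

modelNode = getNode('VTK Output File')
displayNode = modelNode.GetDisplayNode()
arrayName = displayNode.GetActiveScalarName()
print("displayed range:", displayNode.GetScalarRange())
print("data range:", modelNode.GetPolyData().GetPointData().GetArray(arrayName).GetRange())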

Thanks for reporting, we are aware of this issue. The problem is expected to be fixed within a few days. You can track the progress here:

It was on Auto, then you told me to set it manually on the Preview Release. My data ranges from -9 to 0, but the displayed range runs from -5 to 0.9 (auto) or -6 to 0.9 (manual).

So I am letting you know. Is there another version that allows setting the range manually and has a working ShapePopulationViewer?
Thanks

How did you determine that your data ranges from -9 to 0?

-9 to 0 means the difference between model 1 (pre-defect) and model 2 (post-defect)

Are you referring to something else when you said data range?

How did you get the -9 and 0 numbers: did you save the .vtk/.vtp file and look into the file, load it into ParaView, type a Python command in Slicer, …?
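
For the Python option, a sketch like this prints every point data array on the output model together with its actual range (model name assumed as in the code later in this thread):

modelNode = getNode('VTK Output File')
pointData = modelNode.GetPolyData().GetPointData()
for i in range(pointData.GetNumberOfArrays()):
  arr = pointData.GetArray(i)
  print(arr.GetName(), arr.GetRange())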

None of that, but my gold standards range from -9 to 0, which I believe was depicted in the DICOM files. I also measured it in the DICOM data and found it to be -9 (inside of the pre model) to 0.

Maybe the “gold standard” data set computes something different. For example, the distance between corresponding points in two meshes is entirely different from the distance between closest surface points. How was that gold standard range computed? Do you know where the 9 mm distance was measured in the gold standard data set?

Maybe the “gold standard” data set computes something different. For example, the distance between corresponding points in two meshes is entirely different from the distance between closest surface points: How do I check this?

How was that gold standard range computed? It was created to be exact on model 2, which is the same as model 1 but with defects.

Yes, it was the floor of the defect from the crestal bone around a given tooth.

Probably you did not get the answer you expected because the closest point on the surface is searched in every direction. For example, if you simulated a cylinder-shaped hole, then the closest distance will not give you the depth of the hole; instead, the maximum distance will be the radius of the cylinder. Closest distance is also not symmetric: the result depends on which surface you choose as source and target.
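
To see this asymmetry in a self-contained toy example (pure VTK with made-up geometry, runnable in Slicer’s Python console):

import vtk

def maxClosestDistance(sourcePolyData, targetPolyData):
  # Largest distance from any source point to its closest point on the target surface
  implicitDistance = vtk.vtkImplicitPolyDataDistance()
  implicitDistance.SetInput(targetPolyData)
  points = sourcePolyData.GetPoints()
  return max(abs(implicitDistance.EvaluateFunction(points.GetPoint(i)))
             for i in range(points.GetNumberOfPoints()))

# Sphere A at the origin; the target mesh is A plus a second sphere 20 mm away
sphereA = vtk.vtkSphereSource()
sphereA.Update()
sphereB = vtk.vtkSphereSource()
sphereB.SetCenter(20, 0, 0)
sphereB.Update()
combined = vtk.vtkAppendPolyData()
combined.AddInputData(sphereA.GetOutput())
combined.AddInputData(sphereB.GetOutput())
combined.Update()

print(maxClosestDistance(sphereA.GetOutput(), combined.GetOutput()))  # ~0: every A point lies on the target
print(maxClosestDistance(combined.GetOutput(), sphereA.GetOutput()))  # ~20: sphere B's points are far from A

Swapping source and target changes the answer completely, which is why choosing them carefully matters.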

If you want to quantify bone loss then it may be easier to segment the bone on the two CBCTs and subtract the one with bone loss from the other. You can then use the Segment Statistics module to get robust metrics, such as volume and oriented bounding box size.

The colormap did depict the change in the right color scheme, but the deepest end, which is -9, shows as -5 with auto and -6 with manual. Is that all I can get?

Also, regarding the bone loss, please elaborate the steps in detail for me:

  1. How to subtract
  2. How to get robust metrics
  3. How to get actual measurements

Thanks

If the automatic scalar range goes to -5 then it means that the maximum distance in the entire mesh is 5 mm.

If you want to understand how this was computed then, for example, you can check what the value is at a specific mesh position using this code snippet.

Or you can copy-paste this code snippet to visualize a selected point on the output mesh, closest distance at that point, and the closest point position in the 3D view:

# Run this in Slicer's Python console. It assumes the Model to Model Distance
# output model is named 'VTK Output File' and has a "PointToPointVector" array.
modelNode = getNode('VTK Output File')

modelPointValues = modelNode.GetPolyData().GetPointData().GetArray("PointToPointVector")

# Reuse (or create) a point list named "F" with two control points:
# the hovered mesh point and the corresponding point on the other mesh
pointListNode = slicer.mrmlScene.GetFirstNodeByName("F")
if not pointListNode:
  pointListNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsFiducialNode","F")

while pointListNode.GetNumberOfControlPoints() < 2:
  pointListNode.AddControlPoint(0,0,0)

# Locator for finding the mesh point closest to the mouse cursor
pointsLocator = vtk.vtkPointLocator() # could try using vtk.vtkStaticPointLocator() if need to optimize
pointsLocator.SetDataSet(modelNode.GetPolyData())
pointsLocator.BuildLocator()

import numpy as np

def onMouseMoved(observer,eventid):
  # Find the mesh point closest to the current crosshair (mouse) position
  ras=[0,0,0]
  crosshairNode.GetCursorPositionRAS(ras)
  closestPointId = pointsLocator.FindClosestPoint(ras)
  ras = modelNode.GetPolyData().GetPoint(closestPointId)
  closestPointValue = modelPointValues.GetTuple(closestPointId)
  # Subtract the point-to-point vector to get the corresponding position on the
  # other mesh (flip the signs if your module stores the vector the other way)
  closestPointPositionOnOtherMesh = [ras[0]-closestPointValue[0], ras[1]-closestPointValue[1], ras[2]-closestPointValue[2]]
  distance = np.linalg.norm(np.array(ras)-np.array(closestPointPositionOnOtherMesh))
  print(f"distance = {distance:.1f}   point1 = {ras}   point2 = {closestPointPositionOnOtherMesh}")
  # Show both points in the 3D view
  pointListNode.SetNthControlPointPosition(0, ras)
  pointListNode.SetNthControlPointPosition(1, closestPointPositionOnOtherMesh)

crosshairNode = slicer.util.getNode("Crosshair")
observationId = crosshairNode.AddObserver(slicer.vtkMRMLCrosshairNode.CursorPositionModifiedEvent, onMouseMoved)

# To stop printing of values run this:
# crosshairNode.RemoveObserver(observationId)

This page is a good starting point for learning about image segmentation. You can subtract two segments by using the Logical operators effect in the Segment Editor. You can get segmentation metrics (volume, bounding box size, etc.) using the Segment Statistics module.
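
If you prefer to script those three steps, here is a rough sketch, assuming a segmentation node named "Segmentation" that already contains segments with IDs "bone_pre" and "bone_post" (all names here are hypothetical; the same steps can be done interactively in the Segment Editor and Segment Statistics GUIs):

segmentationNode = getNode("Segmentation")  # hypothetical node name

# 1. Subtract: the Logical operators effect turns bone_pre into bone_pre minus bone_post
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setCurrentSegmentID("bone_pre")  # hypothetical segment ID
segmentEditorWidget.setActiveEffectByName("Logical operators")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("Operation", "SUBTRACT")
effect.setParameter("ModifierSegmentID", "bone_post")  # hypothetical segment ID
effect.self().onApply()

# 2-3. Robust metrics and actual measurements via the Segment Statistics module
from SegmentStatistics import SegmentStatisticsLogic
segStatLogic = SegmentStatisticsLogic()
segStatLogic.getParameterNode().SetParameter("Segmentation", segmentationNode.GetID())
segStatLogic.computeStatistics()
stats = segStatLogic.getStatistics()
for segmentId in stats["SegmentIDs"]:
  print(segmentId, stats[segmentId, "LabelmapSegmentStatisticsPlugin.volume_mm3"], "mm3")

Oriented bounding box metrics are off by default; they can be enabled in the Segment Statistics module GUI before computing.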