How to create two segmentations that are the exact same size?

Operating system: Windows 10
Slicer version: 4.11.20210226
Expected behavior:

I am working on a radiomics project that uses a fixed-size, sphere-shaped region as the input region of interest. The goal of the study is to determine whether a region of constant shape and size can provide useful radiomic information. I am creating the spheres with a script, modified from the bounding box script in the Slicer script repository (see the code snippet below).

segmentationNode = slicer.mrmlScene.GetFirstNodeByClass('vtkMRMLSegmentationNode')
nodeLabel = segmentationNode.GetName()

# Compute statistics
import SegmentStatistics
segStatLogic = SegmentStatistics.SegmentStatisticsLogic()
segStatLogic.getParameterNode().SetParameter("Segmentation", segmentationNode.GetID())
segStatLogic.getParameterNode().SetParameter("LabelmapSegmentStatisticsPlugin.obb_origin_ras.enabled",str(True))
segStatLogic.getParameterNode().SetParameter("LabelmapSegmentStatisticsPlugin.obb_diameter_mm.enabled",str(True))
segStatLogic.getParameterNode().SetParameter("LabelmapSegmentStatisticsPlugin.obb_direction_ras_x.enabled",str(True))
segStatLogic.getParameterNode().SetParameter("LabelmapSegmentStatisticsPlugin.obb_direction_ras_y.enabled",str(True))
segStatLogic.getParameterNode().SetParameter("LabelmapSegmentStatisticsPlugin.obb_direction_ras_z.enabled",str(True))
segStatLogic.computeStatistics()
stats = segStatLogic.getStatistics()

# Create segmentation node
seg = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
seg.SetName(nodeLabel + "sphere")

# Draw ROI for each sphere
import numpy as np
import vtk  # already imported by default in Slicer's Python console, included here for completeness
for segmentId in stats["SegmentIDs"]:
  # Get statistics
  obb_origin_ras = np.array(stats[segmentId,"LabelmapSegmentStatisticsPlugin.obb_origin_ras"])
  obb_diameter_mm = np.array(stats[segmentId,"LabelmapSegmentStatisticsPlugin.obb_diameter_mm"])
  obb_direction_ras_x = np.array(stats[segmentId,"LabelmapSegmentStatisticsPlugin.obb_direction_ras_x"])
  obb_direction_ras_y = np.array(stats[segmentId,"LabelmapSegmentStatisticsPlugin.obb_direction_ras_y"])
  obb_direction_ras_z = np.array(stats[segmentId,"LabelmapSegmentStatisticsPlugin.obb_direction_ras_z"])
  
  # Create sphere
  sphere = vtk.vtkSphereSource()
  sphere.SetRadius(2.5)
  sphere.SetCenter(0.0, 0.0, 0.0)
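  # (vtkSphereSource radius is in physical units here, so this creates a 5 mm diameter sphere)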

  sphereNode = slicer.modules.models.logic().AddModel(sphere.GetOutputPort())

  # Position and orient ROI using a transform
  obb_center_ras = obb_origin_ras+0.5*(obb_diameter_mm[0] * obb_direction_ras_x + obb_diameter_mm[1] * obb_direction_ras_y + obb_diameter_mm[2] * obb_direction_ras_z)
  boundingSphereToRasTransform = np.row_stack((np.column_stack((obb_direction_ras_x, obb_direction_ras_y, obb_direction_ras_z, obb_center_ras)), (0, 0, 0, 1)))
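  # (4x4 homogeneous matrix: OBB axis directions as columns, OBB center as translation, so the
  #  sphere created at the model origin ends up centered in the segment's oriented bounding box)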
  boundingSphereToRasTransformMatrix = slicer.util.vtkMatrixFromArray(boundingSphereToRasTransform)
  transformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLTransformNode")
  transformNode.SetAndObserveMatrixTransformToParent(boundingSphereToRasTransformMatrix)
  sphereNode.SetAndObserveTransformNodeID(transformNode.GetID())
  segment = segmentationNode.GetSegmentation().GetSegment(segmentId)
  if segment.GetName() == "Segment_1":
    sphereNode.SetName("pos")
  elif segment.GetName() == "Segment_2":
    sphereNode.SetName("neg")
  else:
    sphereNode.SetName("error")
    
  # Convert model to segmentation
  slicer.modules.segmentations.logic().ImportModelToSegmentationNode(sphereNode, seg)
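  # (the model is imported as a closed-surface segment; the voxel counts reported later come from
  #  the binary labelmap conversion, which depends on the segmentation's reference image geometry)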
  slicer.mrmlScene.RemoveNode(sphereNode)
  slicer.mrmlScene.RemoveNode(transformNode)

segto = seg.GetSegmentation()
segto.GetNthSegment(0).SetColor(178/255, 212/255, 242/255)
seg.CreateClosedSurfaceRepresentation()

Actual behavior:

However, when I run radiomic feature extraction, the results show that the two segmentations differ in size, sometimes by a large amount. Below is an example extraction on one of the sample datasets (MRBrainTumor1), showing that the number of voxels differs between the two spheres, as do the LeastAxisLength and MajorAxisLength. I have tried a few workarounds, such as placing the sphere on the intersection point of 4 voxels, but I have been unable to find information on how the RAS coordinate system and the global voxel coordinate system relate to each other.
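
My best guess so far has been to go through the volume's RAS-to-IJK matrix, roughly like this (an untested sketch, not part of the script above; "volumeNode" is just a placeholder for the loaded MRBrainTumor1 volume):

# Sketch: convert a RAS position to voxel (IJK) indices of a volume.
# "volumeNode" is a placeholder name; this also ignores any parent transform on the volume.
volumeNode = slicer.util.getNode('MRBrainTumor1')
rasToIjk = vtk.vtkMatrix4x4()
volumeNode.GetRASToIJKMatrix(rasToIjk)
point_ras = [10.0, -20.0, 30.0]                          # example position in RAS (mm)
point_ijk_h = rasToIjk.MultiplyPoint(point_ras + [1.0])  # homogeneous coordinates
point_ijk = [int(round(c)) for c in point_ijk_h[0:3]]    # nearest voxel index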

Any advice on how to create two geometrically congruent segmentations on different parts of an image? Any suggestions/workarounds are appreciated. Thank you!

Segmentation (script) results:

Extraction settings: [screenshot]

Radiomic feature results: [screenshots]

Without having had time to look deeply at your question (reading the code, etc.): would spatial registration of the inputs solve the problem? If not, look into setting the same reference geometry on both segmentation nodes.
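
I haven't tested it, but something along these lines might be a starting point ("masterVolumeNode" is just a placeholder for your input volume):

# Untested sketch: make every segmentation node rasterize onto the same grid as the input volume,
# so that binary labelmap conversion uses identical geometry for all of them.
masterVolumeNode = slicer.util.getNode('MRBrainTumor1')  # placeholder; use your own input volume
for segNode in slicer.util.getNodesByClass('vtkMRMLSegmentationNode'):
  segNode.SetReferenceImageGeometryParameterFromVolumeNode(masterVolumeNode)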

Hi Csaba,

Thanks for taking the time to respond. I’m interested in trying your proposed solutions, but I’m not sure where to start. How would I go about spatially registering the inputs, and would setting the reference geometry as in your snippet be enough on its own? I assume that the nodes are currently both registered to the global coordinate system.

Matthew