Get a leftmost/rightmost coordinate with SimpleITK

Operating system: Windows
Slicer version: 4.11.20200930

I create a labelmap volume node and read it as a SimpleITK image.
Now I show this labelmap volume in the red, green, and yellow slice windows.
I want to know how to get the leftmost/rightmost coordinate in a slice window.
Since I'm not familiar with SimpleITK, I have no idea how to approach this.
Thank you.

Here is my code:
labelmapVolumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode")
labelmapVolumeNode.SetName("Test_volume")
slicer.modules.segmentations.logic().ExportAllSegmentsToLabelmapNode(segmentationNode, labelmapVolumeNode, slicer.vtkSegmentation.EXTENT_REFERENCE_GEOMETRY)

test_volume = slicer.util.getNode("Test_volume")
inputImage = sitkUtils.PullVolumeFromSlicer(test_volume)

You can use the GetBounds method of the volume node to get the physical bounds (in the RAS coordinate system).
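For example, a minimal sketch of reading those bounds (assuming the labelmap node is named "Test_volume" as in your snippet and is not under a parent transform):

volumeNode = slicer.util.getNode("Test_volume")
bounds = [0.0] * 6
volumeNode.GetBounds(bounds)  # [R_min, R_max, A_min, A_max, S_min, S_max] in mm
print("Leftmost R:", bounds[0], "Rightmost R:", bounds[1])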

Note that this is not very useful (gives just a rough approximation) in general, because the volume axes can be in arbitrary order and direction in physical space. If you want your code to be always accurate then you can use the IJKToRAS transform (that specifies origin, spacing, and IJK axis directions) and the image extent. You can get them by calling GetIJKToRAS method of the volume node and GetExtent method of the image data.
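A sketch of that approach (the node name "Test_volume" is taken from your snippet; this transforms the eight corner voxels of the extent to RAS and takes the component-wise min/max):

import vtk
import numpy as np

volumeNode = slicer.util.getNode("Test_volume")
ijkToRas = vtk.vtkMatrix4x4()
volumeNode.GetIJKToRASMatrix(ijkToRas)
extent = volumeNode.GetImageData().GetExtent()  # (i_min, i_max, j_min, j_max, k_min, k_max)

# Transform every corner of the voxel extent to RAS, then take min/max per axis.
cornersRas = []
for i in (extent[0], extent[1]):
    for j in (extent[2], extent[3]):
        for k in (extent[4], extent[5]):
            cornersRas.append(ijkToRas.MultiplyPoint([i, j, k, 1.0])[:3])
cornersRas = np.array(cornersRas)
print("RAS min:", cornersRas.min(axis=0))
print("RAS max:", cornersRas.max(axis=0))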

I'm not sure whether I have applied the IJKToRAS transform correctly.
However, I get an error that says "'Image' object has no attribute 'GetExtent'".
Thanks a lot.

I import SimpleITK as below:

import SimpleITK as sitk
import sitkUtils

Here is my code now:

test_volume = slicer.util.getNode("Test_volume")
direction = vtk.vtkMatrix4x4()
test_volume.GetRASToIJKMatrix(direction)
test_volume.ApplyTransformMatrix(direction)
image = sitkUtils.PullVolumeFromSlicer(test_volume)
dims = image.GetExtent()

Now I have fixed the error, but I don't understand the output.
From image.GetSize(), I get (283, 206, 265).
From vimage.GetExtent(), I get (0, 282, 0, 205, 0, 264).
I'm not sure how to get the coordinates from these values.
Thank you.

Here is my code now:
test_volume = slicer.util.getNode("Test_volume")
direction = vtk.vtkMatrix4x4()
test_volume.GetRASToIJKMatrix(direction)
test_volume.ApplyTransformMatrix(direction)
image = sitkUtils.PullVolumeFromSlicer(test_volume)
direction = image.GetDirection()
print(image.GetSize())
vimage = test_volume.GetImageData()
dims = vimage.GetExtent()
print(dims)

The coordinates I want to get are where the red arrow points to in the screenshot below.
[screenshot]

If you are interested in the bounding box of the content of a segment then you can get that directly from the segmentation object (in voxel coordinates). You can get an oriented bounding box from the Segment Statistics module.
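A sketch of both options, assuming a segmentation node named "Segmentation" (hypothetical name) and using the Segment Statistics parameter names listed in the Slicer script repository:

import SegmentStatistics

segmentationNode = slicer.util.getNode("Segmentation")
segmentId = segmentationNode.GetSegmentation().GetNthSegmentID(0)

# Bounding box in voxel coordinates, directly from the segment's binary labelmap representation
labelmapRepName = slicer.vtkSegmentationConverter.GetSegmentationBinaryLabelmapRepresentationName()
binaryLabelmap = segmentationNode.GetSegmentation().GetSegment(segmentId).GetRepresentation(labelmapRepName)
print("Voxel extent:", binaryLabelmap.GetExtent())

# Oriented bounding box via the Segment Statistics module
segStatLogic = SegmentStatistics.SegmentStatisticsLogic()
segStatLogic.getParameterNode().SetParameter("Segmentation", segmentationNode.GetID())
segStatLogic.getParameterNode().SetParameter("LabelmapSegmentStatisticsPlugin.obb_origin_ras.enabled", "True")
segStatLogic.getParameterNode().SetParameter("LabelmapSegmentStatisticsPlugin.obb_diameter_mm.enabled", "True")
segStatLogic.getParameterNode().SetParameter("LabelmapSegmentStatisticsPlugin.obb_direction_ras_x.enabled", "True")
segStatLogic.computeStatistics()
stats = segStatLogic.getStatistics()
print("OBB diameter (mm):", stats[segmentId, "LabelmapSegmentStatisticsPlugin.obb_diameter_mm"])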

There are so many kinds of bounding boxes (aligned with image axes, patient axes, principal axes, oriented bounding box; potentially transformed using linear or non-linear transforms; in voxel or physical coordinate system) and so many ways to get them that it would help if you could write a bit about what you would like to use this for and how, because it would just take too much time to describe all the possibilities.


My goal is to obtain the volume of the red and blue areas. Here is my algorithm:

  1. Get the leftmost and rightmost coordinates in the red slice window.
  2. Go through the image and fill the holes by replacing 0 with 1, so that the blue area is created.

My problems are:

  • Because the red and black areas are two separate volumes, I also wonder whether it is possible to combine them into one labelmap volume, since a slice window can only show one labelmap at a time.
  • How to map and visualize a point from a 2D slice in the 3D view. I used TransformIndexToPhysicalPoint() and a fiducial node before, but the position was weird (see the sketch after this code).
# It doesn't look like situated at the corner.
index = (image.GetSize()[0], image.GetSize()[1], image.GetSize()[2])
image.TransformIndexToPhysicalPoint(index)
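On the second point, a sketch of placing a fiducial at the corner voxel, assuming the pulled SimpleITK image is in the LPS coordinate system (the ITK convention) and the volume node is not under a transform; note that the last valid index is GetSize() - 1, not GetSize():

import sitkUtils

volumeNode = slicer.util.getNode("Test_volume")
image = sitkUtils.PullVolumeFromSlicer(volumeNode)

# GetSize() is one past the last valid index in each dimension.
cornerIndex = tuple(s - 1 for s in image.GetSize())
pointLps = image.TransformIndexToPhysicalPoint(cornerIndex)

# SimpleITK physical points are LPS; Slicer markups expect RAS, so negate the first two components.
pointRas = [-pointLps[0], -pointLps[1], pointLps[2]]

fiducialNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsFiducialNode")
fiducialNode.AddFiducialFromArray(pointRas)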

I will explain my problem in detail below.

Now I'm trying to find the coordinates of the green line in the red slice window, just like in the picture below. I want to go through every slice in the red slice window and find the coordinates.


I first cut the image into a 2D slice, called the target image, for testing.
Once I reach a point whose value is not zero, I break the loop.
I want to know how to check this point's coordinates in the 3D view, in order to verify that this code works. Then I can modify the parameters to find the coordinates of the green line.
Thank you.

Here is my code:

image = sitkUtils.PullVolumeFromSlicer(test_volume)
direction = image.GetDirection()
image = sitk.Flip(image, [direction[0] < 0, direction[4] < 0, direction[8] < 0])
image_shape = image.GetSize()
target_img = image[0:image_shape[0]-1, 0:image_shape[1]-1, 0]
array = sitk.GetArrayFromImage(target_img)

for j in range(image_shape[0]-1):
    for k in range(image_shape[1]-1):
        if array[k, j] > 0:
            # I want to know how to map this point to 3d view.
            print(k, j, array[k, j])
            break

You cannot implement such low-level image processing in Python; it would be way too slow. For example, to iterate through a tiny 100x100x100 image you would do 1 million iterations. In C++ or other low-level languages this would take a fraction of a second, while in Python it can take minutes.

Instead, you need to think about how you can implement a feature using high-level operations. For example, you can do the projection that you described above by copying the entire shape shifted by one pixel, repeated 100 times, and it would be orders of magnitude faster than iterating through each pixel of the image in Python.
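For example, a numpy sketch of that idea (using slicer.util.arrayFromVolume on the "Test_volume" node from above; which array axis corresponds to which anatomical direction depends on the volume's IJK orientation):

import numpy as np

# Voxel array in (slice, row, column) order of the underlying IJK grid.
labelArray = slicer.util.arrayFromVolume(slicer.util.getNode("Test_volume"))

# First/last nonzero column in every slice, without any per-voxel Python loop.
nonzeroColumns = labelArray.any(axis=1)    # shape: (slices, columns)
hasContent = nonzeroColumns.any(axis=1)    # slices that contain any labeled voxel
leftmost = nonzeroColumns.argmax(axis=1)   # first nonzero column per slice (valid only where hasContent is True)
rightmost = labelArray.shape[2] - 1 - nonzeroColumns[:, ::-1].argmax(axis=1)

# The "copy the shape shifted by one pixel, repeated" projection in a single call:
# fill every voxel that lies after the first labeled voxel along the column axis.
filled = np.maximum.accumulate(labelArray, axis=2)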

I guess you are still working on this:

I would recommend having a look at the existing Slicer module for almost the same task (GitHub - PerkLab/BreastReconstruction: Tool for breast reconstruction surgery planning), because we already have this kind of projection implemented in it.

Ok! I understand. I’ll reconsider my project. Thanks for your patience.