How to get the RAS coordinates of a pixel in a slice view using Python

Hello, I am using the SlicerHeart module. It has a valve view function that can rotate the slice views ('Red', 'Green', 'Yellow') to any angle.
My goal is as follows: I want to programmatically add a fiducial at a pixel in the rotated slice view (the pixel position is given as a numpy array index).
My question is: how can I get the RAS coordinates of a pixel in a slice view at any angle using Python? Or is there another way to achieve my goal?

What would you like to achieve? Segment annulus curve? Leaflets?

Yes. I want to use a deep learning method to identify the mitral annulus point in each slice view. I haven't finished that yet; I think it is possible, although the accuracy may be low. The problem is: if I know the location of the point in a slice view (the location is given as [posx, posy] in the numpy array), how can I get the RAS coordinate of this point so that I can add the fiducial automatically? Thanks for your reply.

Markup point positions are defined in RAS coordinates, so you are all good. You can get control point positions using slicer.util.arrayFromMarkupsControlPoints or curve points using slicer.util.arrayFromMarkupsCurvePoints.

Note that defining the annulus curve takes about 1-2 minutes with the current module, and about 10-20 seconds with a more optimized module (we will publicly release such a module soon). Once you have the annulus contour in one frame, you can track it along the cardiac cycle using Sequence Registration. So annulus contouring and tracking already have quite a good solution with traditional approaches. I would recommend using deep learning for more challenging problems, such as leaflet segmentation.

Maybe I did not explain it well; let me try again. The deep learning method gives the location of the mitral annulus point in an image, so I want to add a fiducial at the corresponding position. My idea is to use the Script Repository example (Get reformatted image from a slice viewer as numpy array) to get the slice view as a numpy array, run the trained deep learning model on it, and then add a fiducial at the location of the corresponding pixel automatically with Python. How can I get the RAS coordinate of this pixel? Is this possible, or is there another way to achieve it?
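For reference, the conversion you are asking about is a single homogeneous 4x4 multiply: the slice node's XY-to-RAS matrix (in Slicer, `sliceNode.GetXYToRAS()`) maps a slice-view pixel [posx, posy] to RAS. A minimal numpy sketch, using a made-up example matrix in place of the real one from the slice node:

```python
import numpy as np

# Hypothetical XY-to-RAS matrix for illustration only; in Slicer the real
# matrix comes from sliceNode.GetXYToRAS().
xy_to_ras = np.array([
    [0.5, 0.0, 0.0, -10.0],  # 0.5 mm pixel spacing, R offset -10 mm
    [0.0, 0.5, 0.0,  20.0],
    [0.0, 0.0, 1.0,   5.0],
    [0.0, 0.0, 0.0,   1.0],
])

def slice_pixel_to_ras(matrix, posx, posy):
    """Map a slice-view pixel (posx, posy) to RAS with a homogeneous multiply."""
    ras_hom = matrix @ np.array([posx, posy, 0.0, 1.0])
    return ras_hom[:3]

print(slice_pixel_to_ras(xy_to_ras, 100, 200))
```

With the example matrix above, pixel (100, 200) maps to RAS (40, 120, 5): each pixel index is scaled by the spacing and shifted by the origin offset.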

Also, I am trying to use deep learning for leaflet segmentation based on 3D Slicer, but everything is still at an early stage, so if I have other problems I will post them here.

Thanks for your help and this discussion forum.

I have achieved my goal using the Script Repository examples. For example, the code to add a fiducial at the middle of the green slice view is as follows:

import numpy as np
sliceNodeID = "vtkMRMLSliceNodeGreen"
markupsNode = getNode("F")
# Get image data from slice view
sliceNode = slicer.mrmlScene.GetNodeByID(sliceNodeID)
appLogic = slicer.app.applicationLogic()
sliceLogic = appLogic.GetSliceLogic(sliceNode)
sliceLayerLogic = sliceLogic.GetBackgroundLayer()
reslice = sliceLayerLogic.GetReslice()
reslicedImage = vtk.vtkImageData()
reslicedImage.DeepCopy(reslice.GetOutput())
# Create new volume node using resliced image
volumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode")
volumeNode.SetIJKToRASMatrix(sliceNode.GetXYToRAS())
volumeNode.SetAndObserveImageData(reslicedImage)
# Get voxels as a numpy array (axis order is [k, j, i])
volumeArray = slicer.util.arrayFromVolume(volumeNode)
# The middle pixel position, in IJK coordinates
point_Ijk = [volumeArray.shape[2] // 2, volumeArray.shape[1] // 2, 0]
# Get physical coordinates from voxel coordinates
volumeIjkToRas = vtk.vtkMatrix4x4()
volumeNode.GetIJKToRASMatrix(volumeIjkToRas)
point_VolumeRas = [0, 0, 0, 1]
volumeIjkToRas.MultiplyPoint(np.append(point_Ijk, 1.0), point_VolumeRas)
# If volume node is transformed, apply that transform to get the volume's RAS coordinates
transformVolumeRasToRas = vtk.vtkGeneralTransform()
slicer.vtkMRMLTransformNode.GetTransformBetweenNodes(volumeNode.GetParentTransformNode(), None, transformVolumeRasToRas)
point_Ras = transformVolumeRasToRas.TransformPoint(point_VolumeRas[0:3])
# Add a markup at the computed position
markupsNode.AddFiducial(point_Ras[0], point_Ras[1], point_Ras[2], "test")
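For anyone checking this outside Slicer: the IJK-to-RAS step above is also just a 4x4 homogeneous multiply, so it can be sanity-checked with plain numpy. The matrix values below are made up; the real ones come from volumeNode.GetIJKToRASMatrix():

```python
import numpy as np

# Made-up IJK-to-RAS matrix standing in for volumeNode.GetIJKToRASMatrix()
ijk_to_ras = np.array([
    [-0.8,  0.0, 0.0,  50.0],  # 0.8 mm in-plane spacing with axis sign flips
    [ 0.0, -0.8, 0.0,  60.0],
    [ 0.0,  0.0, 1.5, -30.0],
    [ 0.0,  0.0, 0.0,   1.0],
])

point_ijk = np.array([128, 128, 0, 1.0])  # homogeneous voxel coordinate
point_ras = (ijk_to_ras @ point_ijk)[:3]
print(point_ras)
```

With this example matrix, voxel (128, 128, 0) lands at RAS (-52.4, -42.4, -30.0), matching what vtkMatrix4x4.MultiplyPoint computes in the snippet above.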