About screen coordinates

Hello. Using 3D Slicer's screenshot function, I captured an image of the green slice view. The captured image is 634×441 pixels, and I display it with OpenCV.
[image]

The volume loaded into 3D Slicer has a shape of 240×240×208. I then place markup points in the green view of 3D Slicer, as shown in the screenshot below, and the IJK coordinates of the marker point are (134, 120, 143).
[screenshot]
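For context, the IJK coordinates above come from the markup position and the volume geometry. A minimal sketch of how they can be read in the Slicer Python console, with hypothetical node names "MyVolume" and "F", and assuming no additional transforms are applied in the scene:

```python
import vtk
import slicer

# Hypothetical node names; replace with the actual nodes in your scene
volumeNode = slicer.util.getNode("MyVolume")
markupsNode = slicer.util.getNode("F")

# RAS position of the first control point (assumes no extra transforms
# on the volume or the markups node)
ras = [0.0, 0.0, 0.0]
markupsNode.GetNthControlPointPositionWorld(0, ras)

# Convert RAS -> IJK using the volume's geometry
rasToIjk = vtk.vtkMatrix4x4()
volumeNode.GetRASToIJKMatrix(rasToIjk)
ijk = rasToIjk.MultiplyPoint(ras + [1.0])[:3]
print([int(round(c)) for c in ijk])  # e.g. [134, 120, 143]
```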

The problem is: after converting for size and scale, the marker position I draw in OpenCV does not match the position marked in 3D Slicer. The marker point drawn in OpenCV is shown in the figure below.
[image]

My coordinate conversion process is as follows:
x = 634-(634×134/240)
y = 441-(441×163/208)
Is there something wrong with my coordinate conversion? The x coordinate seems correct, but the y coordinate is off. How can I fix this problem?
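Written out in Python, my conversion looks like this. It is only a sketch: the mapping of I to the horizontal axis and K to the vertical axis, and the flips, are assumptions on my part, and it assumes the 634×441 screenshot spans exactly the 240×208 in-plane extent of the volume with no margins or aspect-ratio padding.

```python
def index_to_pixel(index, dim, img_size, flip):
    """Proportionally map a volume index in [0, dim) to a screenshot pixel coordinate."""
    p = img_size * index / dim
    return img_size - p if flip else p

x = index_to_pixel(134, 240, 634, flip=True)  # horizontal, as in my formula above
y = index_to_pixel(143, 208, 441, flip=True)  # vertical, using the K index; whether
                                              # this flip is correct is part of my question
print(round(x), round(y))
```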

Also, an "out of frame" message appeared when I placed marker F-3. What is the reason for this?

Taking a screenshot means that the pixels have already been scaled and translated by the view parameters. You can work out the transformation back to source pixel space (see how it is done in the DataProbe module's source code), but it is going to be awkward. You would be better off working with the image data directly as numpy arrays. See the script repository for examples.
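For instance, something along these lines could replace the screenshot-and-rescale approach entirely. This is a minimal sketch rather than an example from the script repository; the node name "MyVolume", the choice of the second array axis for the green view, and the output file name are assumptions.

```python
import numpy as np
import slicer

volumeNode = slicer.util.getNode("MyVolume")   # hypothetical node name

# Voxels as a numpy array indexed [k, j, i]; no view scaling or panning is applied
volumeArray = slicer.util.arrayFromVolume(volumeNode)

i, j, k = 134, 120, 143                        # IJK coordinates of the marker from above

# Slice through j; for this volume that gives a (208, 240) array. Whether this
# really corresponds to the green view depends on the volume's orientation.
coronalSlice = volumeArray[:, j, :].astype(np.float32)

# Normalize to 8-bit, convert to 3 channels, and mark the voxel directly
rng = coronalSlice.max() - coronalSlice.min()
slice8 = np.uint8(255 * (coronalSlice - coronalSlice.min()) / max(rng, 1e-6))
marked = np.dstack([slice8, slice8, slice8])
marked[k, i] = (0, 0, 255)                     # row = K index, column = I index

# Flip vertically so superior ends up at the top, as in the slice view,
# then save the raw array for the external OpenCV script
np.save("green_slice_marked.npy", np.flipud(marked))
```

In the external OpenCV script, `cv2.imshow("green", np.load("green_slice_marked.npy"))` followed by `cv2.waitKey(0)` then shows the slice with the marker at the correct voxel, with no view transform to undo.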