I am trying to detect edges in a volume, so I am using OpenCV. After converting my slice to a NumPy array and detecting the edges, I can display the result in my Jupyter notebook. Here is what I get:
However, when I try to display this in 3D Slicer, the image gets split across 2 different slices and the orientation isn't right. Here's what I mean:
I'm not really sure where I went wrong; I have tried changing the dimensions in several different ways, but nothing seems to work. Is there something I missed that would fix this issue?
OpenCV is for computer vision. It is very good at image processing for computer vision, but not well suited to medical image processing. For example, most OpenCV functions operate in pixel space instead of physical space, cannot deal with arbitrarily oriented images (they ignore the IJKToRAS transform), and many functions do not work on 3D images; in general, its developers did not design the library with medical image processing needs in mind.
Most likely your life will be much simpler and your results better if you use ITK instead. You can use either SimpleITK or ITK-Python. There is a simple example of using SimpleITK in Slicer here.
My issue with SimpleITK is that whenever I try to execute CannyEdgeDetectionImageFilter(), I get an error that I don't quite understand and I don't know where it's coming from. Here is my error:
RuntimeError: Exception thrown in SimpleITK CannyEdgeDetectionImageFilter_Execute: sitk::ERROR: Pixel type: 16-bit unsigned integer is not supported in 3D by class slicer_itk::simple::CannyEdgeDetectionImageFilter.
It seems that the scalar type of your image is not supported by the Canny filter. Casting to a supported scalar type before filtering should fix it; as far as I know, the Canny filter only operates on floating-point images, and casting a 16-bit integer volume to 32-bit float does not lose any data.