How can I apply different kernels to different areas of an image in a deconvolution process?

Hi dear developers and users,

I have an image whose quality has been degraded by blurring. The blurring can be modelled as a degradation function, and the amount of blurring (the degradation function) varies across different areas of the image.
Suppose also that I know these degradation functions (kernels), as follows:

For pixel value equal to 10 → kernel1
For pixel value equal to 20 → kernel2
For pixel value equal to 30 → kernel3
…
For pixel value equal to 100 → kernel10

These kernels are 20×20 pixels in size. Now I want to apply these kernels to the image, through a deconvolution process, depending on the pixel values of the image: the kernel is selected according to the pixel value. For pixel values whose kernels are not available, I can interpolate them from the existing kernels. I expect this process to improve the quality of the image.
I know there are some deconvolution filters in the ITK “SimpleFilters” module, but I think all of these filters apply a single kernel over the entire image (unlike my case, where I want to apply different kernels to different areas of the image).
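The interpolation step described above can be sketched as follows. This is only a sketch under assumptions: the kernel names and values are placeholders (here random arrays stand in for the real kernels), and simple linear interpolation between the two nearest known kernels is assumed.

```python
import numpy as np

# Hypothetical lookup: pixel value -> 20x20 kernel (placeholders, not real kernels).
rng = np.random.default_rng(0)
known_values = np.array([10, 20, 30, 100])
known_kernels = {v: rng.random((20, 20)) for v in known_values}
for k in known_kernels.values():
    k /= k.sum()  # normalize so each kernel preserves total intensity

def kernel_for(pixel_value):
    """Linearly interpolate a kernel for a pixel value that has no kernel of its own."""
    if pixel_value <= known_values[0]:
        return known_kernels[known_values[0]]
    if pixel_value >= known_values[-1]:
        return known_kernels[known_values[-1]]
    hi = np.searchsorted(known_values, pixel_value)  # first known value >= pixel_value
    lo = hi - 1
    v_lo, v_hi = known_values[lo], known_values[hi]
    t = (pixel_value - v_lo) / (v_hi - v_lo)
    # Blend the two neighboring kernels; the result still sums to 1.
    return (1 - t) * known_kernels[v_lo] + t * known_kernels[v_hi]
```

For example, `kernel_for(25)` returns an even blend of the kernels for values 20 and 30.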

Right now, my main problem is how to apply these kernels to different areas based on the pixel values of the image.
What is the best way to do this?

Please guide me.
Thanks a lot.

I don’t know of existing code that does exactly what you describe, but it sounds like you could implement it fairly easily (it sounds like you know exactly what you need it to do). If you aren’t concerned about speed, probably the easiest approach is to do it in Python, accessing the image data as a numpy array.

If performance is an issue, then you can write C++ code for it, but that’ll require compiling and adds complexity.

You can also perform the filtering with 10 different kernels on the entire image, then use numpy array operations to combine the 10 processed images into 1 (see a simple example here).
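The filter-everything-then-combine idea can be sketched like this. A sketch only, under assumptions: `image` is a small random 2-D array, the kernels are uniform placeholders, and plain convolution (`scipy.ndimage.convolve`) stands in for whatever deconvolution filter you actually use; the combining step is the part that matters.

```python
import numpy as np
from scipy import ndimage

# Placeholder inputs: a 64x64 image with values 0..100 and ten 20x20 kernels
# (here all uniform) ordered by the pixel value they correspond to (10, 20, ..., 100).
rng = np.random.default_rng(0)
image = rng.integers(0, 101, size=(64, 64)).astype(float)
kernels = [np.full((20, 20), 1.0 / 400.0) for _ in range(10)]

# 1. Filter the whole image once per kernel (stand-in for the deconvolution step).
filtered = np.stack([ndimage.convolve(image, k, mode="reflect") for k in kernels])

# 2. For each pixel, pick the result whose kernel matches the ORIGINAL pixel value:
#    map values near 10, 20, ..., 100 to kernel indices 0..9 (nearest bin).
idx = np.clip(np.rint(image / 10.0).astype(int) - 1, 0, 9)
combined = np.take_along_axis(filtered, idx[np.newaxis, ...], axis=0)[0]
```

The key point is that the selection index is computed from the original image, not from any of the filtered results.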

Thank you very much for your guidance and the links you sent me.

Andras suggested that I perform the filtering with 10 different kernels on the entire image and then combine the 10 processed images into 1.

Is this equivalent to averaging the ten kernels and then filtering the entire image with the resulting (averaged) kernel? I’m not really sure this holds in my case.

I think the result of this proposed solution (by Andras) is different from applying different kernels in different areas of the image.

Please guide me further.

You don’t average the kernels; instead, you apply the candidate kernels one by one to the original image. Then, in the final image, you choose each voxel value from one of the processed images, based on the intensity at that voxel in the original image.

For certain kernel types (e.g., Gaussian smoothing) you can get orders-of-magnitude faster speeds and very similar results if you use an image pyramid (applying each filtering step to the result of the previous filtering).