Calculating the blur kernel between 2 images


Unlike the standard (and more challenging) de-blurring and super-resolution scenarios, I have access to both the original (sharp) image G and its blurred version B. I'm simply looking for the blur kernel h. Because B is taken with a real camera, the relation is:

B = G * h + N     (where * denotes convolution and N is some additive noise)

Naturally, this is an over-constrained problem, since h is small in size compared to G and B, and so every valid pixel in the pair of images generates an equation on the entries of h.
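For concreteness, the forward model can be simulated like this (a NumPy/SciPy sketch; the image size, kernel, and noise level are arbitrary stand-ins):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
G = rng.random((64, 64))                  # stand-in for the sharp image
h = np.ones((5, 5)) / 25.0                # a simple box-blur kernel
N = 0.01 * rng.standard_normal((60, 60))  # additive noise

# 'valid' keeps only pixels fully covered by the kernel
B = convolve2d(G, h, mode='valid') + N
print(B.shape)  # (60, 60)
```

Here 60 * 60 = 3600 pixels each yield one linear equation in the 25 kernel entries, which is why the problem is over-constrained.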

But what would be the simplest way to actually implement this? My thoughts so far:

  • Moving to the frequency domain and doing a division (as this answer suggests). But wouldn't this be numerically unstable because of the noise?
  • Cross-correlation - I've only found examples for 1-D signals and couldn't figure out how to apply it in the 2-D case of images.
  • Carefully constructing an over-constrained linear system G'h' = B', where h' is a vectorized version of the entries of the kernel h, and solving it with some optimization procedure. But this is very tedious, and the matrix G' and vector B' are bound to be huge.
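The noise sensitivity of the first option can be illustrated numerically. The following NumPy sketch (not from the original post) uses a circular-convolution model, under which plain frequency-domain division is exact in the noise-free case:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.random((64, 64))                   # stand-in sharp image
h = np.zeros((64, 64))
h[:5, :5] = 1.0 / 25.0                     # small box kernel, zero-padded to full size

# circular convolution via the FFT (the model under which plain division is exact)
B_clean = np.real(np.fft.ifft2(np.fft.fft2(G) * np.fft.fft2(h)))
B_noisy = B_clean + 1e-3 * rng.standard_normal(G.shape)

# naive frequency-domain division
h_est_clean = np.real(np.fft.ifft2(np.fft.fft2(B_clean) / np.fft.fft2(G)))
h_est_noisy = np.real(np.fft.ifft2(np.fft.fft2(B_noisy) / np.fft.fft2(G)))

err_clean = np.max(np.abs(h_est_clean - h))  # essentially floating-point roundoff
err_noisy = np.max(np.abs(h_est_noisy - h))  # noise amplified where |fft2(G)| is small
print(err_clean, err_noisy)
```

Even with noise three orders of magnitude below the signal, the division amplifies it in every frequency bin where fft2(G) is small, so the noisy estimate is markedly worse than the clean one.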

A concrete example in any programming language from C++ to MATLAB would be extremely useful.

1 Answer

Answered by stav (accepted):

Using @Shai's relevant suggestion, I can now give the detailed answer.

Options 2 and 3 that I suggested are in fact the same, and are apparently the right way to go. It is also what is done in the E-step of the paper @Shai linked. Posing the over-constrained problem is in fact quite simple.

To pose these equations correctly, we use the fact that the dot product of every kernel-sized block of G, centered around some pixel, with the 180-degree rotated version of h should equal the corresponding pixel in B. This follows directly from the fact that B and G are related by convolution, so blocks of G relate to pixels of B by cross-correlation (hence the 180-degree rotation).
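This flip relationship can be sanity-checked numerically (a small NumPy/SciPy sketch, not part of the original answer):

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(2)
G = rng.random((16, 16))                   # stand-in sharp image
h = rng.random((3, 3))                     # stand-in kernel

# convolving with h is the same as cross-correlating with h rotated 180 degrees
B_conv = convolve2d(G, h, mode='valid')
B_xcorr = correlate2d(G, np.rot90(h, 2), mode='valid')
print(np.allclose(B_conv, B_xcorr))        # True
```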

The MATLAB code now becomes:

%inputs: B,G - gray level blurred and sharp images respectively (double)
%        szKer - 2 element vector specifying the size of the required kernel
%outputs: mKer - the recovered kernel, 
%         imBsynth - the sharp image convolved with the recovered kernel
%
%example usage:  mKer = calcKer(B, G, [11 11]);

function [mKer, imBsynth] = calcKer(B, G, szKer)

  %get the "valid" pixels from B (i.e. those that do not depend 
  %on zero-padding or a circular assumption)
  imBvalid = B(ceil(szKer(1)/2):end-floor(szKer(1)/2), ...
      ceil(szKer(2)/2):end-floor(szKer(2)/2));

  %get a matrix where each row corresponds to a block from G 
  %the size of the kernel
  mGconv = im2col(G, szKer, 'sliding')';

  %solve the over-constrained system using MATLAB's backslash
  %to get a vector version of the cross-correlation kernel
  vXcorrKer = mGconv \ imBvalid(:);

  %reshape and rotate 180 degrees to get the convolution kernel
  mKer = rot90(reshape(vXcorrKer, szKer), 2);

  if (nargout > 1)
      %if there is indeed a convolution relationship between B and G
      %the following will result in an image similar to B
      imBsynth = conv2(G, mKer, 'valid');
  end

end
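For readers without MATLAB (im2col requires the Image Processing Toolbox), here is an equivalent NumPy sketch of the same procedure; the function and variable names are my own, not from the original answer:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.signal import convolve2d

def calc_ker(B, G, sz_ker):
    """NumPy translation of the calcKer logic above (a sketch, not the original code).

    B, G   : blurred and sharp grayscale images (2-D float arrays, same shape)
    sz_ker : (rows, cols) of the kernel to recover
    """
    kr, kc = sz_ker
    # "valid" pixels of B: those not affected by boundary handling
    B_valid = B[(kr - 1) // 2 : B.shape[0] - kr // 2,
                (kc - 1) // 2 : B.shape[1] - kc // 2]
    # each row is one kernel-sized block of G (NumPy's analogue of im2col)
    G_blocks = sliding_window_view(G, (kr, kc)).reshape(-1, kr * kc)
    # least-squares solve for the cross-correlation kernel,
    # then flip 180 degrees to get the convolution kernel
    v_xcorr, *_ = np.linalg.lstsq(G_blocks, B_valid.ravel(), rcond=None)
    m_ker = np.rot90(v_xcorr.reshape(kr, kc), 2)
    return m_ker, convolve2d(G, m_ker, mode='valid')
```

Under a noise-free synthetic blur (B = convolve2d(G, h, mode='same')), this recovers h to floating-point accuracy.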

I also found that some constraints on the solution may be necessary in practical scenarios, e.g. forcing the kernel to be positive, smooth, or symmetric. How exactly to incorporate these is beyond the scope of this question, but they will usually take the form of a linear constraint or a regularization term when solving for vXcorrKer.
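As one example of such a constraint, nonnegativity can be enforced by swapping the unconstrained least-squares solve for nonnegative least squares. This is a NumPy/SciPy sketch on a synthetic problem, not part of the original answer:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.optimize import nnls
from scipy.signal import convolve2d

# synthetic setup mirroring the answer: sharp G, blurred B, 5x5 kernel
rng = np.random.default_rng(4)
G = rng.random((32, 32))
h_true = rng.random((5, 5))                # a nonnegative ground-truth kernel
B = convolve2d(G, h_true, mode='same')

G_blocks = sliding_window_view(G, (5, 5)).reshape(-1, 25)
b_valid = B[2:-2, 2:-2].ravel()            # "valid" pixels for a 5x5 kernel

# nonnegative least squares replaces the plain backslash/lstsq solve;
# flip 180 degrees to convert the cross-correlation kernel to a convolution kernel
v_xcorr, residual = nnls(G_blocks, b_valid)
m_ker = np.rot90(v_xcorr.reshape(5, 5), 2)
```

Smoothness or symmetry constraints would instead add a regularization term (e.g. Tikhonov) or tie kernel entries together with linear equality constraints.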