How to convert kernel to matrix notation?

I'm trying to understand the bicubic convolution algorithm and haven't been able to work out how the kernel, given as a piecewise function,

$$W(x) = \begin{cases} (a+2)|x|^3 - (a+3)|x|^2 + 1 & |x| \le 1 \\ a|x|^3 - 5a|x|^2 + 8a|x| - 4a & 1 < |x| < 2 \\ 0 & \text{otherwise} \end{cases}$$

is turned into this matrix form:

$$p(t) = \frac{1}{2} \begin{bmatrix} 1 & t & t^2 & t^3 \end{bmatrix} \begin{bmatrix} 0 & 2 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 2 & -5 & 4 & -1 \\ -1 & 3 & -3 & 1 \end{bmatrix} \begin{bmatrix} f_{-1} \\ f_0 \\ f_1 \\ f_2 \end{bmatrix}$$

I understand that to arrive at this matrix, a was set to -0.5. But no matter how I look at it, I can't arrive at the non-symmetric matrix shown.

I've looked through the paper by Keys, but he does not expand into matrix notation, and I've struggled to get there myself.

Any insight would be much appreciated.

There are 2 answers

Answer by K Sco (accepted):

I've found and understood where Keys describes the process. You can follow his derivation from top to bottom; the most important bit to note is Equation (7), his expansion of the interpolation kernel.

All of the values within the matrix come from the coefficients of the c-terms in that expansion. The first row of the matrix corresponds to the coefficients of the constant terms, and the first column corresponds to the c_{j-1} terms. This can be seen by comparing the matrix entries to Equation (7)'s coefficients.
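This reading can be checked numerically: for t in [0, 1), the weight applied to each of the four neighbouring samples is a cubic polynomial in t, and stacking those polynomials' coefficients reproduces the matrix. A sketch under that assumption (`keys_kernel` and the column ordering are my own naming, with a = -1/2):

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys' piecewise-cubic interpolation kernel W(x)."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * (x**3 - 5 * x**2 + 8 * x - 4)
    return 0.0

# For t in [0, 1), the weight on sample f_{k+j} is W(t - j), a cubic in t.
# Fit each weight to a cubic and stack the coefficients: column j of the
# matrix then holds the polynomial multiplying f_{k+j}.
ts = np.linspace(0.0, 0.99, 25)
M = np.zeros((4, 4))
for col, j in enumerate([-1, 0, 1, 2]):
    ws = [keys_kernel(t - j) for t in ts]
    M[:, col] = np.polynomial.polynomial.polyfit(ts, ws, 3)

# Rows are coefficients of 1, t, t^2, t^3; columns correspond to
# f_{k-1}, f_k, f_{k+1}, f_{k+2}. Scaling by 2 matches the 1/2 prefactor.
print(np.round(2 * M, 6))
```

The printed matrix (times the 1/2 prefactor) matches the non-symmetric matrix from the question, which confirms that each column collects one sample's coefficients.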

I was able to use this understanding to implement the cubic convolution method to interpolate a surface for which I was able to tune the value of a in order to see the response. I'm happy to help expand on this if anything is unclear!
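A minimal 1-D sketch of such an implementation with a tunable a (function names are my own; the surface case applies the same step along each axis):

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel with tunable parameter a (vectorised)."""
    x = np.abs(x)
    return np.where(x <= 1,
                    (a + 2) * x**3 - (a + 3) * x**2 + 1,
                    np.where(x < 2, a * (x**3 - 5 * x**2 + 8 * x - 4), 0.0))

def cubic_interp(f, t, a=-0.5):
    """Interpolate samples f (at integer positions 0..len(f)-1) at position t."""
    k = int(np.floor(t))
    frac = t - k
    # Four neighbouring samples f[k-1..k+2]; clamp indices at the borders.
    idx = np.clip([k - 1, k, k + 1, k + 2], 0, len(f) - 1)
    # Weight on sample f[k+j] is W(frac - j) for j in {-1, 0, 1, 2}.
    offsets = frac - np.array([-1.0, 0.0, 1.0, 2.0])
    return float(np.dot(keys_kernel(offsets, a), np.asarray(f)[idx]))

# Samples of x^2; with a = -1/2 the scheme reproduces quadratics exactly.
print(cubic_interp([0.0, 1.0, 4.0, 9.0, 16.0], 2.5))  # → 6.25
```

Sweeping a (e.g. from 0 to -1) and re-running the interpolation is a convenient way to see how the parameter trades sharpness against overshoot.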

Answer by Cris Luengo:

Step 1 to see the relation is to multiply the kernel W(x) with the sampled input data f[n] for a given shift t. This gives 5 weights multiplying 5 input samples, which are summed to form an output sample p(t).

The matrix used to compute p(t) is not symmetric because, for any shift t that is not 0, the weights applied to the samples are not symmetric either. You can see this by writing out W(t+i), the weights applied to the 5 samples around the output position t (i in [-2, 2]).
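For a concrete check (a quick sketch, assuming a = -1/2), evaluating W(t+i) at t = 0.25 makes the asymmetry visible:

```python
def keys_kernel(x, a=-0.5):
    """Keys' piecewise-cubic interpolation kernel W(x)."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * (x**3 - 5 * x**2 + 8 * x - 4)
    return 0.0

t = 0.25
# Weights applied to the 5 samples at offsets i in [-2, 2] around position t.
weights = [keys_kernel(t + i) for i in range(-2, 3)]
print([round(w, 7) for w in weights])
```

The weights sum to 1 (as an interpolation kernel's weights must), but the values on either side of the central sample differ, so the resulting matrix cannot be symmetric. Only at t = 0 does the weight pattern collapse to [0, 0, 1, 0, 0].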