Consider the following code:
```python
import numpy as np

x = np.array([1, 5, 6, 10])  # non-uniformly spaced coordinates
f = x**2                     # function values at the points x
grad1 = np.gradient(f, x)    # df/dx
grad2 = np.gradient(f) / np.gradient(x)  # df/di * di/dx = df/dx
```
I would have expected, by the chain rule, that `grad1` equals `grad2`; the `i` in the comment above is simply a uniform "index" coordinate. After testing, this equality holds for simple linear functions, but not for e.g. `x**2` as shown above. I'm now wondering whether there is a theoretical reason the chain rule shouldn't hold in general for derivatives estimated by finite differences.
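To make that concrete, here is a quick check of both cases (a minimal sketch; the linear test function is my own choice):

```python
import numpy as np

x = np.array([1.0, 5.0, 6.0, 10.0])   # same non-uniform coordinates

for f in (3 * x + 2, x**2):            # a linear and a quadratic test function
    grad1 = np.gradient(f, x)
    grad2 = np.gradient(f) / np.gradient(x)
    print(np.allclose(grad1, grad2))   # True for the linear f, False for x**2
```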
I think the problem lies in the following observation:
`np.gradient` does not, in general, assume the input coordinates `x` to be uniform. But I think this expression of the chain rule does, and I suspect that assumption is implicit in the call `np.gradient(x)`. When we call `np.gradient(f, x)` with non-uniform `x`, we are really performing an interpolation at each interior point, rather than a naive centered difference.
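To test this hypothesis, note what `grad2` reduces to at the interior points (a quick sketch; `secant` is my name for the two-point slope through each point's neighbours):

```python
import numpy as np

x = np.array([1.0, 5.0, 6.0, 10.0])
f = x**2

grad2 = np.gradient(f) / np.gradient(x)

# At interior points np.gradient(f) is (f[2:] - f[:-2]) / 2 and
# np.gradient(x) is (x[2:] - x[:-2]) / 2, so the factors of 2 cancel:
secant = (f[2:] - f[:-2]) / (x[2:] - x[:-2])
print(np.allclose(grad2[1:-1], secant))  # True: just the slope between neighbours
```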
No, I would not expect `grad1` to equal `grad2` for non-uniform spacing. The second-order central-difference formula for the first derivative with non-uniform spacing is shown in the `np.gradient` documentation. When you don't specify the spacing, numpy assumes a constant spacing `h` (as you've said) and uses the simplified formula that goes with that assumption, i.e. `f' = (f[2:] - f[:-2]) / (2*h)` at the interior points. When you take the derivative the way you have, you instead end up with `f' = (f[2:] - f[:-2]) / (x[2:] - x[:-2])` at the interior points. This is not equivalent to the non-uniform formula because the two neighbours get the wrong weights, so your `grad2` results are wrong at the interior points.
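For illustration, the interior-point formula from the docs can be written out and compared against both estimates (a sketch; only interior points are handled, with `hs = x[i] - x[i-1]` and `hd = x[i+1] - x[i]`):

```python
import numpy as np

x = np.array([1.0, 5.0, 6.0, 10.0])
f = x**2

hs = x[1:-1] - x[:-2]   # spacing to the left neighbour
hd = x[2:] - x[1:-1]    # spacing to the right neighbour

# Second-order non-uniform central difference from the np.gradient docs:
interior = (hs**2 * f[2:] + (hd**2 - hs**2) * f[1:-1] - hd**2 * f[:-2]) \
           / (hs * hd * (hd + hs))

print(interior)                              # [10. 12.] -- exact for f = x**2
print(np.gradient(f, x)[1:-1])               # [10. 12.] -- matches
print((f[2:] - f[:-2]) / (x[2:] - x[:-2]))   # [ 7. 15.] -- the naive ratio
```

Because the non-uniform formula weights the two neighbours by `hs` and `hd`, it recovers the exact derivative for a quadratic, while the unweighted ratio does not.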