Partial derivatives of Gaussian Process wrt features


Given a Gaussian process model with multidimensional features and scalar observations, how do I compute derivatives of the output with respect to each input, in GPyTorch or GPflow (or scikit-learn)?

1 Answer

Answered by STJ:

If I understand your question correctly, the following should give you what you want in GPflow with TensorFlow:

import numpy as np
import tensorflow as tf
import gpflow

### Set up toy data & model -- change as appropriate:
X = np.linspace(0, 10, 5)[:, None]
Y = np.random.randn(5, 1)
data = (X, Y)
kernel = gpflow.kernels.SquaredExponential()
model = gpflow.models.GPR(data, kernel)
Xtest = np.linspace(-1, 11, 7)[:, None]  # where you want to predict

### Compute gradient of prediction with respect to input:
# TensorFlow can only compute gradients with respect to tensor objects,
# so let's convert the inputs to a tensor:
Xtest_tensor = tf.convert_to_tensor(Xtest)  

with tf.GradientTape(
        persistent=True  # this allows us to compute different gradients below
) as tape:
    # By default, only Variables are watched. For gradients with respect to tensors,
    # we need to explicitly watch them:
    tape.watch(Xtest_tensor)

    mean, var = model.predict_f(Xtest_tensor)  # or any other predict function

# Note: tape.gradient sums the gradient over all output elements. Since each
# row of `mean` (and `var`) depends only on the corresponding row of
# Xtest_tensor, row i of the result is exactly d mean[i] / d Xtest[i]:
grad_mean = tape.gradient(mean, Xtest_tensor)
grad_var = tape.gradient(var, Xtest_tensor)

del tape  # release the resources held by the persistent tape