Reshape of Inducing Variables - GPflow

I have an SGPR model:

import numpy as np
import gpflow

X, Y = np.random.randn(50, 2), np.random.randn(50, 1)
Z1 = np.random.randn(13, 2)

k = gpflow.kernels.SquaredExponential()
m = gpflow.models.SGPR(data=(X, Y), kernel=k, inducing_variable=Z1)

And I would like to assign new inducing variables with a different shape, like:

Z2 = np.random.randn(29, 2)
m.inducing_variable.Z.assign(Z2)

But if I do, I get:

ValueError: Shapes (13, 2) and (29, 2) are incompatible

Is there a way to reassign the inducing variables, with a different number of points, without redefining the model?

Context: Rather than letting the optimizer train the inducing variables, I would like to keep them out of the optimization and manually reassign them at each optimization step.
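
Concretely, the loop I have in mind looks roughly like this (pick_new_Z is just a placeholder for whatever rule chooses the new points each step); it is the assign call that fails when the number of points changes:

import tensorflow as tf

opt = tf.optimizers.Adam()
gpflow.set_trainable(m.inducing_variable, False)  # do not optimize Z directly

for step in range(100):
    Z_new = pick_new_Z(step)             # may have a different number of rows each step
    m.inducing_variable.Z.assign(Z_new)  # raises the ValueError above
    opt.minimize(m.training_loss, m.trainable_variables)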

1 Answer

STJ (accepted answer):

UPDATE: This issue is resolved by https://github.com/GPflow/GPflow/pull/1594, which will become part of the next GPflow patch release (2.1.4).

With that fix, you don't need a custom class. All you need to do is explicitly set the static shape with None along the first dimension:

import tensorflow as tf

inducing_variable = gpflow.inducing_variables.InducingPoints(
    tf.Variable(
        Z1,  # initial value
        trainable=False,  # True does not work - see Note below
        shape=(None, Z1.shape[1]),  # or even tf.TensorShape(None)
        dtype=gpflow.default_float(),  # required due to tf's 32bit default
    )
)
m = gpflow.models.SGPR(data=(X, Y), kernel=k, inducing_variable=inducing_variable)

Then m.inducing_variable.Z.assign(Z2) should work just fine.

Note that in this case Z cannot be trainable, as the TensorFlow optimizers need to know the shape at construction time and don't support dynamic shapes.
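
For example, the reassignment loop from the question could then look something like this (a rough sketch; the random Z values just stand in for whatever reassignment rule you actually use):

opt = tf.optimizers.Adam(learning_rate=0.01)

for step in range(5):
    M_step = np.random.randint(5, 40)                     # number of inducing points may change
    m.inducing_variable.Z.assign(np.random.randn(M_step, 2))
    opt.minimize(m.training_loss, m.trainable_variables)  # Z is not trainable, so its dynamic shape is fine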


Right now (as of GPflow 2.1.2) there is no built-in way to change the number of inducing points for SGPR, though it is possible in principle. You can get what you want with your own inducing-variable class, though:

class VariableInducingPoints(gpflow.inducing_variables.InducingPoints):
    def __init__(self, Z, name=None):
        super().__init__(Z, name=name)
        # overwrite Z with a Variable whose first shape dimension is None, so
        # we can assign arrays of arbitrary length along this dimension:
        self.Z = tf.Variable(
            Z, dtype=gpflow.default_float(), shape=(None, Z.shape[1])
        )

    def __len__(self):
        return tf.shape(self.Z)[0]  # dynamic shape, instead of the static
        # shape returned by the InducingPoints parent class

and then do

m = gpflow.models.SGPR(
    data=(X, Y), kernel=k, inducing_variable=VariableInducingPoints(Z1)
)

instead. Then m.inducing_variable.Z.assign(Z2) should work as you want.
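
For instance (a quick sketch; assuming nothing else about your setup), you can check that the number of inducing points really changes:

print(m.elbo())                   # ELBO with the original 13 inducing points
m.inducing_variable.Z.assign(Z2)  # now 29 inducing points
print(m.elbo())                   # recomputed with the new inducing set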

(For SVGP, the number of inducing points and the shapes of the variational distribution parameters q_mu and q_sqrt have to match, and both need to be known at construction time, so changing the number of inducing variables is less trivial in that case.)
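
As a rough illustration of that coupling (only showing the default shapes; this is not a way around the restriction):

svgp = gpflow.models.SVGP(
    kernel=k,
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=gpflow.inducing_variables.InducingPoints(Z1),
    num_latent_gps=1,
)
print(svgp.q_mu.shape)    # (13, 1)  -- one row per inducing point
print(svgp.q_sqrt.shape)  # (1, 13, 13)
# Assigning a Z with a different number of rows would leave q_mu and q_sqrt
# with mismatched sizes, so all three would have to be resized together.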