There are many complex problems in which inference depends strongly on the initial conditions. I am looking for alternatives to avoid getting stuck in a local minimum, and I understand that particle swarm optimization (PSO) would be a good candidate.
Since GPflow already delegates optimization to SciPy through gpflow.optimizers.Scipy, I'm wondering whether there is a way to plug a global optimizer such as PSO into GPflow (after the example I sketch what I have in mind).
To better understand the issue, I added this example. From this link you can download the data. This is the model I tried (run in a Jupyter notebook with GPflow 2.9.0):
import gpflow
import numpy as np
import tensorflow_probability as tfp
import tensorflow as tf
# Identify the target-ligand pair
pdb_target_ligand = '2NRU_CEAYRKIZESVQSN-UHFFFAOYSA-N'
# Load the dataset:
# training inputs and targets
Xt = np.load('Xtf_' + pdb_target_ligand + '.npy')
yt = np.load('ytf_' + pdb_target_ligand + '.npy')
# validation inputs and targets
Xval = np.load('Xval_' + pdb_target_ligand + '.npy')
yval = np.load('yval_' + pdb_target_ligand + '.npy')
gpflow.config.set_default_summary_fmt("notebook")
k1 = gpflow.kernels.SquaredExponential(
    variance=5 * np.std(yt),
    lengthscales=np.random.rand(Xt.shape[1]),  # one ARD lengthscale per input dimension
    active_dims=list(range(Xt.shape[1])),
)
# Priors are placed on the constrained (positive) parameter values by default:
k1.lengthscales.prior = tfp.distributions.Normal(np.zeros(Xt.shape[1]), np.ones(Xt.shape[1]))
k1.variance.prior = tfp.distributions.Normal(np.float64(0.0), np.float64(1.0))  # scalar, matching the parameter shape
mf1 = gpflow.functions.Constant(c=0.0)
model = gpflow.models.GPR((Xt, yt), kernel=k1, mean_function=mf1)
opt = gpflow.optimizers.Scipy()  # wraps scipy.optimize.minimize (L-BFGS-B by default)
opt.minimize(model.training_loss, model.trainable_variables, options=dict(maxiter=1_000))
model  # display the parameter summary in the notebook
y_mean, y_var = model.predict_y(Xval)
# relative error (%) on the validation set
minre = np.abs(1 - yval / y_mean.numpy()) * 100
print(minre)
print(yval, y_mean.numpy())
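What I have in mind is something along the lines of the sketch below (untested): flatten the model's trainable variables into one vector, evaluate model.training_loss for each particle, and let a PSO implementation drive the search. As far as I can tell SciPy itself does not ship PSO (its global optimizers are differential_evolution, dual_annealing, etc.), so the sketch uses the third-party package pyswarms; the swarm size, the iteration count and the c1/c2/w options are arbitrary placeholders.

# Untested sketch: PSO over the hyperparameters of the GPR above.
# Assumes pyswarms is installed (pip install pyswarms) and that the
# model uses GPflow's default float64.
import numpy as np
import tensorflow as tf
import pyswarms as ps

variables = model.trainable_variables  # the unconstrained tf.Variables
sizes = [int(np.prod(v.shape)) for v in variables]
dim = sum(sizes)

def assign_flat(theta):
    # Write a flat parameter vector back into the model's variables.
    offset = 0
    for v, n in zip(variables, sizes):
        v.assign(tf.reshape(theta[offset:offset + n], v.shape))
        offset += n

def swarm_objective(thetas):
    # pyswarms passes an array of shape (n_particles, dim) and expects
    # one loss value per particle back.
    losses = np.empty(len(thetas))
    for i, theta in enumerate(thetas):
        assign_flat(theta)
        losses[i] = model.training_loss().numpy()
    return losses

pso = ps.single.GlobalBestPSO(
    n_particles=30,  # placeholder
    dimensions=dim,
    options={"c1": 0.5, "c2": 0.3, "w": 0.9},  # placeholder coefficients
)
best_loss, best_theta = pso.optimize(swarm_objective, iters=200)
assign_flat(best_theta)  # leave the model at the best particle found

Because the search runs on the unconstrained variables, the positivity transforms on the variance and lengthscales are respected automatically; the same one-particle-at-a-time wrapper would also let scipy.optimize.differential_evolution drive the loss. Is something like this the intended approach, or does GPflow provide a hook for optimizers other than gpflow.optimizers.Scipy?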