Is there a simple way to train a Gaussian process (GP) in parallel (across multiple cores, or even multiple instances in a cloud environment) for Bayesian optimization?
sklearn.gaussian_process.GaussianProcessRegressor does not seem to support multi-core training. Its n_restarts_optimizer parameter controls the number of random restarts of the hyperparameter optimization, but those restarts run in series.
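One workaround I have considered is running the restarts myself with joblib: fit several single-restart regressors in parallel, each from a different random kernel initialization, and keep the one with the best log marginal likelihood. A minimal sketch (the toy data, the hand-picked hyperparameter ranges, and the fit_one helper are all my own assumptions, not part of the sklearn API):

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy training data standing in for the real problem.
rng = np.random.RandomState(0)
X = rng.uniform(0, 5, (40, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(40)

def fit_one(seed):
    # One "restart": a single optimizer run from a random kernel
    # initialization (ranges below are arbitrary assumptions).
    rs = np.random.RandomState(seed)
    kernel = ConstantKernel(rs.uniform(0.1, 10.0)) * RBF(rs.uniform(0.1, 10.0))
    gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=0)
    gpr.fit(X, y)
    return gpr

# Run the restarts in parallel instead of in series.
models = Parallel(n_jobs=4)(delayed(fit_one)(s) for s in range(8))
# Keep the model with the best fitted log marginal likelihood.
best = max(models, key=lambda m: m.log_marginal_likelihood_value_)
```

This parallelizes only the restarts on one machine; each individual fit is still single-threaded, and spreading fits across cloud instances would need an external task queue.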