Prediction with GPyTorch DeepGP was poor


[Link to DeepGP definition](https://drive.google.com/file/d/1UwU9fz6vqxOQ3F0NSKKmRSgiP6pNnbwo/view?usp=sharing)

I have defined a DeepGP model using GPyTorch and trained it, and I make predictions as follows:

```python
from torch.utils.data import DataLoader, TensorDataset

test_dataset = TensorDataset(test_x, test_a)
test_loader = DataLoader(test_dataset, batch_size=128)

model.eval()
predictive_means, predictive_variances, test_lls = model.predict(test_loader)
```
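For context, here is a sketch of the `predict` method, assuming it follows the GPyTorch Deep GP tutorial (the actual model definition is in the linked file; `self.likelihood` is assumed to be the model's `GaussianLikelihood`):

```python
import torch

# Method on the DeepGP subclass, as in the GPyTorch Deep GP tutorial.
def predict(self, test_loader):
    with torch.no_grad():
        mus, variances, lls = [], [], []
        for x_batch, y_batch in test_loader:
            preds = self.likelihood(self(x_batch))  # marginal predictive distribution
            mus.append(preds.mean)
            variances.append(preds.variance)
            lls.append(self.likelihood.log_marginal(y_batch, self(x_batch)))
    return torch.cat(mus, dim=-1), torch.cat(variances, dim=-1), torch.cat(lls, dim=-1)
```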

The predicted outputs (almost all the same value) are as follows:

```python
predictive_means.mean(0)
tensor([6.4420, 6.4423, 6.4429, 6.4441, 6.4414, 6.4440, 6.4440, 6.4435, 6.4432,
        6.4435, 6.4447, 6.4446, 6.4412, 6.4447, 6.4425, 6.4439, 6.4442, 6.4440,
        6.4417, 6.4431, 6.4435, 6.4417, 6.4439, 6.4438, 6.4421, 6.4432, 6.4434,
        6.4421, 6.4431, 6.4424, 6.4412])
```

Whereas the ground truth is:

```python
test_a
tensor([5.4880, 5.4247, 7.8780, 5.5635, 8.0862, 5.9888, 8.3903, 5.5700, 6.0913,
        5.6440, 5.5785, 5.4150, 8.3801, 5.5642, 8.1350, 5.4410, 5.4670, 5.7932,
        5.4650, 8.4411, 5.8117, 5.5729, 7.8776, 5.4746, 5.6451, 8.0486, 6.0792,
        5.4944, 5.6321, 5.7548, 5.5903])
```

Both the training and the test losses are huge. What am I missing? Or is there a different way to get the actual predicted output?
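For example, is calling the model directly a better way to get point predictions? Something along these lines (a sketch, assuming the model exposes its likelihood as `model.likelihood`; `mean(0)` averages over the likelihood samples drawn for the deep GP):

```python
import torch
import gpytorch

model.eval()
with torch.no_grad(), gpytorch.settings.num_likelihood_samples(10):
    preds = model.likelihood(model(test_x))  # marginal predictive distribution
    point_predictions = preds.mean.mean(0)   # average over the likelihood samples
```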

I tried training the model

  1. with and without scaling the inputs (see the scaling sketch after this list)
  2. with different kernels and their combinations available in GPyTorch
  3. after checking the code and the working of the MLL
  4. with different sets of inputs (suspecting that the x and y in the data may be wrongly matched)
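For item 1, this is the kind of scaling I mean (a minimal sketch; `train_x` and `train_a` are placeholder names for my training tensors, and the target-standardization lines are an optional variant):

```python
# Standardize the inputs using the training-set statistics.
x_mean, x_std = train_x.mean(0), train_x.std(0)
train_x_scaled = (train_x - x_mean) / x_std
test_x_scaled = (test_x - x_mean) / x_std

# Optional variant: standardize the targets too, then invert on predictions.
a_mean, a_std = train_a.mean(), train_a.std()
train_a_scaled = (train_a - a_mean) / a_std

# ... train on (train_x_scaled, train_a_scaled) and rebuild test_loader ...

predictive_means, predictive_variances, test_lls = model.predict(test_loader)
unscaled_preds = predictive_means.mean(0) * a_std + a_mean  # back to original units
```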
