I am learning to fit Gaussian processes using TensorFlow Probability. Since my model requires fitting a bunch of functions `f`, I need to build a batch of Gaussian processes, and I would like to build them all in one GP object. However, I am having trouble programming this with TensorFlow in certain situations.

If my summary is confusing, let me demonstrate my problem with some examples.

Suppose that I want to build a GP with a Gaussian kernel whose parameters have shape `(5, 3)` and whose `index_points` have shape `(120, 5, 3)`. I can set `feature_ndims=2` and set everything up as in the first code block (I made up the index points just to make sure the shapes match).

```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors
tfp_kernels = tfp.positive_semidefinite_kernels

plt.style.use('ggplot')
sns.set_context('notebook')

# Noisy signal: one realization per GP, per feature dimension.
ff = lambda x, feature_dim, num_gp: \
    np.sin(10 * x) * np.exp(-x) + \
    np.random.normal(0, 1, [num_gp, np.shape(x)[0], feature_dim])

num_gp = 5
feature_dim = 3
zero_vector = np.zeros([num_gp, feature_dim])
one_vector = np.ones([num_gp, feature_dim])

input_signal = ff(np.linspace(-1., 1., 120)[..., np.newaxis], feature_dim, num_gp)
print(input_signal.shape)            # (5, 120, 3)
input_signal = np.reshape(input_signal, [120, num_gp, feature_dim])
print(input_signal.shape)            # (120, 5, 3)

rv_amp_1 = tfd.LogNormal(loc=1.0 + zero_vector,
                         scale=one_vector,
                         name='rv_amp_1')
print(rv_amp_1.sample(1).shape)      # (1, 5, 3)
vec_rv_amp_1 = tfd.Independent(distribution=rv_amp_1,
                               reinterpreted_batch_ndims=1,
                               name='vec_rv_amp_1')
print(vec_rv_amp_1.sample(1).shape)  # (1, 5, 3)

rv_scale_1 = tfd.LogNormal(loc=0.1 * one_vector,
                           scale=one_vector)
vec_rv_scale_1 = tfd.Independent(distribution=rv_scale_1,
                                 reinterpreted_batch_ndims=1,
                                 name='vec_rv_scale_1')

kernel_1 = tfp_kernels.ExponentiatedQuadratic(amplitude=tf.squeeze(rv_amp_1.sample(1)),
                                              length_scale=tf.squeeze(rv_scale_1.sample(1)),
                                              feature_ndims=2,
                                              name='Exp_kernel_1')
print(kernel_1.batch_shape)          # (5, 3)

gp_1 = tfd.GaussianProcess(kernel=kernel_1,
                           index_points=input_signal,
                           observation_noise_variance=0.1,
                           name='gp_1')
sample_1 = gp_1.sample(1)
print(sample_1.shape)                # (1, 5, 3, 120)
print(gp_1.covariance())
print(gp_1.log_prob(sample_1))
```

It works and gives me the following shape results: `kernel_1.batch_shape = (5, 3)`, `sample_1.shape = (1, 5, 3, 120)`, `gp_1.covariance().shape = (5, 3, 120, 120)`, and `gp_1.log_prob(sample_1).shape = (1, 5, 3)`.
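To explain how I read those shapes (this is my own interpretation of the docs, not anything TFP prints): with `feature_ndims=2`, `index_points` of shape `(120, 5, 3)` is `[e=120, f1=5, f2=3]` with no batch dims, so the GP's batch shape is just the kernel's batch shape, and the other shapes follow by appending the example dimension. A quick NumPy sanity check of that arithmetic:

```python
import numpy as np

# My reading (could be wrong): GP batch shape = broadcast of kernel batch
# shape with the batch dims of index_points, which are empty here.
kernel_batch = (5, 3)
gp_batch = np.broadcast_shapes(kernel_batch, ())   # -> (5, 3)

num_examples = 120
sample_shape = (1,) + gp_batch + (num_examples,)           # (1, 5, 3, 120)
cov_shape = gp_batch + (num_examples, num_examples)        # (5, 3, 120, 120)
log_prob_shape = (1,) + gp_batch                           # (1, 5, 3)
print(sample_shape, cov_shape, log_prob_shape)
```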

However, if I change `index_points.shape` to `(10, 120, 5, 3)` while keeping everything else the same, the Python console yields the error messages below. My expected output shapes are `kernel_1.batch_shape = (5, 3)`, `sample_1.shape = (1, 10, 5, 3, 120)`, `gp_1.covariance().shape = (10, 5, 3, 120, 120)`, and `gp_1.log_prob(sample_1).shape = (1, 10, 5, 3)`. (I am not sure whether that is the correct axis order, but each shape should contain the 10 somewhere.)

```
Traceback (most recent call last):
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1659, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 10 and 3 for 'Exp_kernel_1_44/truediv' (op: 'RealDiv') with input shapes: [10,120,120], [5,3,1,1].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<input>", line 56, in <module>
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow_probability/python/distributions/gaussian_process.py", line 270, in __init__
    kernel.matrix(self.index_points, self.index_points),
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow_probability/python/positive_semidefinite_kernels/positive_semidefinite_kernel.py", line 550, in matrix
    return self._apply(x1, x2, param_expansion_ndims=2)
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow_probability/python/positive_semidefinite_kernels/exponentiated_quadratic.py", line 118, in _apply
    exponent /= length_scale**2
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py", line 812, in binary_op_wrapper
    return func(x, y, name=name)
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py", line 920, in _truediv_python3
    return gen_math_ops.real_div(x, y, name=name)
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 6897, in real_div
    "RealDiv", x=x, y=y, name=name)
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
    op_def=op_def)
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1823, in __init__
    control_input_ops)
  File "/MMAR_q/venv/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1662, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 10 and 3 for 'Exp_kernel_1_44/truediv' (op: 'RealDiv') with input shapes: [10,120,120], [5,3,1,1].
```
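As far as I can tell, the failure comes down to plain broadcasting: inside the kernel's `_apply`, a tensor of shape `(10, 120, 120)` (the pairwise term carrying my new leading batch dim) is divided by the length scale, which has been padded to `(5, 3, 1, 1)`, and those two shapes do not broadcast. A pure-NumPy sketch reproduces the same failure (the variable names here are just my labels for the shapes from the traceback):

```python
import numpy as np

exponent = np.zeros([10, 120, 120])    # pairwise term with my extra batch dim of 10
length_scale = np.ones([5, 3, 1, 1])   # kernel param after padding with two trailing axes

try:
    exponent / length_scale**2         # mirrors `exponent /= length_scale**2`
except ValueError as err:
    # Right-aligned, the dims 10 and 3 collide, just like in the TF error.
    print(err)
```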

Basically, I just want to generalize the index points by adding a batch dimension, as https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/GaussianProcess describes. But it seems that my current setup only allows the leftmost dimension to be the example dimension `e`, without additional batch dimensions `b1, ..., bB`.
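If I read the docs' shape semantics correctly, the batch dims `b1, ..., bB` of `index_points` must broadcast against the kernel's batch shape `(5, 3)`, so a bare leading `10` cannot work. My untested guess is that the extra batch dim needs singleton axes next to it, i.e. `index_points` of shape `(10, 1, 1, 120, 5, 3)`; a quick NumPy check of just the shape arithmetic (not of TFP itself):

```python
import numpy as np

kernel_batch = (5, 3)

# Bare leading batch dim: batch part is (10,), which cannot broadcast
# against (5, 3) -- np.broadcast_shapes((10,), kernel_batch) raises.

# Padding with singleton axes so the batch parts line up is my guess:
padded_batch = (10, 1, 1)   # batch part of a hypothetical (10, 1, 1, 120, 5, 3)
print(np.broadcast_shapes(padded_batch, kernel_batch))  # (10, 5, 3)
```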

Does anyone know how to handle this kind of vectorized GP fitting?

Thanks in advance.