Problem predicting uncertainties with a DenseVariational layer


My model contains a DenseVariational layer that I want to use to predict a mean and a standard deviation for each data point. I am quite new to this, and I am not sure whether my model can actually do that. Every time I try to run predictions with this code:

predictions = model.predict(X_test)
mean_predictions = predictions[..., 0]  # Extract the mean component
std_predictions = np.exp(predictions[..., 1])  # Extract the standard deviation component

I get an error saying that index 1 is out of bounds when computing std_predictions. I also get only a single value for mean_predictions, but I want the model to return an individual mean and standard deviation for each prediction.
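
For what it is worth, a quick way to see what predict actually returns is to inspect the shape of the output before indexing into it. This is a small diagnostic sketch, assuming X_test is a NumPy array of test features:

import numpy as np

predictions = model.predict(X_test)
print(predictions.shape)  # a trailing dimension of 1 would explain the index-1 error

mean_predictions = predictions[..., 0]             # valid for any output width
if predictions.shape[-1] > 1:                      # only index 1 if it exists
    std_predictions = np.exp(predictions[..., 1])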

Here is my model architecture:

# Building the neural network
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow.keras import Sequential, regularizers
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam

tfd = tfp.distributions

num_models = 5
alpha = 0.002
learning_rate = 0.00005
models = []

def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
    """Surrogate posterior: independent Normal per weight with learnable mean and scale."""
    n = kernel_size + bias_size
    c = tf.math.log(tf.exp(0.01) - 1.0)  # softplus offset so the initial scale is roughly 0.01
    return tf.keras.Sequential([
        tfp.layers.VariableLayer(2 * n, dtype=dtype),  # n means + n raw scale parameters
        tfp.layers.DistributionLambda(lambda t: tfp.distributions.Independent(
            tfp.distributions.Normal(loc=t[..., :n],
                                     scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
            reinterpreted_batch_ndims=1)),
    ])

def prior_trainable(kernel_size, bias_size=0, dtype=None):
    """Trainable prior: independent Normal per weight with learnable mean and unit scale."""
    n = kernel_size + bias_size
    return tf.keras.Sequential([
        tfp.layers.VariableLayer(n, dtype=dtype),  # one learnable mean per weight
        tfp.layers.DistributionLambda(lambda t: tfp.distributions.Independent(
            tfp.distributions.Normal(loc=t, scale=1.0),
            reinterpreted_batch_ndims=1)),
    ])
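
# Quick sanity check (a sketch, not needed for training): each builder above
# returns a small Keras model whose output is an Independent Normal over a
# layer's n weights; DenseVariational builds it internally and samples from it.
posterior = posterior_mean_field(kernel_size=3, bias_size=1, dtype=tf.float32)
weight_dist = posterior(tf.zeros([1]))   # VariableLayer ignores its input
print(weight_dist.sample().shape)        # (4,): 3 kernel weights + 1 bias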

for _ in range(num_models):
    model = Sequential()
    model.add(Dense(256, activation='relu', input_shape=(scaled_features.shape[1],),
                    kernel_regularizer=regularizers.l2(alpha)))
    model.add(Dropout(0.2))
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(1))
    tfp.layers.DenseVariational(2, posterior_mean_field, prior_trainable),
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(0.01 * t[..., 1:]))),
    model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=learning_rate))
    models.append(model)

I have tried various approaches, but none of them work.
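
For reference, the TFP probabilistic-layers regression example attaches these layers roughly like the sketch below, so that each prediction carries both a mean and a standard deviation. The layer sizes are my own placeholders, and X_test and scaled_features are assumed to be NumPy arrays:

import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

tfd = tfp.distributions

# The probabilistic layers are added to the model itself, so the network ends
# with a Normal distribution whose two parameters come from DenseVariational.
model = Sequential([
    Dense(64, activation='relu', input_shape=(scaled_features.shape[1],)),
    Dense(32, activation='relu'),
    tfp.layers.DenseVariational(2, posterior_mean_field, prior_trainable),
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(0.01 * t[..., 1:]))),
])

# With a distribution as output, the loss is the negative log-likelihood
# rather than mean squared error.
negloglik = lambda y_true, y_dist: -y_dist.log_prob(y_true)
model.compile(loss=negloglik, optimizer=Adam(learning_rate=learning_rate))

# After fitting, calling the model (rather than model.predict) returns the
# distribution itself, which gives a per-sample mean and standard deviation.
pred_dist = model(X_test)
mean_predictions = pred_dist.mean().numpy()
std_predictions = pred_dist.stddev().numpy()

As far as I understand, each call samples a new set of weights from the posterior, so repeated calls give slightly different outputs.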


There are 0 answers