Why is the output of tape.gradient None? (TensorFlow)

import tensorflow as tf
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.optimizers import Adam

EPOCHS = 10
LR = 0.001

loss_object = MeanSquaredError()
optimizer = Adam(learning_rate=LR)

load_metrics()
@tf.function
def trainer():
    global train_ds, train_loss, model
    global optimizer, loss_object
    for x, y, z in train_ds:
        with tf.GradientTape() as tape:
            predictions = model([x, y])
            print(f"prediction: {type(predictions)}")
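            # (note: a plain print() inside a @tf.function runs only while the
            # function is being traced; tf.print() would run on every step)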
            loss = loss_object(z, predictions)
            print(f"loss: {loss}")

        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))

        train_loss(loss)

After running the above code, this error occurred: ValueError: No gradients provided for any variable.

So I searched for the failing part and found that tape.gradient(loss, model.trainable_variables) returns only None values.

Why does this error occur?


Actual output of gradients = tape.gradient(loss, model.trainable_variables): [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
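For context (an editor's illustration, not part of the original post): tape.gradient returns None for every variable whose value was never used in the computation the tape recorded. Anything that leaves TensorFlow, such as a .numpy() call or a Python-side transformation of the predictions, disconnects the loss from model.trainable_variables. A minimal sketch of that failure mode:

import tensorflow as tf

v = tf.Variable(3.0)

# persistent=True lets us call tape.gradient() twice on the same tape
with tf.GradientTape(persistent=True) as tape:
    y = v * v                         # recorded on the tape
    z = tf.constant(y.numpy()) * 2.0  # .numpy() leaves the graph, so z is not recorded

print(tape.gradient(y, v))  # tf.Tensor(6.0, shape=(), dtype=float32)
print(tape.gradient(z, v))  # None -- z is disconnected from v
del tape                    # release the persistent tape's resources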

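One hedged way to narrow this down (reusing the post's model, train_ds, and loss_object, whose definitions are not shown) is to run a single batch eagerly, outside @tf.function, and inspect what the tape actually recorded:

# Debugging sketch -- assumes the post's model, train_ds, and loss_object exist.
for x, y, z in train_ds.take(1):
    with tf.GradientTape() as tape:
        predictions = model([x, y])
        loss = loss_object(z, predictions)
    # If this list is empty, the model's variables were never used inside
    # the tape, so every gradient will come back as None.
    print(tape.watched_variables())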

There are 0 answers