I am trying to build a forecasting model with the tft module (Temporal Fusion Transformer). I get the error below when I try to train the model, and since I am new to TensorFlow I can't fully understand what it means. I thought that simply calling the model's train method would train the model.
# Train Model
model.train(train_dataset = trainset, # trainset obtained from the data object via dataobj.train_test_dataset()
test_dataset = testset, # testset obtained from the data object via dataobj.train_test_dataset()
loss_function = loss_fn, # Any supported loss function defined in tft.supported_losses
metric='MSE', # Either 'MSE' or 'MAE'
learning_rate=0.0001, # use a higher lr only together with a valid clipnorm
max_epochs=100,
min_epochs=10,
prefill_buffers=False, # Indicates whether to create a static dataset (requires more memory but trains faster)
num_train_samples=200000, # (NOT USED if prefill_buffers=False)
num_test_samples=50000, # (NOT USED if prefill_buffers=False)
train_batch_size=64, # (NOT USED if prefill_buffers=False; the batch size specified in the data object is used instead)
test_batch_size=128, # (NOT USED if prefill_buffers=False; the batch size specified in the data object is used instead)
train_steps_per_epoch=200, # (NOT USED if prefill_buffers=True)
test_steps_per_epoch=100, # (NOT USED if prefill_buffers=True)
patience=10, # max number of epochs to keep training without a further drop in validation loss (use a higher patience when prefill_buffers=False)
weighted_training=False, # Whether to compute & optimize on the basis of weighted losses
model_prefix='./tft_model',
logdir='/tmp/tft_logs',
opt=None, # provide your own optimizer object (default is Adam/Nadam)
clipnorm=0.1, # max global norm for gradient clipping; stabilizes training. Default is None.
min_delta=0.0001, # min decrease in val. loss to be considered an improvement
shuffle=True)
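
In case it helps, this is how I read the optimizer and early-stopping arguments in plain Keras terms. This is only my assumption about what the module does internally (I'm new to TensorFlow), sketched with the standard tf.keras API:

import tensorflow as tf

# opt / learning_rate / clipnorm: I assume opt=None makes the module build
# an Adam optimizer like this one, with gradients clipped by global norm.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, global_clipnorm=0.1)

# patience / min_delta: I assume these behave like Keras early stopping,
# i.e. stop after 10 epochs without the validation loss improving by 1e-4.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=10, min_delta=1e-4)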