RuntimeError: Number of consecutive failures exceeded the limit of 3 during Keras Tuner search


I'm encountering a RuntimeError when using Keras Tuner to search for the best hyperparameters for my image segmentation model. The error indicates that the number of consecutive failures has exceeded the limit of 3. Below is the full error message:

Exception has occurred: RuntimeError
Number of consecutive failures exceeded the limit of 3.
Traceback (most recent call last):
  File "C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\base_tuner.py", line 273, in _try_run_and_update_trial
    self._run_and_update_trial(trial, *fit_args, **fit_kwargs)
  File "C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\base_tuner.py", line 238, in _run_and_update_trial
    results = self.run_trial(trial, *fit_args, **fit_kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\tuner.py", line 314, in run_trial
    obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\tuner.py", line 233, in _build_and_fit_model
    results = self.hypermodel.fit(hp, model, *args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras_tuner\src\engine\hypermodel.py", line 149, in fit
    return model.fit(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\sergi\AppData\Roaming\Python\Python311\site-packages\keras\engine\training.py", line 1697, in fit
    raise ValueError(
ValueError: Unexpected result of `train_function` (Empty logs). Please use `Model.compile(..., run_eagerly=True)`, or `tf.config.run_functions_eagerly(True)` for more information of where went wrong, or file a issue/bug to `tf.keras`.
  File "C:\AutomationEdge\Workflows\WF2\Classificacao_Documentos\Source\test.py", line 105, in <module>
    tuner.search(train_generator,
RuntimeError: Number of consecutive failures exceeded the limit of 3.

This error occurs during the .search() method of Keras Tuner. Here is the relevant portion of my code:

tuner.search(train_generator, 
             steps_per_epoch=len(X_train) // 16, 
             validation_data=(X_test, y_test), 
             epochs=50, 
             callbacks=callbacks)

My images are resized to 128x128 pixels, matching the model's input shape, and I've already fixed an earlier issue where train_test_split produced an empty training set. However, when I run the search method, I get the runtime error shown above.

Notably, when I print the shapes of the images and masks coming out of train_generator, the batch size appears to be 1 rather than the 16 I requested.

Additionally, I've made sure that the model compiles and trains correctly outside of the Keras Tuner context.
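
For reference, this is roughly the kind of standalone check I mean (a minimal sketch, not my exact script: the synthetic data and the use of keras_tuner.HyperParameters() to supply default hyperparameter values are simplifications, and it assumes the build_model function shown in the full code below):

# Sanity check outside the tuner: build the model with default
# hyperparameter values and fit one epoch on small synthetic data.
import numpy as np
import keras_tuner as kt

model = build_model(kt.HyperParameters())  # each hp.Int/hp.Float call returns its default value
dummy_images = np.random.rand(4, 128, 128, 3).astype('float32')
dummy_masks = np.random.randint(0, 2, size=(4, 128, 128, 1)).astype('float32')
model.fit(dummy_images, dummy_masks, batch_size=2, epochs=1)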

I'm seeking advice on what could be causing this issue and how to get more detailed error logs to help with troubleshooting. Suggestions on how to proceed or debug this error would be very helpful.
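
For reference, these are the debugging switches I'm aware of and am experimenting with (a sketch, not a fix: run_functions_eagerly is already set at the top of my script, and whether RandomSearch accepts overwrite and max_consecutive_failed_trials may depend on the installed Keras Tuner version, so those two arguments are an assumption on my part):

# Debugging toggles (a sketch; argument availability depends on the
# installed Keras Tuner version).
import tensorflow as tf

tf.config.run_functions_eagerly(True)  # already set in my script

# In build_model, model.compile(..., run_eagerly=True) is what the error
# message itself suggests for locating where training actually fails.

tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    executions_per_trial=1,
    directory='random_search',
    project_name='edge_detection',
    overwrite=True,                    # start a fresh search instead of resuming old trials
    max_consecutive_failed_trials=1,   # fail fast so the first trial's traceback surfaces sooner
)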

Full code

import os
import numpy as np
from tensorflow import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D, Input
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split
import cv2
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from kerastuner import RandomSearch
import matplotlib.pyplot as plt
import tensorflow as tf

tf.config.run_functions_eagerly(True)

# Path to the directory with training images and edge masks
train_images_dir = r'C:\AutomationEdge\nota_fiscal\Nova pasta\original'
border_masks_dir = r'C:\AutomationEdge\nota_fiscal\Nova pasta\borda'

# Function to load images
def load_images(directory):
    images = []
    for filename in sorted(os.listdir(directory)):
        if filename.endswith(".jpg"): # or .png if your images are in that format
            img = cv2.imread(os.path.join(directory, filename))
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # convert to RGB
            img = cv2.resize(img, (128, 128)) # resize images if necessary
            images.append(img)
    return np.array(images)

# Loading the dataset
train_images = load_images(train_images_dir)
border_masks = load_images(border_masks_dir)
border_masks = border_masks / 255.0 # Normalizing masks to [0, 1]
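# Note: load_images converts every file to 3-channel RGB, so border_masks
# has shape (N, 128, 128, 3) at this point.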

# Splitting the dataset into training and testing
X_train, X_test, y_train, y_test = train_test_split(train_images, border_masks, test_size=0.1)

# Creating data generators with data augmentation for training
data_gen_args = dict(rotation_range=10,
                     width_shift_range=0.1,
                     height_shift_range=0.1,
                     shear_range=0.1,
                     zoom_range=0.1,
                     horizontal_flip=True,
                     fill_mode='nearest')

image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)

# Provide the same seeds and keyword arguments to the flow of generators to ensure matching of images and their masks
seed = 1
image_datagen.fit(X_train, augment=True, seed=seed)
mask_datagen.fit(y_train, augment=True, seed=seed)

image_generator = image_datagen.flow(X_train, batch_size=16, seed=seed)
mask_generator = mask_datagen.flow(y_train, batch_size=16, seed=seed)

# Combine generators to create a generator that provides images and their corresponding masks
train_generator = zip(image_generator, mask_generator)
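# (zip() returns a plain one-pass Python iterator, not a Keras Sequence;
# it is consumed as it is iterated.)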

callbacks = [
    EarlyStopping(patience=10, verbose=1),
    ModelCheckpoint('model-best.h5', verbose=1, save_best_only=True, save_weights_only=True)
]

# Function to create the model to be used by Keras Tuner
def build_model(hp):
    inputs = Input(shape=(128, 128, 3))
    conv1 = Conv2D(
        hp.Int('conv1_units', min_value=32, max_value=256, step=32), 
        (3, 3), activation='relu', padding='same')(inputs)
    pool1 = MaxPooling2D((2, 2))(conv1)
    conv2 = Conv2D(
        hp.Int('conv2_units', min_value=32, max_value=256, step=32),
        (3, 3), activation='relu', padding='same')(pool1)
    up1 = UpSampling2D((2, 2))(conv2)
    outputs = Conv2D(1, (1, 1), activation='sigmoid')(up1)
    model = Model(inputs=[inputs], outputs=[outputs])
    
    model.compile(
        optimizer=Adam(
            hp.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='LOG')),
        loss='binary_crossentropy', 
        metrics=['accuracy']
    )
    return model

# Instantiating RandomSearch
tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,  # Number of variations to be tested
    executions_per_trial=1,  # Number of models to train for each variation
    directory='random_search',  # Directory to store logs
    project_name='edge_detection'
)

for imgs, masks in train_generator:
    print(imgs.shape, masks.shape)  # Should be something like: (16, 128, 128, 3) (16, 128, 128, 1)
    break  # This is just to test one batch


# Running the search for the best hyperparameters
tuner.search(train_generator, 
             steps_per_epoch=len(X_train) // 16, 
             validation_data=(X_test, y_test), 
             epochs=50, 
             callbacks=callbacks)