I am trying to load a Keras model for prediction only (i.e. I do not have to compile the model, per Pepslee's post here).

When I try to use model.predict_generator(), I get:

Using TensorFlow backend.
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/user/pkgs/anaconda2/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/user/pkgs/anaconda2/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/user/pkgs/anaconda2/lib/python2.7/site-packages/keras/utils/data_utils.py", line 559, in _run
    sequence = list(range(len(self.sequence)))
ValueError: __len__() should return >= 0

I am working with Tensorflow version 1.12.0, Keras version 2.2.4. I need to use these versions to ensure compatibility with my cuDNN version, which I have no control over.

How can I get around this error?
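For context (and assuming I'm reading the traceback correctly), the exception comes from Python's `len()` built-in itself, which rejects a negative return value from `__len__`; Keras only triggers it by calling `len(self.sequence)` on the generator. A standalone sketch of that behavior, with no Keras involved:

```python
# Python's len() raises "ValueError: __len__() should return >= 0"
# whenever an object's __len__ returns a negative integer.
class FakeSequence:
    def __init__(self, n_ids, batch_size):
        self.n_ids = n_ids
        self.batch_size = batch_size

    def __len__(self):
        # Same shape of formula as the DataGenerator.__len__ below;
        # goes negative when batch_size exceeds the number of IDs.
        return self.n_ids - self.batch_size + 1

try:
    len(FakeSequence(n_ids=10, batch_size=32))  # 10 - 32 + 1 == -21
except ValueError as e:
    print(e)  # __len__() should return >= 0
```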

EDIT

I was asked for an example. Unfortunately there's too much proprietary info here for me to give much detail, but here are the bare bones (note the model is not actually an LSTM):

import glob

import numpy as np
import keras
from keras.models import load_model

class LSTMmodel():

    def __init__(self, hid1=10, batch_size=32, mode='test'):
        self.hid_dim_1 = hid1
        self.batch_size = batch_size  #read_data() below relies on this

        self.t_per_e, self.test_generator = self.read_data()

        #Load the entire fitted model
        model_name = ''.join(glob.glob('*model.h5'))
        self.__model = load_model(model_name, compile=False)

    def read_data(self):

        num_test_minibatches = 10
        test_IDs = range(111, 111 + num_test_minibatches)
        params = {'list_IDs': test_IDs, 'batch_size': self.batch_size, 'n_vars': 354}

        test_generator = DataGenerator(**params)
        t_per_e = int(len(test_IDs) - self.batch_size + 1)

        return t_per_e, test_generator

    def lstm_model(self):
        #Model building here. Not needed, since not compiling the model
        return 0

    def lstm_predict(self):
        pred = self.__model.predict_generator(self.test_generator, self.t_per_e)
        return pred

class DataGenerator(keras.utils.Sequence):

    #Other methods in here as necessary

    def __len__(self):
        'Denotes the number of batches per epoch'
        batches_per_epoch = int(np.floor(len(self.list_IDs) - self.batch_size + 1))
        return batches_per_epoch

    def __data_generation(self, other_params_here):
        'Generates data containing batch_size samples'
        return preprocessed_data

def test_lstm():
    test_inst = LSTMmodel(hid1=10) #hid1 is a hyperparameter
    test_prediction = test_inst.lstm_predict()
    return test_prediction


if __name__ == '__main__':
    testvals = test_lstm()

Basically, the workflow is:

1) test_lstm() creates an instance of the LSTMmodel class and then calls lstm_predict.

2) lstm_predict uses predict_generator, which takes in the generator for the test set and the number of batches to draw from it (the steps argument, from here).

3) The generator for the test set is created as an instance of class DataGenerator() in the read_data() method of class LSTMmodel(). Importantly, the test data generator is created in the same way as the training data generator and the validation data generator.

4) self.__model is created by loading a fully trained model in the init method of class LSTMmodel().
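For concreteness, here is the formula shared by read_data() (as t_per_e) and DataGenerator.__len__ (as batches_per_epoch), evaluated at a couple of illustrative values (the numbers are made up, not my real ones):

```python
import numpy as np

def batches_per_epoch(n_ids, batch_size):
    # The same expression appears in read_data() (as t_per_e)
    # and in DataGenerator.__len__ (as batches_per_epoch).
    return int(np.floor(n_ids - batch_size + 1))

print(batches_per_epoch(100, 32))  # 69
print(batches_per_epoch(10, 32))   # -21
```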

How can I get rid of the error?
