Frozen LSTM model containing clip_by_value crashes Android app when deployed


I am quite new to ML. I am trying to deploy a frozen LSTM model in an Android Studio project, but I am getting an error about a ClipByValue operation in the LSTM layer, which I did not explicitly declare:

java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'ClipByValue' with these attrs. Registered devices: [CPU], Registered kernels: <no registered kernels> [[Node: lstm_1/while/clip_by_value = ClipByValue[T=DT_FLOAT](lstm_1/while/add_2, lstm_1/while/Const, lstm_1/while/Const_1)]]
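For context on where the op comes from: Keras's LSTM layer defaults to recurrent_activation='hard_sigmoid', and the backend implements hard_sigmoid with a clip, which surfaces as the ClipByValue node inside lstm_1/while in the frozen graph. A minimal pure-Python sketch of that function (constants taken from the Keras backend definition):

```python
def hard_sigmoid(x):
    # Keras backend hard_sigmoid: clip(0.2 * x + 0.5, 0, 1).
    # The clip is what appears as the ClipByValue node in the frozen graph.
    return max(0.0, min(1.0, 0.2 * x + 0.5))

print(hard_sigmoid(0.0))  # 0.5
```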

I am currently on TensorFlow 1.8. I tried higher versions in order to use TFLite but ran into issues, which is why I downgraded and am trying the freeze_graph function instead.

Below is my Java code in Android Studio:

public class pbClassifier {


    static{
        System.loadLibrary("tensorflow_inference");
    }

    private TensorFlowInferenceInterface inferenceInterface;
    private static final String MODEL_FILE = "model.pb";
    private static final String INPUT_NODE = "lstm_1_input";
    private static final String[] OUTPUT_NODES = {"output/Softmax"};
    private static final String OUTPUT_NODE = "output/Softmax";
    private static final long[] INPUT_SIZE = {1, 200, 6};
    private static final int OUTPUT_SIZE = 7;


    public pbClassifier(final Context context) {
        inferenceInterface = new TensorFlowInferenceInterface(context.getAssets(), MODEL_FILE);
    }

    public float predict(float[] data){
        float[] result = new float[OUTPUT_SIZE];
        inferenceInterface.feed(INPUT_NODE, data, INPUT_SIZE);
        inferenceInterface.run(OUTPUT_NODES);
        inferenceInterface.fetch(OUTPUT_NODE, result);
        return result[1];
    }


}

and here is the Python code for training and exporting the model:

import tensorflow as tf
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense, Flatten
from keras.regularizers import l2
from keras.optimizers import Adam

model = Sequential()
# RNN layer
model.add(LSTM(128, input_shape = (200, 6), return_sequences = True, kernel_regularizer = l2(0.000001), name = 'lstm_1'))
# Apply Dense operations individually to each time sequence
model.add(TimeDistributed(Dense(64, activation='relu'), name='time_distributed'))
# Flatten layer
model.add(Flatten(name='flatten'))
# Dense layer with ReLu
model.add(Dense(64, activation='relu', name='dense_1'))
# Softmax layer
model.add(Dense(2, activation = 'softmax', name='output'))

# Compile model
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])

from keras.callbacks import ModelCheckpoint

callbacks = [ModelCheckpoint('model.h5', save_weights_only=False, save_best_only=True, verbose=1)]

history = model.fit(X_train, y_train, epochs = 10, validation_split = 0.20, batch_size = 128,
                    verbose = 1, callbacks = callbacks)

from keras import backend as k
from tensorflow.python.tools import freeze_graph, optimize_for_inference_lib

input_node_name = ['lstm_1_input']
output_node_name = 'output/Softmax'
model_name = 'fall_model'

tf.train.write_graph(k.get_session().graph_def, 'models', model_name + '_graph.pbtxt')
saver = tf.train.Saver()
saver.save(k.get_session(), 'models/' + model_name + '.chkp')

freeze_graph.freeze_graph('models/' + model_name + '_graph.pbtxt', None, False,
                          'models/' + model_name + '.chkp', output_node_name,
                          'save/restore_all', 'save/Const:0',
                          'models/frozen_' + model_name + '.pb', True, "")

I would appreciate any suggestions on how to stop the layer from using the clip_by_value function, or on swapping it for clip_by_norm to see if that works.
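One idea I have seen for keeping the clip out of the graph is to change the LSTM's recurrent activation from the default 'hard_sigmoid' to 'sigmoid', since it is the hard_sigmoid's backend clip() call that emits the ClipByValue op. A sketch of the changed first layer (untested on TF 1.8; shown with tf.keras imports so the snippet is self-contained, and assuming hard_sigmoid really is the source of the op):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM
from tensorflow.keras.regularizers import l2

# Same first layer as above, but with recurrent_activation='sigmoid'
# instead of the default 'hard_sigmoid', whose clip() call is what
# produces the ClipByValue node in the frozen graph.
model = Sequential()
model.add(LSTM(128, input_shape=(200, 6), return_sequences=True,
               recurrent_activation='sigmoid',
               kernel_regularizer=l2(0.000001), name='lstm_1'))
```

Note that changing the recurrent activation changes the computation, so the model would need to be retrained (or at least re-validated) before freezing again.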

Here is also an image of the graph from Netron (not shown).

Many thanks

There are 0 answers