TensorFlow cannot quantize the reshape function


I want to train my model with quantization-aware training (QAT). However, when I try, tensorflow_model_optimization cannot quantize the tf.reshape function and throws an error.

  1. TensorFlow version: '2.4.0-dev20200903'
  2. Python version: 3.6.9

The code:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '3'
from tensorflow.keras.applications import VGG16
import tensorflow_model_optimization as tfmot
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
quantize_model = tfmot.quantization.keras.quantize_model
inputs = keras.Input(shape=(784,))
# img_inputs = keras.Input(shape=(32, 32, 3))

dense = layers.Dense(64, activation="relu")
x = dense(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)
# tf.reshape on a Keras symbolic tensor is wrapped as a TFOpLambda layer,
# which quantize_model cannot handle, hence the error below
outputs = tf.reshape(outputs, [-1, 2, 5])
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")

# keras.utils.plot_model(model, "my_first_model.png")


q_aware_model = quantize_model(model)

And the output:

Traceback (most recent call last):

  File "<ipython-input-39-af601b78c010>", line 14, in <module>
    q_aware_model = quantize_model(model)

  File "/home/essys/.local/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 137, in quantize_model
    annotated_model = quantize_annotate_model(to_quantize)

  File "/home/essys/.local/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 210, in quantize_annotate_model
    to_annotate, input_tensors=None, clone_function=_add_quant_wrapper)
...

  File "/home/essys/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 667, in wrapper
    raise e.ag_error_metadata.to_exception(e)

TypeError: in user code:


    TypeError: tf__call() got an unexpected keyword argument 'shape'

If somebody knows, please help.


1 Answer

Answered by dtlam26:

The reason is that this layer is not yet supported for QAT. If you want to quantize it, you have to write the quantization yourself: annotate the layer with quantize_annotate_layer, register your config through quantize_scope, and apply it to your model with quantize_apply, as described here: https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide?hl=en#quantize_custom_keras_layer
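A minimal sketch of that workflow, assuming the reshape is expressed as a keras.layers.Reshape layer (tf.reshape itself becomes a TFOpLambda and cannot be annotated this way) and using a pass-through NoOpQuantizeConfig, a name I made up here, that quantizes nothing inside the layer:

import tensorflow as tf
import tensorflow_model_optimization as tfmot
from tensorflow import keras
from tensorflow.keras import layers

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_apply = tfmot.quantization.keras.quantize_apply
quantize_scope = tfmot.quantization.keras.quantize_scope

class NoOpQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    # Pass-through config: no weights or activations are quantized,
    # which is enough for a shape-only op like Reshape.
    def get_weights_and_quantizers(self, layer):
        return []
    def get_activations_and_quantizers(self, layer):
        return []
    def set_quantize_weights(self, layer, quantize_weights):
        pass
    def set_quantize_activations(self, layer, quantize_activations):
        pass
    def get_output_quantizers(self, layer):
        return []
    def get_config(self):
        return {}

inputs = keras.Input(shape=(784,))
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(64, activation="relu")(x)
x = layers.Dense(10)(x)
# Annotate the reshape with the custom config instead of calling tf.reshape.
# layers.Reshape works per sample, so (2, 5) matches [-1, 2, 5].
outputs = quantize_annotate_layer(layers.Reshape((2, 5)), NoOpQuantizeConfig())(x)

annotated_model = quantize_annotate_model(keras.Model(inputs, outputs))

# quantize_scope tells quantize_apply how to deserialize the custom config.
with quantize_scope({'NoOpQuantizeConfig': NoOpQuantizeConfig}):
    q_aware_model = quantize_apply(annotated_model)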

I have created a batch_norm_layer example here.

TensorFlow 2.x QAT layer coverage is not yet complete; please consider using TF 1.x and adding FakeQuant ops after operators.
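A minimal TF 1.x-style sketch of that idea, assuming a fixed activation range of [-6, 6]; the shapes and range here are illustrative, not taken from the question:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 784])
w = tf.Variable(tf.random_normal([784, 10]))
logits = tf.matmul(x, w)
# Insert a FakeQuant op after the operator to simulate 8-bit quantization
# of the activation during training; the min/max range is an assumption.
logits_q = tf.quantization.fake_quant_with_min_max_args(
    logits, min=-6.0, max=6.0, num_bits=8)
out = tf.reshape(logits_q, [-1, 2, 5])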