How to load and run an Intel-TensorFlow model on ML.NET


Environment: TensorFlow 2.4, Intel-TensorFlow 2.4

As far as I know, a TensorFlow model in pb format can be loaded in ML.NET.

However, I'm using the quantization package LPOT (https://github.com/intel/lpot), which relies on Intel-optimized TensorFlow (https://github.com/Intel-tensorflow/tensorflow). Even though Intel-TensorFlow is built on TensorFlow, it uses some quantized ops that have no registered OpKernel in stock TensorFlow (e.g. 'QuantizedMatMulWithBiasAndDequantize' is deprecated in TF). As a result, the quantized model cannot be run in a native TensorFlow environment without installing Intel-TensorFlow.
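One quick way to see whether a frozen graph depends on such Intel-only ops, without installing any flavor of TensorFlow, is to scan the serialized GraphDef for the op-name strings (op type names are stored as plain text inside the protobuf). This is a rough sketch, not a proper GraphDef parse; the op list and the model.pb path are assumptions for illustration:

```python
# Crude check: op type names appear as plain byte strings inside a
# serialized GraphDef, so a byte scan can flag Intel-only quantized ops
# without importing TensorFlow at all.
# The op list below is illustrative, not exhaustive -- extend it with any
# other Intel-TensorFlow-only op names you care about.
INTEL_QUANTIZED_OPS = [
    b"QuantizedMatMulWithBiasAndDequantize",
]

def find_intel_quantized_ops(pb_bytes: bytes) -> list:
    """Return the Intel-only quantized op names found in the raw pb bytes."""
    return [op.decode() for op in INTEL_QUANTIZED_OPS if op in pb_bytes]

# Usage (the path is hypothetical):
# with open("model.pb", "rb") as f:
#     print(find_intel_quantized_ops(f.read()))
```

If the scan reports any hits, the model will not run on a stock-TensorFlow-backed runtime such as ML.NET's default TensorFlow package.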

My goal is to run this quantized Intel-TensorFlow pb model on ML.NET. Does anyone know if Intel-TensorFlow is supported on ML.NET? Or is there any other way to do this?

Any help/suggestion is greatly appreciated.


1 Answer

Answered by Gopika - Intel:

Whether oneDNN is used in ML.NET depends on ML.NET's TensorFlow integration: ML.NET runs models through the native TensorFlow library, so oneDNN support is only available if that underlying TensorFlow build has oneDNN enabled.


You can try installing stock TensorFlow 2.5 in your ML.NET environment with Intel oneDNN enabled. You can install the stock TensorFlow wheel from this link: https://pypi.org/project/tensorflow/#files

To install the wheel file:

pip install __.whl

To enable oneDNN optimizations, please set the environment variable TF_ENABLE_ONEDNN_OPTS:

set TF_ENABLE_ONEDNN_OPTS=1

To display the verbose oneDNN log, set the environment variable DNNL_VERBOSE:

set DNNL_VERBOSE=1
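Note that the set commands above are Windows (cmd) syntax; on Linux/macOS the equivalent is export. The variables can also be set from Python itself, as long as this happens before TensorFlow is imported, since the flags are read when the library loads. A minimal sketch (whether verbose output actually appears depends on the TensorFlow build):

```python
import os

# Must be set BEFORE tensorflow is imported, because the flags are read
# at library load time.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"  # enable oneDNN optimizations
os.environ["DNNL_VERBOSE"] = "1"           # print oneDNN primitive logs

# import tensorflow as tf  # import only after the variables are set
```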

For more information on oneDNN verbose mode, please refer to: https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html

For more information on Intel Optimization for TensorFlow, please refer to: https://software.intel.com/content/www/us/en/develop/articles/intel-optimization-for-tensorflow-installation-guide.html