Why is TensorFlow XLA in experimental status


I'm interested in using XLA for training with a custom device (FPGA, ...).
However, I learned from the developer's tutorial that XLA is currently in experimental status.

https://www.tensorflow.org/performance/xla/

I don't understand why XLA is in experimental status.
Is there any major issue with XLA, apart from the limited performance improvement?

Thanks


1 Answer

Answered by Lescurel:

XLA is still very new: it was released in March 2017.

As stated on the TensorFlow XLA page:

Note: XLA is experimental and considered alpha. Most use cases will not see improvements in performance (speed or decreased memory usage). We have released XLA early so the Open Source Community can contribute to its development, as well as create a path for integration with hardware accelerators.

It was released because the development team wants feedback and contributions from the open-source community.

This is backed up by this statement on the Google Developers Blog:

XLA is still in early stages of development. It is showing very promising results for some use cases, and it is clear that TensorFlow can benefit even more from this technology in the future. We decided to release XLA to TensorFlow Github early to solicit contributions from the community and to provide a convenient surface for optimizing TensorFlow for various computing devices, as well as retargeting the TensorFlow runtime and models to run on new kinds of hardware.

So why is it considered experimental? Simply because there are many use cases and hardware configurations that have not been tested, and the benchmarks don't always show the expected improvements.

You may encounter bugs while using it, and you are encouraged to report them via the project's GitHub issue page.
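
If you want to try XLA on your own workload before committing to it, a minimal sketch of enabling the JIT compiler in the TensorFlow 1.x API looks roughly like this (the global_jit_level session option is the documented switch at the time of writing; the graph below is just a placeholder example, and details may differ across TensorFlow versions):

    import tensorflow as tf  # TensorFlow 1.x API

    # Turn on XLA JIT compilation for the whole session.
    config = tf.ConfigProto()
    config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

    # A trivial graph, just to have something to run under the JIT.
    x = tf.placeholder(tf.float32, shape=[None, 4], name="x")
    w = tf.Variable(tf.random_normal([4, 2]), name="w")
    y = tf.matmul(x, w)

    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}))

Running your model with and without this flag is an easy way to see whether XLA actually helps your use case, which, as noted above, is not guaranteed while it is still experimental.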