What is the correct way of using mixed_precision with tensorflow-rocm?


From what I can read, mixed precision has been supported for a long time by ROCm and by the upstream tensorflow-rocm package. However, when I try to use it, TensorFlow still reports a compatibility issue:

Mixed precision compatibility check (mixed_float16): WARNING

The warning appears because tf.config.experimental.get_device_details() returns no output for the GPU. I enable mixed precision with tf.keras.mixed_precision.set_global_policy("mixed_float16"), ignoring the warning that this is not an NVIDIA GPU.
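For reference, this is roughly what I am doing. It is a minimal sketch assuming a standard tensorflow-rocm install; the device-details dict is what comes back empty on my machine:

```python
import tensorflow as tf

# Inspect what TensorFlow reports for the first visible GPU.
# On some tensorflow-rocm builds this dict is empty or missing
# the compute-capability fields, which triggers the warning.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    details = tf.config.experimental.get_device_details(gpus[0])
    print(details)

# Enable mixed precision globally despite the compatibility warning.
tf.keras.mixed_precision.set_global_policy("mixed_float16")
print(tf.keras.mixed_precision.global_policy().name)
```

After this, newly built Keras layers compute in float16 while keeping float32 variables, which is the behavior I expect on NVIDIA hardware.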

Since this has supposedly been supported for so long, I wonder whether something is wrong with this approach. If it really is supported, why does the GPU report no device details, for example?
