Passing custom attributes from TF op to TFL (MLIR)


We are experimenting with our own MLIR stack to import TFL models and compile them for a specific accelerator. We are also building our own runtime/simulator to run these imported models. Our current workflow is to freeze the TF.keras model, convert it to TFL, and then use flatbuffer_translate to get the MLIR tfl dialect.
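For reference, a minimal sketch of that pipeline might look like the following (the model and file names are placeholders, and the exact flatbuffer_translate flag may differ depending on the TensorFlow build):

    import tensorflow as tf

    # Placeholder Keras model standing in for the real network.
    model = tf.keras.Sequential(
        [tf.keras.layers.Conv2D(8, 3, input_shape=(32, 32, 3))]
    )

    # Convert the Keras model to a TFLite flatbuffer.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)

    # The flatbuffer can then be imported into the MLIR tfl dialect, e.g.:
    #   flatbuffer_translate --tflite-flatbuffer-to-mlir model.tflite -o model.mlir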

To this end, I need to pass some attributes, specific to our target architecture, along with certain operations. I initially wanted to attach these attributes to an operation such as conv2d. However, I don't know of a way (if it is possible at all) to extend operations that are natively defined/supported by tfl.

I then tried defining and registering a custom TF operation with custom attributes. Semantically the operation is just an identity function; I only intended it as a placeholder to carry my attributes. When I tried this, the resulting TFL MLIR did contain my custom op, but the attributes were encoded into an opaque type whose value is a byte stream.
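As a rough sketch of that approach, assuming the placeholder op (here called accel_annotate, with a hypothetical tile_size attribute) has been registered in C++ via REGISTER_OP and compiled into accel_ops.so, the conversion side might look like this; allow_custom_ops keeps the unknown op in the flatbuffer as a custom op:

    import tensorflow as tf

    # Hypothetical library containing the identity-like placeholder op,
    # registered in C++ together with its custom attributes.
    accel_ops = tf.load_op_library("accel_ops.so")

    @tf.function(input_signature=[tf.TensorSpec([1, 32, 32, 8], tf.float32)])
    def annotated(x):
        # Hypothetical op and attribute names; the op just passes x through.
        return accel_ops.accel_annotate(x, tile_size=4)

    converter = tf.lite.TFLiteConverter.from_concrete_functions(
        [annotated.get_concrete_function()]
    )
    converter.allow_custom_ops = True  # keep the placeholder as a TFL custom op
    tflite_model = converter.convert()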

I could not find much documentation on how to decode these attributes. I'd appreciate any tips on decoding them, or any other suggestion that would help us achieve our goal.

Thanks!


1 Answer

Answered by jpienaar:

How are you encoding it in the input? I'm guessing you are seeing it encoded as an AttrValue (https://github.com/tensorflow/tensorflow/blob/b1e813e2ec9634ec0e6562b836e372e393f3de43/tensorflow/core/framework/attr_value.proto#L18), in which case you would decode it as you would any protobuf.
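If that guess is right and the bytes are a single serialized AttrValue, a minimal decoding sketch in Python might look like this (raw_bytes is a placeholder for the byte stream pulled out of the opaque attribute):

    from tensorflow.core.framework import attr_value_pb2

    # Placeholder for the byte stream from the opaque attribute in the TFL MLIR.
    raw_bytes = b"..."

    attr = attr_value_pb2.AttrValue()
    attr.ParseFromString(raw_bytes)
    print(attr)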