RaggedTensor on TPU

I am trying to train a neural network with TensorFlow that takes a RaggedTensor as input (tf.keras.layers.Input). It works fine on CPU and GPU, but I am really struggling to make it work on TPU. I would like to know whether some of you have managed to make it work (I am not necessarily looking for a direct solution, though that would be great; a few pointers would already help!). So far the error messages were explicit enough for me to keep going, but I am now stuck on how to go further.

What I did so far:

  1. I am using tf.data.Dataset to read data from TFRecords, but I had to explicitly distribute it with the strategy in order to disable prefetching to the device:
```python
dist_dataset = strategy.experimental_distribute_dataset(
    dataset,
    tf.distribute.InputOptions(
        experimental_prefetch_to_device=False
    )
)
```
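
For context, here is a minimal sketch of what my input pipeline looks like. The feature name "tokens", the file path, and the batch size are made up for illustration; the point is that tf.io.RaggedFeature yields a RaggedTensor per batch:

```python
import tensorflow as tf

# Hypothetical schema: each example holds a variable-length
# sequence of token ids under the key "tokens".
features = {
    "tokens": tf.io.RaggedFeature(tf.int64),
}

def parse_batch(serialized):
    # Parsing a batch of serialized examples yields a RaggedTensor
    # for the variable-length feature.
    return tf.io.parse_example(serialized, features)["tokens"]

dataset = (
    tf.data.TFRecordDataset("data.tfrecord")  # placeholder path
    .batch(32)
    .map(parse_batch, num_parallel_calls=tf.data.AUTOTUNE)
)
```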
  2. I then got Compilation failure: Detected unsupported operations when trying to compile graph ... on XLA_TPU_JIT: RaggedTensorToTensor, which could be (sort of) worked around by allowing soft device placement:
```python
tf.config.set_soft_device_placement(True)
```
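
For reference, I set it before creating the strategy. A minimal sketch, assuming the usual TPU initialization boilerplate (the TPU address depends on your environment):

```python
import tensorflow as tf

# Let ops without a TPU kernel (e.g. RaggedTensorToTensor) fall back
# to the TPU host's CPU instead of failing XLA compilation outright.
tf.config.set_soft_device_placement(True)

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")  # "" on TPU VMs; otherwise your TPU name/address
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
```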
  3. I am now stuck with Compilation failure: Input 1 to node '.../RaggedReduceSum/RaggedReduce/RaggedSplitsToSegmentIds/Repeat/SequenceMask/Range' with op Range must be a compile-time constant. I fully understand why I get this error: I am aware of which ops are available on TPU, and in particular that most dynamic shapes must be known at compile time to run there. But then I cannot see how I could use ragged tensors on TPU at all...
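
The only fallback I can see is the one I am trying to avoid: padding everything to a dense tensor with a static maximum length so that every shape is a compile-time constant, as in the sketch below (MAX_LEN is an illustrative bound; this of course defeats the purpose of ragged tensors):

```python
import tensorflow as tf

MAX_LEN = 128  # illustrative static upper bound on sequence length

def densify(ragged):
    # Pad (or truncate) every row to MAX_LEN so the second dimension
    # is a compile-time constant that XLA can handle.
    dense = ragged.to_tensor(shape=[None, MAX_LEN])
    # Keep a mask so downstream layers can ignore the padding.
    mask = tf.sequence_mask(ragged.row_lengths(), maxlen=MAX_LEN)
    return dense, mask
```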

Any ideas would be appreciated :)

P.S.: I haven't seen much news from the TensorFlow team about RaggedTensor support on TPU since this answer back in July 2020, but I may have missed something. Pointing me to the relevant GitHub threads would already be great so that I can investigate further.
