I am working on a project on a GNN-based decoder: a project to improve channel decoding using NVIDIA's Sionna library. I want to measure FLOPs, but since these are all custom models, I don't know where to start. How can I measure the amount of computation of the following call function, which is decorated with @tf.function?

@tf.function(jit_compile=True)
def call(self, batch_size, ebno_db):

    # no rate-adjustment for uncoded transmission or es_no scenario
    if self._decoder is not None and self._es_no==False:
        no = ebnodb2no(ebno_db, self._num_bits_per_symbol, self._k/self._n)
    else: #for uncoded transmissions the rate is 1
        no = ebnodb2no(ebno_db, self._num_bits_per_symbol, 1)

    b = self._binary_source([batch_size, self._k])

    if self._encoder is not None:
        c = self._encoder(b)
    else:
        c = b

    # check that rate calculations are correct
    assert self._n==c.shape[-1], "Invalid value of n."

    # zero padding to support odd codeword lengths
    if self._n%2==1:
        c_pad = tf.concat([c, tf.zeros([batch_size, 1])], axis=1)
    else: # no padding
        c_pad = c
    x = self._mapper(c_pad)

    y = self._channel([x, no])
    llr = self._demapper([y, no])

    # remove zero padded bit at the end
    if self._n%2==1:
        llr = llr[:,:-1]
    

    # and run the decoder
    if self._decoder is not None:
        llr = self._decoder(llr)
        
    if self._return_infobits:
        return b, llr
    else:
        return c, llr
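
Here is roughly what I was considering for the full call function: trace it into a concrete graph and count FLOPs with the legacy TF1 profiler. This is only a rough sketch; the model name, the example inputs, and the idea of temporarily dropping jit_compile=True are my own assumptions, and I am not sure the op-level count stays meaningful once XLA fuses the ops.

    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import (
        convert_variables_to_constants_v2_as_graph)

    # "model" is the end-to-end Keras model that defines the call() above
    # (hypothetical name). Trace with fixed example inputs; it may be
    # necessary to remove jit_compile=True so the graph keeps individual ops.
    concrete = model.call.get_concrete_function(tf.constant(1), tf.constant(5.0))

    # Freeze the variables so the profiler sees a plain GraphDef.
    frozen_func, graph_def = convert_variables_to_constants_v2_as_graph(concrete)

    with tf.Graph().as_default() as graph:
        tf.compat.v1.import_graph_def(graph_def, name="")
        opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
        flops = tf.compat.v1.profiler.profile(graph=graph, cmd="op", options=opts)
        print("Total float ops:", flops.total_float_ops)

Is this a reasonable way to do it for a Sionna end-to-end model, or is there a better tool for this?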

How much computation does the decoding step itself require? Below is a link to the GitHub notebook that this code comes from.

https://github.com/NVlabs/gnn-decoder/blob/master/GNN_decoder_BCH.ipynb

I am also curious how to measure FLOPs even when the code is not a deep learning model, as in the example above.
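
For the decoder on its own, I was thinking of wrapping just the decoder call in a separate tf.function and counting FLOPs the same way. Again only a sketch: decoder and n stand for the GNN decoder instance and the codeword length from the notebook, and the batch size of 1 is an arbitrary choice so the count is per codeword.

    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import (
        convert_variables_to_constants_v2_as_graph)

    @tf.function
    def decode_only(llr):
        # "decoder" is assumed to be the trained GNN decoder instance
        return decoder(llr)

    # n is the codeword length; batch size 1 gives FLOPs per codeword.
    concrete = decode_only.get_concrete_function(
        tf.TensorSpec(shape=[1, n], dtype=tf.float32))
    frozen_func, graph_def = convert_variables_to_constants_v2_as_graph(concrete)

    with tf.Graph().as_default() as graph:
        tf.compat.v1.import_graph_def(graph_def, name="")
        opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
        flops = tf.compat.v1.profiler.profile(graph=graph, cmd="op", options=opts)
        print("Decoder float ops per codeword:", flops.total_float_ops)

Does this make sense for the GNN decoder, and is there a way to get FLOPs for the classical (non-neural) blocks as well?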
