I'm working on a DL project using PyTorch Lightning and torchmetrics.
I'm using a metric that is simply irrelevant for some examples, which means there are batches for which this metric evaluates to NaN. The problem is that the epoch-wise aggregated value then also comes out as NaN. Is there a workaround? torchmetrics is very convenient and I'd like to avoid switching away from it.
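For context, here is a minimal sketch of the failure mode with dummy values (the real metric is assumed to be any per-batch metric that returns NaN on batches where it is undefined):

```python
import torch

# Per-batch metric values; the metric is undefined (NaN) on the second batch.
batch_values = [torch.tensor(0.8), torch.tensor(float("nan")), torch.tensor(0.6)]

# Naive epoch-wise aggregation: a single NaN batch poisons the whole mean.
epoch_mean = torch.stack(batch_values).mean()
print(epoch_mean)  # tensor(nan)

# The behaviour I want: ignore the NaN batches when averaging.
epoch_nanmean = torch.stack(batch_values).nanmean()
print(epoch_nanmean)  # tensor(0.7000)
```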
I have seen the torchmetrics.MeanMetric
object (and similar ones), but I couldn't make it work. If the solution goes through this kind of object, I would very much appreciate an example.