We have three kinds of losses:
- `loss`
- `batch_loss`
- `train_loss`

As I understand it, `loss` is a tensor, `batch_loss` is the value of that tensor, and `train_loss` is the running sum of the `batch_loss` values. That part is fine for me.
My question is: why does AllenNLP compute `batch_loss` per batch rather than a cumulative loss per `batch_group`?
Also, I don't understand the need for a `batch_group` inside an epoch, and batches inside a `batch_group`.
My understanding is that an epoch contains `batch_group`s, and each `batch_group` contains batches, yet `batch_loss` is calculated per batch, not per `batch_group`. Why?
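In pseudo-code, this is roughly how I picture the loop (a sketch of my own understanding with placeholder names, not the actual AllenNLP Trainer code):

```python
# A sketch of my understanding only; `model` and `batch_groups` are placeholders.
def train_epoch(model, batch_groups):
    train_loss = 0.0
    for batch_group in batch_groups:        # the epoch iterates over batch_groups
        for batch in batch_group:           # each batch_group holds one or more batches
            loss = model(**batch)["loss"]   # `loss` is a tensor
            batch_loss = loss.item()        # `batch_loss` is that tensor's value
            train_loss += batch_loss        # `train_loss` accumulates over the epoch
    return train_loss
```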
This is actually a bug, so thanks for pointing that out! There is a PR open now to fix it: https://github.com/allenai/allennlp/pull/4706
A `batch_group` always consists of just a single batch unless you're using `num_gradient_accumulation_steps` greater than 1, i.e. you're using gradient accumulation, which is a method for getting a larger effective batch size. See https://medium.com/ai2-blog/tutorial-training-on-larger-batches-with-less-memory-in-allennlp-1cd2047d92ad, for example.
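For illustration, here is a minimal sketch of what gradient accumulation does with a plain PyTorch model and optimizer (the names and loop structure are my own simplification, not the actual `GradientDescentTrainer` code):

```python
# Simplified sketch of gradient accumulation; `model`, `optimizer`, and `batches`
# are placeholders, not AllenNLP objects.
def train_one_epoch(model, optimizer, batches, num_gradient_accumulation_steps=1):
    train_loss = 0.0
    # Group consecutive batches into batch_groups of size num_gradient_accumulation_steps.
    batch_groups = [
        batches[i : i + num_gradient_accumulation_steps]
        for i in range(0, len(batches), num_gradient_accumulation_steps)
    ]
    for batch_group in batch_groups:
        optimizer.zero_grad()
        for batch in batch_group:
            loss = model(**batch)["loss"]      # tensor for this single batch
            # Scale so the summed gradients approximate one big batch of the combined size.
            (loss / len(batch_group)).backward()
            batch_loss = loss.item()           # plain float for this batch
            train_loss += batch_loss           # running total over the epoch
        optimizer.step()                       # one parameter update per batch_group
    return train_loss / len(batches)
```

With `num_gradient_accumulation_steps=1` every `batch_group` is a single batch, so this reduces to an ordinary training loop; with a larger value the gradients of several small batches are summed before a single optimizer step, which approximates training on one larger batch.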