PyTorch: Having trouble understanding the in-place modification that's happening


This seems to be a common error, but I can't really understand its actual cause. I am having trouble figuring out where the in-place modification is happening. My forward function:

def forward(self, input, hidden=None):
    if hidden is None:
        hidden = self.init_hidden(input.size(0))
    out, hidden = self.lstm(input, hidden)
    out = self.linear(out)
    return out, hidden

The training loop:

def training(dataloader, iterations, device):
    torch.autograd.set_detect_anomaly(True)
    model = NModel(662, 322, 2, 1)
    hidden = None
    model.train()
    loss_fn = nn.MSELoss()  # MSELoss is a class in torch.nn (the functional form is F.mse_loss)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    running_loss = []
    last_loss = 0
    for i, (feature, label) in tqdm(enumerate(dataloader)):
        optimizer.zero_grad()
        outputs, hidden = model(feature, hidden)
        loss = loss_fn(outputs, label)
        print("loss item", loss.item())
        running_loss.append(loss.item())
        loss.backward(retain_graph=True)
        optimizer.step()
        if i % 1000 == 0:
            last_loss = len(running_loss) / 1000
    return last_loss

The error's stack trace:

Traceback (most recent call last):
  File "main.py", line 18, in <module>
    main()
  File "main.py", line 14, in main
    training(dataloader=training_loader, iterations=3, device=0)
  File "/home//gitclones/feature-extraction/training.py", line 30, in training
    loss.backward(retain_graph=True)
  File "/home/miniconda3/envs/pytorch-openpose/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home//miniconda3/envs/pytorch-openpose/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [322, 1288]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

When I remove optimizer.step(), the code runs, but then I think there is no backpropagation happening.

[EDIT] Strangely, it now works when I don't pass the hidden state to the LSTM in the forward pass:

def forward(self, input, hidden=None):
    if hidden is None:
        hidden = self.init_hidden(input.size(0))
    out, hidden = self.lstm(input)
    out = self.linear(out)
    return out, hidden

There is 1 answer

Answer by Deusy94

Adding hidden = tuple([each.data for each in hidden]) after your optimizer.step() fixes the error, but it detaches the hidden state from the previous iteration's graph, so no gradient flows back through it. You can achieve the same effect with hidden = tuple([each.detach() for each in hidden]).
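
For illustration, a minimal sketch of how the question's training loop could look with that fix applied (assuming the same model, loss_fn, optimizer, and dataloader setup as above); once the hidden state is detached every iteration, retain_graph=True is no longer needed:

for i, (feature, label) in tqdm(enumerate(dataloader)):
    optimizer.zero_grad()
    outputs, hidden = model(feature, hidden)
    loss = loss_fn(outputs, label)
    running_loss.append(loss.item())
    loss.backward()
    optimizer.step()
    # Detach the hidden state so the next backward() does not reach back
    # into a graph whose parameters optimizer.step() has modified in place.
    hidden = tuple(each.detach() for each in hidden)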