The class labels for the two-class model are 0, 1, 0, 0, etc. There is only one label per input sequence. The labels are read from a CSV file (straightforward), collected in a Python list, and converted to a torch.Tensor.
A single label's value is tensor(0), and its shape is reported as torch.Size([]).
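For reference, this is roughly how the labels are built (a minimal sketch; pandas, the file name, and the "label" column name are assumptions):

import pandas as pd
import torch

# hypothetical CSV with a "label" column containing 0 or 1 per row
df = pd.read_csv("train.csv")
labels = torch.tensor(df["label"].tolist())  # shape: torch.Size([num_examples])
print(labels[0], labels[0].shape)            # tensor(0) torch.Size([])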
When I create the batches and pass them to the model (a Hugging Face sequence classification model), each batch iteration does:
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_token_type_ids = batch[2].to(device)
b_labels = batch[3].to(device)
and these are then passed to the model (as part of training):
loss, logits = model(b_input_ids,
attention_mask=b_input_mask,
token_type_ids=b_token_type_ids,
labels=b_labels)
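For context, the batches come from a standard TensorDataset / DataLoader setup, roughly like this (a minimal sketch; the tensor names, sampler, and batch size are assumptions):

from torch.utils.data import TensorDataset, DataLoader, RandomSampler

# input_ids, attention_masks, token_type_ids, labels are pre-built torch tensors
dataset = TensorDataset(input_ids, attention_masks, token_type_ids, labels)
train_dataloader = DataLoader(dataset, sampler=RandomSampler(dataset), batch_size=32)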
The error message is as follows:
ValueError                                Traceback (most recent call last)
<ipython-input-17-b3a36df2f659> in <module>()
89 attention_mask=b_input_mask,
90 token_type_ids=b_token_type_ids,
---> 91 labels=b_labels)
92
93 # Accumulate the training loss over all of the batches so that we can
ValueError: too many values to unpack (expected 2)
This is code I was able to run and train with a few days ago (I have a saved model from that run). I assume the error has something to do with the sizes/shapes of the inputs or labels.
Has anyone hit a similar problem? Thanks.