Reputation: 1533
I have a pre-trained BERT model with my own head on top.
I am using a fine-tuned RoBERTa model, unbiased-toxic-roberta, trained on the Jigsaw data:
https://huggingface.co/unitary/unbiased-toxic-roberta
Creating the data using a PyTorch Dataset:
import transformers as tr

tokenizer = tr.RobertaTokenizer.from_pretrained("/home/pc/unbiased_toxic_roberta")
train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=512, return_tensors="pt")
import torch

class SEDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)
train_data = SEDataset(train_encodings, train_labels)
import numpy as np

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    acc = np.sum(predictions == labels) / predictions.shape[0]
    return {"accuracy": acc}
The model adds a few layers on top of the pre-trained model:
import torch.nn as nn
from transformers import AutoModel

class PosModel(nn.Module):
    def __init__(self):
        super(PosModel, self).__init__()
        self.base_model = tr.RobertaForSequenceClassification.from_pretrained('/home/pc/unbiased_toxic_roberta')
        self.dropout = nn.Dropout(0.5)
        self.linear = nn.Linear(768, 2)  # the output features from the base model are 768, and 2 is your number of labels

    def forward(self, input_ids, attn_mask):
        outputs = self.base_model(input_ids, attention_mask=attn_mask)
        # You write your new head here
        outputs = self.dropout(outputs[0])
        outputs = self.linear(outputs)
        return outputs
model = PosModel()
print(model)
Training step:
Using TrainingArguments to pass training parameters to the Trainer:
training_args = tr.TrainingArguments(
    # report_to='wandb',
    output_dir='/home/pc/1_Proj_hate_speech/results_roberta',  # output directory
    overwrite_output_dir=True,
    num_train_epochs=20,             # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    learning_rate=2e-5,
    warmup_steps=1000,               # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs3',           # directory for storing logs
    logging_steps=1000,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True
)
trainer = tr.Trainer(
    model=model,               # the instantiated 🤗 Transformers model to be trained
    args=training_args,        # training arguments, defined above
    train_dataset=train_data,  # training dataset
    eval_dataset=val_data,     # evaluation dataset
    compute_metrics=compute_metrics
)
Running the trainer:
trainer.train()
Error:
TypeError: Caught TypeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/pc/.local/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/pc/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'input_ids'
Upvotes: 2
Views: 17862
Reputation: 95
I had the same issue: I had defined a function named 'model' and was calling that function instead of the model instance. I think you may be doing the same thing at the end. Please check that.
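For illustration only (this snippet is not from the original answer, and the names are hypothetical), the kind of name clash being described looks roughly like this:

import torch.nn as nn

class PosModel(nn.Module):
    def forward(self, input_ids=None, attention_mask=None, labels=None):
        ...

def model():
    # A plain helper function that happens to reuse the name "model".
    return PosModel()

# Calling the helper with the keyword arguments the Trainer normally
# forwards to the model fails, because a plain function with no such
# parameters cannot accept them:
#   model(input_ids=ids, attention_mask=mask)
#   TypeError: model() got an unexpected keyword argument 'input_ids'
# Making sure the name refers to an instantiated nn.Module avoids this:
my_model = model()  # or simply PosModel()
# trainer = tr.Trainer(model=my_model, ...)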
Upvotes: 2
Reputation: 441
It seems your tokenizer is adding the "input_ids" key when encoding the data, but the model's forward() does not expect this tensor as an input. Maybe you can try removing this data from train_encodings and try again.
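As a rough sketch of the underlying mechanics (not part of the original answer; it assumes a generic "roberta-base" checkpoint and RobertaModel instead of RobertaForSequenceClassification, so the 768-dimensional hidden states feed the custom linear layer): the Trainer passes every key of a dataset item to the model as a keyword argument, so either those keys are removed from the encodings as suggested, or the custom head's forward() accepts parameters named input_ids, attention_mask and labels:

import torch
import torch.nn as nn
import transformers as tr

class PosModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical checkpoint; the asker would use their local path instead.
        self.base_model = tr.RobertaModel.from_pretrained("roberta-base")
        self.dropout = nn.Dropout(0.5)
        self.linear = nn.Linear(768, 2)

    # Parameter names mirror the keys produced by the tokenizer/dataset,
    # so the Trainer's model(**inputs) call matches this signature.
    def forward(self, input_ids=None, attention_mask=None, labels=None):
        outputs = self.base_model(input_ids=input_ids, attention_mask=attention_mask)
        cls_state = outputs[0][:, 0]           # hidden state of the first token, shape (batch, 768)
        logits = self.linear(self.dropout(cls_state))
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits, labels)
            return (loss, logits)              # Trainer reads the loss from the first element
        return (logits,)

Whichever route is taken, the key names in the dataset items and the parameter names of forward() have to line up.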
Upvotes: 2