Reputation: 434
When I use Hugging Face Transformers, I keep getting this warning, and the model seems to keep hallucinating:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print last output tokens from bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
The output:
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
DialoGPT: What is love?
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
DialoGPT: I love lamp
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
DialoGPT: I love lamp
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
DialoGPT: Only lamp
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
DialoGPT: I love lamp
I tried tokenizer.padding_side = 'left', but the warning still appears.
Upvotes: 0
Views: 1117
Reputation: 1
I've encountered the same problem. According to the model description, right padding is actually correct here:
DialoGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
Except for the warning, it's working as intended: DialoGPT is based on GPT-2, which is on the older (2019) and smaller side, so its output can seem incoherent compared to state-of-the-art LLMs. You can try testing with more prompts to get a feel for it.
To disable the warning, add the following before you create the tokenizer:
import logging
logging.getLogger('transformers').setLevel(logging.ERROR)
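To see why this silences the message, here is a minimal stdlib-only sketch of the mechanism: transformers emits its warnings through a logger in the `transformers` namespace, so raising that logger's level to ERROR drops anything logged at WARNING. (The message text below is copied from the warning above; the handler setup is just for the demonstration.)

```python
import io
import logging

logger = logging.getLogger('transformers')
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))

logger.warning("right-padding was detected!")  # effective level is WARNING: emitted
logger.setLevel(logging.ERROR)
logger.warning("right-padding was detected!")  # now below ERROR: suppressed

print(stream.getvalue().count("right-padding"))  # 1
```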
Upvotes: 0
Reputation: 1
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium", padding_side="left")
For a decoder-only model, the padding side matters, and padding_side is a parameter here, not a function.
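Why the side matters can be sketched without downloading the model. A decoder-only model continues generating from the last position of each sequence, so right padding puts pad tokens between the prompt and the continuation, while left padding keeps the prompt flush against the generation point. This toy example uses made-up token ids and pad id 0:

```python
pad_id = 0
batch = [[5, 6, 7], [8, 9]]  # two prompts of unequal length

def pad(seqs, side, pad_id, length):
    """Pad each sequence to `length` on the given side."""
    out = []
    for s in seqs:
        padding = [pad_id] * (length - len(s))
        out.append(padding + s if side == "left" else s + padding)
    return out

right = pad(batch, "right", pad_id, 3)  # [[5, 6, 7], [8, 9, 0]] -- pad after the prompt
left = pad(batch, "left", pad_id, 3)    # [[5, 6, 7], [0, 8, 9]] -- pad before the prompt
```

With right padding, the second sequence ends in a pad token, so the model would generate after the pad rather than after the prompt; with left padding, generation starts right after `9`, which is what the warning is about.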
Upvotes: 0