Reputation: 65
I am using the GPT-2 pre-trained model. The code I am working on takes a sentence and generates the next word for that sentence. I want to print multiple predictions, for example the three predictions with the highest probabilities. For example, for the input sentence "It's an interesting ....", the predictions might be: "Books", "story", "news".
Is there a way I can modify this code to show these predictions instead of just one?
Also, there are two parts of the code I do not understand: what is the meaning of the numbers in predictions[0, -1, :], and why do we use [0] in predictions = outputs[0]?
import torch
from pytorch_transformers import GPT2Tokenizer, GPT2LMHeadModel
# Load pre-trained model tokenizer (vocabulary)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# Encode a text input
text = "The fastest car in the "
indexed_tokens = tokenizer.encode(text)
# Convert indexed tokens into a PyTorch tensor
tokens_tensor = torch.tensor([indexed_tokens])
# Load pre-trained model (weights)
model = GPT2LMHeadModel.from_pretrained('gpt2')
# Set the model in evaluation mode to deactivate the DropOut modules
model.eval()
# If you have a GPU, put everything on cuda
#tokens_tensor = tokens_tensor.to('cuda')
#model.to('cuda')
# Predict all tokens
with torch.no_grad():
    outputs = model(tokens_tensor)
    predictions = outputs[0]
#print(predictions)
# Get the predicted next sub-word
predicted_index = torch.argmax(predictions[0, -1, :]).item()
predicted_text = tokenizer.decode(indexed_tokens + [predicted_index])
# Print the predicted word
#print(predicted_index)
print(predicted_text)
The result of the above code will be:
The fastest car in the world.
Upvotes: 4
Views: 1290
Reputation: 2182
You can use torch.topk
as follows:
predicted_indices = [x.item() for x in torch.topk(predictions[0, -1, :], k=3).indices]
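For example, to turn those indices back into text, you can decode each candidate after the original prompt (a minimal sketch continuing from the question's code; the top_k name is just illustrative):
# topk returns both the values (logits) and the indices of the k largest entries;
# predictions[0, -1, :] is the logit vector for the next token after the last input token
top_k = torch.topk(predictions[0, -1, :], k=3)
predicted_indices = top_k.indices.tolist()
# Decode each candidate token appended to the original input
for idx in predicted_indices:
    print(tokenizer.decode(indexed_tokens + [idx]))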
Upvotes: 2