Arth1234

Reputation: 1

I keep getting the same error when using HuggingFacePipeline

I am a beginner in generative AI. I am currently following a tutorial about it, but I have hit an issue that I cannot resolve.

from langchain.llms import HuggingFacePipeline
from langchain import PromptTemplate, HuggingFaceHub, LLMChain
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
import os
    
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "my api" 
model_id = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)


pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, max_length=128)
local_llm = HuggingFacePipeline(pipeline=pipe)

prompt = PromptTemplate(
    input_variables=["name"],
    template="Can you tell me about the politician {name}"
)

chain = LLMChain(llm=local_llm, prompt=prompt)
chain.run("Donald Trump")

I keep getting the error

ValueError: The following model_kwargs are not used by the model: ['return_full_text'] (note: typos in the generate arguments will also show up in this list)
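For reference, the same pipeline seems to run fine when called directly through transformers (nothing passes return_full_text on that path), which suggests the extra kwarg is added by the LangChain wrapper rather than by my code:

# Direct call, bypassing LangChain: no return_full_text is involved,
# so this should not hit the ValueError above.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="google/flan-t5-base", max_length=128)
print(pipe("Can you tell me about the politician Donald Trump"))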

I've tried using Jupyter Notebook, Google Colab, and PyCharm, and yet the issue persists.

I tried asking ChatGPT, but to no avail.

Upvotes: 0

Views: 1428

Answers (2)

j3ffyang

Reputation: 2470

Since some of LangChain's APIs have been deprecated, I slightly modified your code, commenting out the unused modules for your reference. This version should work:

from langchain_community.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
# from langchain_community.llms import HuggingFaceHub
from langchain.chains import LLMChain
# from langchain import PromptTemplate, HuggingFaceHub, LLMChain
# from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
# import os

# os.environ["HUGGINGFACEHUB_API_TOKEN"] = "my api"
model_id = "google/flan-t5-xxl"
# tokenizer = AutoTokenizer.from_pretrained(model_id)
# model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
# 
# pipeline = pipeline("text2text-generation", model=model, tokenizer=tokenizer, max_length=128)
# local_llm = HuggingFacePipeline(pipeline=pipeline)

from langchain_community.llms import HuggingFaceEndpoint
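# Note: HuggingFaceEndpoint still expects a Hugging Face token, e.g. via the
# HUGGINGFACEHUB_API_TOKEN environment variable that is commented out above.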
local_llm = HuggingFaceEndpoint(
    repo_id=model_id, max_new_tokens=128, temperature=0.5)

prompt = PromptTemplate(
    input_variables=["name"],
    template="Can you tell me about the politician {name}"
)

chain = LLMChain(llm=local_llm, prompt=prompt)
print(chain.invoke("Donald Trump"))

PS: HuggingFaceEndpoint will be replacing HuggingFaceHub.
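On newer LangChain versions these integrations have moved again, into the langchain-huggingface partner package. A minimal sketch, assuming that package is installed:

# pip install langchain-huggingface
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="google/flan-t5-xxl",
    max_new_tokens=128,
    temperature=0.5,
)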

Upvotes: 1

fam-woodpecker

Reputation: 707

I believe this is a currently known issue with how Hugging Face has implemented their error checks and how errors are thrown. The change is deliberate but recent, so some models or pipelines may still need to be updated to accommodate it.
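As a quick illustration of that check (using the question's model; this just reproduces the error, it is not a fix), passing any kwarg that generate() does not recognize triggers the same ValueError:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
inputs = tokenizer("hello", return_tensors="pt")

# Raises: ValueError: The following `model_kwargs` are not used by the model:
# ['return_full_text'] ...
model.generate(**inputs, return_full_text=True)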

See here for the same error, which highlights a commit that broke LLMChain.

You may be able to downgrade your libraries to make things work; you will just need to find versions of each library that are compatible with one another from before that specific commit.
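A quick way to see what you currently have before pinning anything (the exact versions to pin depend on when that commit landed, so I won't guess them here):

import langchain
import transformers

# Print the installed versions so you know your starting point before downgrading.
print("langchain:", langchain.__version__)
print("transformers:", transformers.__version__)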

Upvotes: 0
