Reputation: 1041
After updating my code to replace the deprecated LLMChain with the new pipeline (RunnableSequence) approach, I am getting an error because ConstitutionalChain still expects the old LLMChain format. Can anyone suggest a solution? Is there a newer way to do this?
#from langchain.chains import LLMChain
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.output_parsers import StrOutputParser
from langchain.prompts import ChatPromptTemplate
from langchain.chains.constitutional_ai.base import ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
# Initialize the model
llm = ChatGoogleGenerativeAI(
    google_api_key=GEMINI_API_KEY, model="gemini-1.5-flash", temperature=0.3)
# Create a chat chain for creating text.
#chat_chain = LLMChain(llm=llm, prompt=ChatPromptTemplate.from_template("{query}"))
# Create a runnable sequence for the chat chain
chat_chain = ChatPromptTemplate.from_template("{query}") | llm | StrOutputParser()
# Create a principle for our constitutional chain.
principle = ConstitutionalPrinciple(
    name="Fear of Spiders",
    critique_request="The model should not include spiders in stories it writes.",
    revision_request="Modify the story to be about animals other than spiders.",
)
constitutional_chain = ConstitutionalChain.from_llm(
    chain=chat_chain,
    constitutional_principles=[principle],
    llm=llm,
)
# Set the input query for the chat chain.
query = {"query": "Please give me the main events of a story about three household pets."}
# Run the constitutional chain using the query as the first input.
result = constitutional_chain.invoke(query)
print(result["output"])
This is the error:
AttributeError: 'RunnableSequence' object has no attribute 'get'
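For reference, the previous version with the deprecated LLMChain looked roughly like this (a sketch reconstructed from the commented-out lines above; llm, principle, and ChatPromptTemplate are the same objects as in the snippet, and ConstitutionalChain accepted the LLMChain there without this error):
# Sketch of the pre-deprecation version, reconstructed from the commented-out
# lines above. llm, principle, and ChatPromptTemplate are reused from the
# snippet; ConstitutionalChain returns its result under the "output" key.
from langchain.chains import LLMChain

old_chat_chain = LLMChain(llm=llm, prompt=ChatPromptTemplate.from_template("{query}"))

old_constitutional_chain = ConstitutionalChain.from_llm(
    chain=old_chat_chain,
    constitutional_principles=[principle],
    llm=llm,
)

result = old_constitutional_chain.invoke(
    {"query": "Please give me the main events of a story about three household pets."}
)
print(result["output"])
I would like to keep the new prompt | llm | StrOutputParser() style if possible, rather than going back to LLMChain just to satisfy ConstitutionalChain.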
Upvotes: 0
Views: 48