tobias

Reputation: 839

How to make Agents not exceed token length in Langchain?

I am currently trying to make use of a ChatGPT plugin in langchain:

from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
from langchain.tools import AIPluginTool

tool = AIPluginTool.from_plugin_url("https://www.wolframalpha.com/.well-known/ai-plugin.json")

llm = ChatOpenAI(temperature=0, streaming=True, max_tokens=1000)
tools = load_tools(["requests_all"])
tools += [tool]

agent_chain = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
# agent_chain.run("what t shirts are available in klarna?")

agent_chain.run("How can I solve dx/dt = a(t)*x + b(t)")

However, I get the error:

InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 5071 tokens (4071 in the messages, 1000 in the completion). Please reduce the length of the messages or completion.

Upvotes: 1

Views: 3618

Answers (1)

netr

Reputation: 76

It looks like the WolframAlpha plugin spec makes the prompt almost as large as base gpt-3.5-turbo's entire context window: your error shows the messages alone take 4,071 of the 4,097 available tokens, so there is no room left for a 1,000-token completion.
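You can verify this yourself by counting tokens with tiktoken. A rough check, assuming (as I believe is the case) that running the AIPluginTool returns the plugin's API spec as text, which is what gets fed back into the agent's prompt:

import tiktoken

# Count the tokens the plugin spec alone contributes to the prompt.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
spec_text = tool.run("")  # assumption: AIPluginTool ignores its input and returns the API spec
print(f"plugin spec tokens: {len(enc.encode(spec_text))}")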

One thing you can do is switch to the 16k-context model, gpt-3.5-turbo-16k:

llm = ChatOpenAI(temperature=0, streaming=True, max_tokens=1000, model_name="gpt-3.5-turbo-16k")
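
If you want to stay on the 4k model instead, another option is to wrap the plugin tool so its output is clipped before it re-enters the prompt. This is only a sketch building on your snippet above; the 8,000-character cutoff is an arbitrary value I picked, and a hard character cutoff may drop parts of the spec the agent needs:

from langchain.tools import Tool

# Wrap the plugin tool so its output is truncated before it goes back into the prompt.
# The 8000-character cutoff is arbitrary; tune it to your context budget.
truncated_plugin = Tool(
    name=tool.name,
    description=tool.description,
    func=lambda q: tool.run(q)[:8000],
)

tools = load_tools(["requests_all"]) + [truncated_plugin]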

Hopefully this helps you.

Upvotes: 0
