IISsENII

Reputation: 23

ValueError: This output parser only works on ChatGeneration

I am following Krish Naik's tutorial on LangChain and agents. I'm trying to build an agent executor using create_openai_tools_agent. My tools include a Wikipedia tool and an Arxiv tool. However, whenever I run a query, I get an error:

This output parser only works on ChatGeneration.

from langchain.agents import create_openai_tools_agent
agent = create_openai_tools_agent(llm, tools, prompt)

## Agent Executor
from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "What's the paper 1605.08386 about?"})
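
For context, the tools and model are set up roughly like this (a sketch following the tutorial; the exact wrapper classes and variable names may differ from my notebook):

from langchain_community.tools import WikipediaQueryRun, ArxivQueryRun
from langchain_community.utilities import WikipediaAPIWrapper, ArxivAPIWrapper

# Wikipedia and Arxiv query tools, as used in the tutorial
wiki_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
arxiv_tool = ArxivQueryRun(api_wrapper=ArxivAPIWrapper())
tools = [wiki_tool, arxiv_tool]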

I tried adding the output parser error handling parameter:

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True)

But this did not fix it.
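(As far as I can tell, handle_parsing_errors only catches OutputParserException, while this is a plain ValueError raised inside the parser, so it never gets a chance to intervene.)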

    153 try:
    154     self._validate_inputs(inputs)
    155     outputs = (
--> 156         self._call(inputs, run_manager=run_manager)
    157         if new_arg_supported
    158         else self._call(inputs)
    159     )
    161     final_outputs: Dict[str, Any] = self.prep_outputs(
    162         inputs, outputs, return_only_outputs
    163     )
    164 except BaseException as e:
...
---> 60         raise ValueError("This output parser only works on ChatGeneration output")
     61     message = result[0].message
     62     return parse_ai_message_to_openai_tool_action(message)

Upvotes: 2

Views: 823

Answers (2)

Robert Auny

Reputation: 1

Here's what worked for me: wrap the original output that caused the issue in a ChatGeneration, then use the resulting message as your new output.


from langchain_core.outputs import ChatGeneration
from langchain_core.messages import AIMessage

# Wrap the original output in a ChatGeneration so it has the shape
# the parser expects, then take the AIMessage back out of it.
outputs = ChatGeneration(
    message=AIMessage(
        content=<your original output>,
        type="ai",
    )
).message
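
This mirrors what the failing parser does internally: as the traceback shows, it takes result[0].message from a ChatGeneration and passes it to parse_ai_message_to_openai_tool_action, so anything that is not a ChatGeneration raises the ValueError.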

Upvotes: 0

Faisal Gillani

Reputation: 1

Load the Ollama Llama 2 model using the ChatOllama class from the langchain_community.chat_models module. The agent's output parser only works on ChatGeneration output, which is what chat models produce, so wrapping the model in a non-chat LLM class (e.g. the plain Ollama class) triggers this error.

from langchain_community.chat_models import ChatOllama

## Load the Ollama Llama 2 chat model
llm = ChatOllama(model="llama2")

By following this step, you should be able to diagnose and fix the issue with your agent executor: once the model is a chat model, the parser receives ChatGeneration output.
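
A minimal sketch of the full wiring, assuming tools and prompt are defined as in the question (whether llama2 actually emits tool calls depends on your model and Ollama version):

from langchain_community.chat_models import ChatOllama
from langchain.agents import create_openai_tools_agent, AgentExecutor

# A chat model, so the agent's parser receives ChatGeneration output
llm = ChatOllama(model="llama2")
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "What's the paper 1605.08386 about?"})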

Upvotes: 0
