Reputation: 5458
I'm using a conversational agent with some tools; one of them is a calculator tool (for the sake of example).
The agent is initialized as follows:
conversational_agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=[CalculatorTool()],
    llm=llm_gpt4,
    verbose=True,
    max_iterations=2,
    early_stopping_method="generate",
    memory=memory,
    # agent_kwargs=dict(output_parser=output_parser),
)
When the CalculatorTool is activated, it returns a string output. The agent takes that output and processes it further to get to the "Final Answer", thus changing the formatting of the output from the CalculatorTool.
For example, for the input 10*10, the tool's run() function will return 100, which is propagated back to the agent; the agent then calls self._take_next_step() and continues processing the output. It will produce a final output similar to "the result of your prompt of 10x10 is 100".
I don't want the added formatting by the LLM, just the output of 100.
I want to break the chain when the CalculatorTool is done, and have its output returned to the client as-is. I also have tools that return serialized data for a graph chart; having that data re-processed by subsequent iterations of the agent will make it invalid.
Upvotes: 7
Views: 6388
Reputation: 5458
To end the chain when a LangChain Tool has completed running, the tool must be configured with the return_direct boolean flag (declared in BaseTool) set to True:
from typing import Union

from langchain.tools import BaseTool

class CustomTool(BaseTool):
    name: str = "Custom tool"
    description: str = "Custom tool for some task"
    return_direct: bool = True  # return the tool's raw output and stop the chain

    def _run(self, work_order_id: str):
        raise NotImplementedError("implement run function")

    def _arun(self, radius: Union[int, float]):
        raise NotImplementedError("This tool does not support async")
The chain will be terminated by the Agent using the tool when the tool is finished processing a given input; the tool's output is returned directly via AgentFinish.
Upvotes: 7