Krishnang K Dalal

Suggestion to make ReAct LLM Agent stop the thought loop

I've created a ReAct agent that works well, but at times it goes into an endless thought loop even after arriving at the final answer. Sometimes it stops generating the response and cuts off mid-way. I'd appreciate suggestions on what strategies and practices make the agent's responses more robust. Below is my implementation:

# Imports assumed for this snippet; exact paths may vary with your LangChain version.
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.callbacks import StreamingStdOutCallbackHandler
from langchain_core.prompts import PromptTemplate
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.tools.pubmed.tool import PubmedQueryRun
from langchain_community.tools.semanticscholar.tool import SemanticScholarQueryRun
from langchain_huggingface import HuggingFaceEndpoint


def search_agent(memory):
    # DateTimeTool is a custom tool defined elsewhere; hf_api_key is loaded elsewhere.
    tools = [TavilySearchResults(max_results=5), DateTimeTool(), PubmedQueryRun(), SemanticScholarQueryRun()]
    callbacks = [StreamingStdOutCallbackHandler()]

    repo_id = "Qwen/Qwen2.5-72B-Instruct"

    template = """
    You are a helpful agent capable of answering input questions. Below is the history of the conversation so far:
    {chat_history}

    You have access to the following tools, which you can use to fetch information if required. Try different tools if you can't arrive at the final answer,
    but only use these tools when necessary.
    
    {tools}
    
    Once you have the final answer, do not perform further actions.

    Use the following format:

    Question: the input question you must answer
    Thought: you should always think about what to do
    Action: the action to take; if the answer is not found, it should be one of [{tool_names}]
    Action Input: the input to the action
    Observation: the result of the action
    ... (this Thought/Action/Action Input/Observation can repeat at most 5 times)
    Thought: I now know the final answer
    Final Answer: the final answer to the original input question.

    Begin!

    Question: {input}
    Thought:{agent_scratchpad}
    """

    search_prompt = PromptTemplate.from_template(template=template)
    # Alternative prompt: hub.pull("hwchase17/react")

    llm = HuggingFaceEndpoint(
        repo_id=repo_id,
        huggingfacehub_api_token=hf_api_key,
        # max_new_tokens=1000,
        top_k=30,
        # temperature=0.01,
        callbacks=callbacks,
        streaming=True,
    )

    agent = create_react_agent(llm, tools, search_prompt)

    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=False, handle_parsing_errors=True, memory=memory)
    return agent_executor
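
For reference, this is roughly how I call it. This is a minimal sketch: I'm assuming a ConversationBufferMemory whose memory_key matches the {chat_history} placeholder in the prompt, and the sample question is just a placeholder.

from langchain.memory import ConversationBufferMemory

# memory_key must match the {chat_history} variable used in the prompt template.
memory = ConversationBufferMemory(memory_key="chat_history")
agent_executor = search_agent(memory)

# Placeholder question for illustration only.
result = agent_executor.invoke({"input": "What are the latest treatments for type 2 diabetes?"})
print(result["output"])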

I've tried different prompt formats, but the agent still ends up in a loop before providing the final answer. Thanks a lot in advance for any help.
