Sahil

Reputation: 11

Inconsistent JSON parsing when calling LangChain tools from a Structured Chat Agent

I'm using LangChain's Structured Chat Agent and tools to build an analytics assistant. One of the tools I've defined is get_social_insights, which fetches Instagram insights for the account. However, when I try to use this tool, I'm encountering inconsistent JSON parsing issues. Specifically:

When the get_social_insights tool is called, the agent's output is wrapped in extra backticks, like this:

   ```json
   ```json
   {
      "action": "get_social_insights",
      "action_input": {}
   }
   ```
   ```

But when other tools (like get_device_model_data or get_customer_chatbot_conversations) are called, the JSON formatting is correct:

   ```json
   {
      "action": "get_device_model_data",
      "action_input": {}
   }
   ```

As a result, I'm getting OutputParserException errors when trying to use the get_social_insights tool.
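
To make the failure concrete, my understanding is that the second set of backticks is what breaks the parsing. Here is a minimal illustration using plain `json.loads` (this is not the agent's actual parser; the two strings are just stand-ins for the outputs shown above):

```python
import json

# The well-formed output is valid JSON once the surrounding fence is stripped:
good = '{\n   "action": "get_device_model_data",\n   "action_input": {}\n}'
print(json.loads(good))  # parses fine

# The get_social_insights output still contains an extra set of backticks after
# the outer fence is removed, so it is no longer valid JSON:
bad = '```json\n{\n   "action": "get_social_insights",\n   "action_input": {}\n}\n```'
json.loads(bad)  # raises json.decoder.JSONDecodeError
```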

Here is the relevant part of the code:

from datetime import datetime

import requests
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import DateRange, Dimension, Metric, RunReportRequest
from langchain import hub
from langchain.agents import AgentExecutor, create_structured_chat_agent
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI

# LLM_MODEL, PROPERTY_ID, instagram_account_id and access_token are defined
# elsewhere in my configuration.

def initialize_llm():
    llm = ChatGoogleGenerativeAI(model=LLM_MODEL, temperature=0)
    return llm

@tool
def get_device_model_data(n: int = 10):
    """
    Fetches the data for device models.
    Args:
        n (int): The number of devices to return.
    """
    client = BetaAnalyticsDataClient()

    today = datetime.today().strftime('%Y-%m-%d')

    request = RunReportRequest(
        property=f"properties/{PROPERTY_ID}",
        dimensions=[Dimension(name="deviceModel")],
        metrics=[Metric(name="activeUsers")], 
        date_ranges=[DateRange(start_date="2024-01-01", end_date=today)],
        limit=n
    )

    response = client.run_report(request)

    device_data_str = "\n".join(
        f"{row.dimension_values[0].value} - {row.metric_values[0].value}"
        for row in response.rows
    )
    device_data_str = "Device Model - Active Users\n" + device_data_str

    return device_data_str

@tool
def get_social_insights(dummy: dict) -> str:
    """
    Fetches company's Instagram account insights.
    Args:
        dummy (dict): placeholder parameter to maintain consistency with tool schema.
    Returns:
        str: Formatted Instagram metrics
    """
    try:
        url = f'https://graph.facebook.com/{instagram_account_id}/insights'
        params = {
            'metric': 'impressions,reach,profile_views,total_interactions,profile_links_taps,website_clicks,accounts_engaged,saves,shares',
            'period': 'day',  
            'metric_type': 'total_value',
            'access_token': access_token
        }

        print("Calling Instagram API...")  # Debug print
        response = requests.get(url, params=params)
        print(f"Response status: {response.status_code}")  # Debug print

        if response.status_code == 200:
            data = response.json()
            metrics_str = "Instagram Metrics:\n"
            for item in data['data']:
                metrics_str += f"{item['title']}: {item['total_value']['value']}\n"
            return metrics_str
        else:
            error_msg = f"Error: {response.status_code}"
            print(error_msg)  # Debug print
            return error_msg

    except Exception as e:
        error_msg = f"Error fetching Instagram insights: {str(e)}"
        print(error_msg)  # Debug print
        return error_msg

def initialize_langchain_agent(llm, id):    

    memory = ChatMessageHistory(session_id=id)
    
    tools = [
        get_social_insights,
        get_device_model_data,
        # other tools
    ]
    prompt = hub.pull("admin-structured-chat-agent-prompt")
    agent = create_structured_chat_agent(llm, tools, prompt)

    agent_executor = AgentExecutor.from_agent_and_tools(
        agent=agent, 
        tools=tools, 
        handle_parsing_errors=True,
        verbose=True)
    
    agent_with_chat_history = RunnableWithMessageHistory(
        agent_executor,
        lambda session_id: memory,
        input_messages_key="input",
        history_messages_key="chat_history",
    )
    
    return agent_with_chat_history

def get_agent(id: str):
    llm = initialize_llm()
    return initialize_langchain_agent(llm, id)

agent_executor1 = get_agent("admin1")

while True:

    user_input = input("User: ")

    response = agent_executor1.invoke(
        {"input": user_input,
        "date": datetime.now().strftime('%Y-%m-%d'),},
        config={"configurable": {"session_id": "admin1"}},
    )

    print(response['output'])

How can I prevent this from happening? The LLM is messing up the tool call only for this tool; the others work fine. I also have another tool that takes a dummy input, and it works fine.
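
For reference, that other dummy-input tool looks roughly like this (a simplified sketch; I'm using the name of the working tool from the example above, and the body is illustrative; the point is only that it has the same kind of dummy parameter):

```python
# Simplified sketch of the other dummy-input tool that the agent calls without
# any parsing problems (real body omitted):
@tool
def get_customer_chatbot_conversations(dummy: dict) -> str:
    """
    Fetches recent customer chatbot conversations.
    Args:
        dummy (dict): placeholder parameter to keep the tool schema consistent.
    Returns:
        str: Formatted conversation summary.
    """
    return "Conversation summary..."
```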

I've tried the following to address the issue:

This used to work a week ago, but then I changed the API access token and it stopped working. The tool's API code also works fine when run on its own, so that's not the issue.

When the tool is called, nothing inside it is printed, which suggests the problem is with the tool call itself rather than the tool's body.
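
(By "run on its own" I mean invoking the @tool-decorated function directly, outside the agent, roughly like this:)

```python
# Calling the tool directly, outside the agent, returns the formatted metrics:
result = get_social_insights.invoke({"dummy": {}})
print(result)
```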

Any help is appreciated.

Upvotes: 1

Views: 45

Answers (0)
