shubham gaur

Reputation: 11

Facing an issue while using ConversableAgent in AutoGen

This is the Conversable Agent function:

def create_scoring_agent() -> ConversableAgent:
    """Creates an LLM-powered agent for extracting food and service scores from reviews."""
    scoring_agent_system_message = """
    You are an assistant that evaluates restaurant reviews. Your task is to:
    1. Identify adjectives describing food and customer service in the review.
    2. Assign scores based on the following rules:
       - Score 1: awful, horrible, disgusting
       - Score 2: bad, unpleasant, offensive
       - Score 3: average, uninspiring, forgettable
       - Score 4: good, enjoyable, satisfying
       - Score 5: awesome, incredible, amazing
    Return the result as a JSON object with "food_score" and "service_score".
    Example:
    Review: "The food was average, but the customer service was unpleasant."
    Output: {"food_score": 3, "service_score": 2}
    """
    return ConversableAgent(
        name="scoring_agent",
        system_message=scoring_agent_system_message,
        llm_config={
            "config_list": [
                {
                    "model": "gpt-4o-mini", 
                    "api_key": os.environ.get("OPENAI_API_KEY")  # Fetch API key from environment variable
                }
            ]
        }
    )

def analyze_reviews_with_agent(reviews: List[str]) -> Dict[str, List[int]]:
    """Uses an agent to analyze reviews and extract scores."""
    agent = create_scoring_agent()
    food_scores, service_scores = [], []

    for review in reviews:
        # Send the review to the agent for scoring
        user_message = f"Analyze this review: '{review}'. Extract food_score and service_score."
        print(user_message)
        response = agent.initiate_chat([{"role": "user", "content": user_message}])  # <-- this line fails
        print(response)

        # Parse the agent's response
        try:
            result = eval(response["content"])  # Convert string to dictionary
            food_scores.append(result["food_score"])
            service_scores.append(result["service_score"])
        except Exception as e:
            print(f"Error processing review: {review}. Error: {e}")

    return {"food_scores": food_scores, "customer_service_scores": service_scores}

I was just initiating the agent chat so that it calls GPT with the given prompt, but it fails inside the analyze_reviews_with_agent function.

Upvotes: 1

Views: 182

Answers (1)

Mohammad Talaei

Reputation: 412

When an LLM agent starts a conversation in the AutoGen framework, it needs another agent to talk to. According to the AutoGen docs, the initiate_chat method requires a recipient, which must be an instance of ConversableAgent. It is also good practice to specify the message, max_turns, and summary_method arguments.

Here is an example of the sequential conversation pattern in AutoGen, which I assume is the one you intended to use:

import os
from autogen import ConversableAgent

llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": os.environ.get("OPENAI_API_KEY")}]}

# The *_system_message variables below are placeholders; define them for your use case.
# The main entrypoint/supervisor agent
entrypoint_agent = ConversableAgent("entrypoint_agent",
                                    system_message=entrypoint_agent_system_message,
                                    llm_config=llm_config)
# Other agents
data_fetch_agent = ConversableAgent("data_fetch_agent",
                                    system_message=data_fetch_agent_system_message,
                                    llm_config=llm_config)
review_analysis_agent = ConversableAgent("review_analysis_agent",
                                         system_message=review_analysis_agent_system_message,
                                         llm_config=llm_config)
scoring_agent = ConversableAgent("scoring_agent",
                                 system_message=scoring_agent_system_message,
                                 llm_config=llm_config)

result = entrypoint_agent.initiate_chats(
    [
        {
            "recipient": data_fetch_agent,
            "message": f"The question is: {user_query}",
            "max_turns": 2,
            "summary_method": "last_msg",
        },
        {
            "recipient": review_analysis_agent,
            "message": "These are the reviews",
            "max_turns": 1,
            "summary_method": "reflection_with_llm",
            "summary_args": {"summary_prompt": "some prompt"},
        },
        {
            "recipient": scoring_agent,
            "message": "These are raw scores",
            "max_turns": 2,
            "summary_method": "last_msg",
        },
    ]
)
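Applied to your original scoring loop, a minimal sketch could pair a non-LLM user proxy as the sender with the scoring agent as the required recipient, and parse the reply with json.loads rather than eval (safer on model output). This is only an illustration under assumptions: pyautogen is installed, OPENAI_API_KEY is set, and the agent names and the parse_scores helper are mine, not from your code.

```python
import json
import os


def parse_scores(raw: str) -> dict:
    """Safely parse the agent's JSON reply; raises on malformed output."""
    result = json.loads(raw)
    return {"food_score": int(result["food_score"]),
            "service_score": int(result["service_score"])}


def analyze_reviews(reviews):
    """Sketch: score each review via a two-agent chat (assumes pyautogen
    is installed and OPENAI_API_KEY is set in the environment)."""
    from autogen import ConversableAgent

    llm_config = {"config_list": [{"model": "gpt-4o-mini",
                                   "api_key": os.environ.get("OPENAI_API_KEY")}]}
    scoring_agent = ConversableAgent(
        "scoring_agent",
        system_message="...",  # use the scoring system message from the question
        llm_config=llm_config)
    # A non-LLM proxy that only relays messages: it is the sender,
    # and the scoring agent is the recipient that initiate_chat requires.
    user_proxy = ConversableAgent("user_proxy", llm_config=False,
                                  human_input_mode="NEVER")

    scores = []
    for review in reviews:
        chat_result = user_proxy.initiate_chat(
            recipient=scoring_agent,  # the missing recipient from the question
            message=f"Analyze this review: '{review}'.",
            max_turns=1,
            summary_method="last_msg",
        )
        scores.append(parse_scores(chat_result.summary))
    return scores
```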

Upvotes: 0
