MMsmithH

Reputation: 447

Autogen connecting to config file errors

I'm getting errors connecting to the config file for my models when using autogen, and the errors seem to depend on which Python script I am executing.

My goal is to have one config file that will work for any autogen project.

I'm able to execute a basic conversation between the assistant and user_proxy agents, but adapting the Python script to the AgentBuilder demo triggers this error:

    docker run -it --rm autogen-project
    Traceback (most recent call last):
      File "autogen_agentbuilder.py", line 6, in <module>
        config_list = autogen.config_list_from_json(config_path)
      File "/usr/local/lib/python3.8/site-packages/autogen/oai/openai_utils.py", line 458, in config_list_from_json
        with open(config_list_path) as json_file:
    FileNotFoundError: [Errno 2] No such file or directory: 'OAI_CONFIG_LIST.json'
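The error comes straight from autogen trying to open the file, so a quick sanity check like the one below (a debugging sketch I could drop into the script; the file names are taken from my setup and from the traceback) would confirm whether the config file is actually present in the container's working directory:

    import os

    # Hypothetical debugging snippet (not part of the original script):
    # print the working directory and check both spellings of the config
    # file name that appear in my setup and in the traceback.
    print("working directory:", os.getcwd())
    for candidate in ("OAI_CONFIG_LIST", "OAI_CONFIG_LIST.json"):
        print(candidate, "exists:", os.path.exists(candidate))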

To better isolate the issue, I ran the code in Docker (thinking this would rule out errors caused by my local machine).

I used code provided in two demos on YouTube. The first demo had the assistant and user_proxy agents write some code inside Docker. I followed the implementation step by step and it worked.

But then I tried to use the same setup with code adapted from another demo that runs AgentBuilder(), and this failed, since the second demo's code had a somewhat different setup.

Working.py

    import autogen

    # import OpenAI API key
    config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")

    # create the assistant agent
    assistant = autogen.AssistantAgent(
        name="assistant", llm_config={"config_list": config_list}
    )

    # create the user proxy agent
    user_proxy = autogen.UserProxyAgent(
        name="UserProxy", code_execution_config={"work_dir": "results"}
    )

    # start the conversation
    user_proxy.initiate_chat(
        assistant, message="Write a code to print odd numbers from 2 to 100."
    )

OAI_CONFIG_LIST

    [
        {
            "model": "gpt-3.5-turbo",
            "api_key": "Ap-12345678912234455"
        }
    ]
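Judging from the autogen source shown in the traceback, config_list_from_json seems to first look for an environment variable named after env_or_file and only then fall back to opening a file of that name, which would be how Working.py finds my OAI_CONFIG_LIST file in the working directory. A rough sketch of that lookup order (my own simplification, not the library's actual code):

    import json
    import os

    def load_config_list(env_or_file, file_location=""):
        # Simplified illustration of the lookup order as I understand it:
        # 1. if an environment variable with this name is set, parse its value as JSON;
        # 2. otherwise open a file with this name relative to file_location.
        env_value = os.environ.get(env_or_file)
        if env_value:
            return json.loads(env_value)
        with open(os.path.join(file_location, env_or_file)) as json_file:
            return json.load(json_file)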

Error.py

    import autogen
    # AgentBuilder import as used in the demo I followed
    from autogen.agentchat.contrib.agent_builder import AgentBuilder

    # import OpenAI API key
    config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")
    default_llm_config = {'temperature': 0}

    # 2. Initializing Builder
    builder = AgentBuilder(config_path=config_path)

    # 3. Building agents
    building_task = "Find a paper on arxiv by programming, and analyze its application in some domain..."
    agent_list, agent_configs = builder.build(building_task, default_llm_config)

    # 4. Multi-agent group chat
    group_chat = autogen.GroupChat(agents=agent_list, messages=[], max_round=12)
    manager = autogen.GroupChatManager(groupchat=group_chat, llm_config={"config_list": config_list, **default_llm_config})
    agent_list[0].initiate_chat(
        manager,
        message="Find a recent paper about gpt-4 on arxiv..."
    )
After the template code triggered the config-file error, I tried to adapt the Python code to the version above so that it would mirror Working.py, but I still encountered the error. The only things I changed were the LLM model (from gpt-3.5-turbo to gpt-4) and adding default_llm_config = {'temperature': 0} as the third line of the configuration section. The template's original configuration was:

    # 1. Configuration
    config_path = 'OAI_CONFIG_LIST.json'
    config_list = autogen.config_list_from_json(config_path)
    default_llm_config = {'temperature': 0}

Then I changed it to mirror Working.py:

    # import OpenAI API key
    config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")
    default_llm_config = {'temperature': 0}

OAI_CONFIG_LIST

    [
        {
            "model": "gpt-4",
            "api_key": "Ap-12345678912234455"
        }
    ]
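Since the list now only contains a gpt-4 entry, I also considered filtering the loaded list explicitly by model, along these lines (a sketch based on my reading of the docs; the filter_dict argument is my assumption about the installed autogen version):

    import autogen

    # Sketch: load the same OAI_CONFIG_LIST and keep only gpt-4 entries.
    # filter_dict is the optional filter argument of config_list_from_json
    # as I understand it; adjust if the installed version differs.
    config_list = autogen.config_list_from_json(
        env_or_file="OAI_CONFIG_LIST",
        filter_dict={"model": ["gpt-4"]},
    )
    print(config_list)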

I can't understand why virtually the same code, pulling in the same config file values, triggers an error in one script but not the other.


Upvotes: 2

Views: 836

Answers (0)
