esllmllama

Reputation: 1

Completions.create() got an unexpected keyword argument 'request_timeout'

I am using AutoGen from Microsoft with the code below:

import autogen
from autogen import AssistantAgent, UserProxyAgent

config_list = [
    {
        'model': 'gpt-4',
        'api_key': 'API_KEY'
    }
]

llm_config={
    "request_timeout": 600,
    "seed": 42,
    "config_list": config_list,
    "temperature": 0
}

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
    system_message="Chief technical officer of a tech company"
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    llm_config=llm_config,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet."""
)

task = """
Write python code to output numbers 1 to 100
"""

user_proxy.initiate_chat(
    assistant,
    message=task
)

When I try to run the Python script, it gives me this error:

Completions.create() got an unexpected keyword argument 'request_timeout'

[autogen.oai.client: 09-05 14:32:12] {164} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
[autogen.oai.client: 09-05 14:32:12] {164} WARNING - The API key specified is not a valid OpenAI format; it won't work with the OpenAI-hosted model.
user_proxy (to assistant):


Write python code to output numbers 1 to 100


--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\Users\HP\Desktop\prj\autogen-ve\Scripts\runningBots.py", line 42, in <module>

How to resolve this?

Upvotes: -1

Views: 1045

Answers (2)

Yash Kalia

Reputation: 1

from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import DockerCommandLineCodeExecutor

config_list = [
    {
        'model': 'gpt-4o',
        'api_key': 'YOUR_API_KEY'
    }
]


llm_config = {
    "config_list": config_list,
    # "temperature": 0,
}


# new code
# create an AssistantAgent instance named "assistant" with the LLM configuration.
assistant = AssistantAgent(name="assistant", llm_config=llm_config)

# create a UserProxyAgent instance named "user_proxy" with code execution on docker.
code_executor = DockerCommandLineCodeExecutor()
user_proxy = UserProxyAgent(name="user_proxy", code_execution_config={"executor": code_executor})


user_proxy.initiate_chat(
    assistant,
    message="""What date is today? Which big tech stock has the largest year-to-date gain this year? How much is the gain?""",
)

Update your autogen version if you want to run similar code.
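Note that DockerCommandLineCodeExecutor needs a running Docker daemon. A minimal sketch of the same setup with an explicit work_dir and cleanup, assuming pyautogen 0.2+ (the image name, timeout, and directory are only illustrative):

# pip install -U pyautogen   # upgrade first if autogen.coding is missing
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import DockerCommandLineCodeExecutor

code_executor = DockerCommandLineCodeExecutor(
    image="python:3-slim",  # any image with a Python interpreter works
    timeout=60,             # seconds before a code run is aborted
    work_dir="coding",      # host directory mounted into the container
)

assistant = AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={"executor": code_executor},
)

user_proxy.initiate_chat(assistant, message="Write python code to output numbers 1 to 100")

code_executor.stop()  # release the Docker container when finished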

Upvotes: 0

esllmllama

Reputation: 1

Please change your code to the following in order to make it work:

config_list = [
    {
        'model': 'gpt-4o',
        'api_key': 'API_KEY_HERE'
    }
]

llm_config = {"config_list": config_list}

After this it may throw an error saying you do not have access to the model; in that case you need to make a minimum payment of $5 to OpenAI to unlock access to it.
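If you still want the long timeout and fixed seed from the original code, AutoGen 0.2+ renamed those options (request_timeout became timeout, and seed became cache_seed, per the 0.2 migration notes); a minimal sketch of the equivalent config, assuming pyautogen 0.2+ with the openai v1 client:

config_list = [
    {
        'model': 'gpt-4',
        'api_key': 'API_KEY'
    }
]

llm_config = {
    "config_list": config_list,
    "timeout": 600,     # replaces the removed request_timeout
    "cache_seed": 42,   # replaces seed for response caching
    "temperature": 0
}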

Upvotes: -1
