Matan Dobrushin

Reputation: 195

OpenAI Completions API response content is not full

I have a Python script that uses the official OpenAI library, which I'm using to generate some scripts from a specific recipe I give it.

I thought the problem was that I wasn't streaming the response, so now it also uses the stream=True option.

Now, if the response is large (more than ~515 chars), the response is just cut off halfway.

Any ideas?

Here is a snippet (not real code):

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You will be provided with some instructions for logic to implement in Windows Batch. "
                           "Please make it machine readable without explanations"
            },
            {
                "role": "user",
                "content": f'instructions: {instructions}'
            }
        ],
        temperature=0,
        max_tokens=512,  # caps the length of the generated output
        top_p=1,
        stream=True
    )
    command = ''
    for chunk in response:
        delta = chunk.choices[0].delta.content
        if delta is not None:  # the final streamed chunk has content=None
            command += delta
    print(command)
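One thing worth ruling out: the snippet sets max_tokens=512, and the API reports when a response was cut off by that cap via finish_reason == "length" on the last streamed chunk. A minimal sketch of accumulating the stream while capturing finish_reason (the collect_stream helper and the mock-chunk shape are illustrative, not part of the original code; real chunks come from the OpenAI stream):

```python
def collect_stream(chunks):
    """Accumulate streamed chat-completion chunks into full text.

    Returns (text, finish_reason). finish_reason == "length" means the
    response was truncated by the max_tokens limit; "stop" means the
    model finished on its own.
    """
    text = []
    finish_reason = None
    for chunk in chunks:
        choice = chunk.choices[0]
        if choice.delta.content is not None:  # final chunk has content=None
            text.append(choice.delta.content)
        if choice.finish_reason is not None:
            finish_reason = choice.finish_reason
    return ''.join(text), finish_reason
```

With the streamed `response` from the question, `command, reason = collect_stream(response)` would tell you whether the ~515-char cut-off coincides with `reason == "length"`, in which case raising max_tokens is the fix rather than anything streaming-related.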

Upvotes: 0

Views: 480

Answers (2)

Add flush=True to your print statement:

    client = OpenAI(api_key=key)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        stream=True,
        messages=[
            {
                "role": "user",
                "content": "count to 500"
            }
        ]
    )
    for chunk in response:
        content = chunk.choices[0].delta.content
        if content is not None:  # the final chunk has content=None
            print(content, flush=True, end='')

Upvotes: 0

Matan Dobrushin

Reputation: 195

Update: it looks like an issue with GPT-4. The solution for me was simply to switch to model="gpt-3.5-turbo-1106".

Upvotes: 0
