LJF 2525

Reputation: 49

OpenAI GPT-4 API: Why does gpt-4-0613 hallucinate (make up) function parameters?

I'm using the gpt-4-0613 model, with a single function, and some custom data in the system prompt.

If the function is triggered early in the chat, within the first two requests, it works just fine: the API asks the user for the information required to call the function.

However, if the function is triggered later in the conversation, say around question 5, the API just makes up the parameter values and sends back the function call.

How can I stop the model from making up these values? There is no way for it to get them from the conversation context. They are all 100% made up.

import openai

# prompts is the conversation history (system prompt plus user/assistant messages)
completion = openai.ChatCompletion.create(
    model='gpt-4-0613',
    messages=prompts,
    functions=[
    {
        "name": "fill_form",
        "description": "Helps the user create an XYZ Report",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {
                    "type": "string",
                    "description": "the full name of the person issuing this report"
                },
                "zip": {
                    "type": "string",
                    "description": "the 5 digit zip code of the address"
                },
                "address": {
                    "type": "string",
                    "description": "the street address, only the street and not the city, state or zip"
                },
                "year_end": {
                    "type": "string",
                    "description": "the full four digit year of the fiscal year"
                },
            },
            "required": ["name", "address", "year_end", "zip"]
        }
    }],
)

I've tried with and without the

function_call='auto'

option, with no effect.

Thank you for any help.

The model should always ask the user for the function's parameter values and never make them up.

Upvotes: 4

Views: 2757

Answers (1)

Rok Benko

Reputation: 22920

OpenAI acknowledges that the model may hallucinate (i.e., make up) function parameters, as stated in the official OpenAI documentation:

The basic sequence of steps for function calling is as follows:

  1. Call the model with the user query and a set of functions defined in the functions parameter.
  2. The model can choose to call a function; if so, the content will be a stringified JSON object adhering to your custom schema (note: the model may generate invalid JSON or hallucinate parameters).
  3. Parse the string into JSON in your code, and call your function with the provided arguments if they exist.
  4. Call the model again by appending the function response as a new message, and let the model summarize the results back to the user.
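
To make those steps concrete, here is a minimal sketch of that loop in Python (OpenAI Python SDK 0.x, matching the examples below). Note that fill_form here is a hypothetical local implementation, and messages/functions stand for the conversation history and the function list from the question:

import json
import openai

# Hypothetical local implementation of the function exposed to the model
def fill_form(name, zip, address, year_end):
    return json.dumps({"status": "report created", "name": name})

# Step 1: call the model with the conversation and the function definitions
response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=messages,
    functions=functions,
    function_call="auto",
)
message = response.choices[0].message

# Step 2: the model may decide to call the function; the arguments are a
# stringified JSON object and may be invalid or hallucinated, so parse defensively
if message.get("function_call"):
    try:
        args = json.loads(message["function_call"]["arguments"])
    except json.JSONDecodeError:
        args = None

    if args is not None:
        # Step 3: call the local function with the provided arguments
        result = fill_form(**args)

        # Step 4: append the function result and let the model summarize it
        messages.append(message)
        messages.append({"role": "function", "name": "fill_form", "content": result})
        followup = openai.ChatCompletion.create(
            model="gpt-4-0613",
            messages=messages,
            functions=functions,
        )
        print(followup.choices[0].message.content)
else:
    print(message.content)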

This can be mitigated with a system message, as stated in the official OpenAI documentation:

Hallucinated outputs in function calls can often be mitigated with a system message. For example, if you find that a model is generating function calls with functions that weren't provided to it, try using a system message that says: "Only use the functions you have been provided with."

Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages, as stated in the official OpenAI documentation. You can add a system message as follows:

Python

import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")

chatCompletion = openai.ChatCompletion.create(
  model = "gpt-3.5-turbo-0613",
  messages = [
        {"role": "system", "content": "Only use the functions you have been provided with."}
    ]
)

print(chatCompletion.choices[0].message.content)

NodeJS

Note: OpenAI NodeJS SDK v4 was released on August 16, 2023, and is a complete rewrite of the SDK. The code below differs depending on the version you currently have. See the v3 to v4 migration guide.

• If you have the OpenAI NodeJS SDK v3:

const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(configuration);

const chatCompletion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo-0613",
  messages: [{"role": "system", "content": "Only use the functions you have been provided with."}],
});

console.log(chatCompletion.data.choices[0].message.content);

• If you have the OpenAI NodeJS SDK v4:

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo-0613",
  messages: [{"role": "user", "content": "Only use the functions you have been provided with."}],
});

console.log(chatCompletion.choices[0].message.content);
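
Applied to the code in the question, this means prepending the system message to the existing prompts list while keeping the function definition. Here is a sketch (functions stands for the fill_form list defined in the question, and the exact system message wording is only a suggestion):

import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")

system_message = {
    "role": "system",
    "content": "Only use the functions you have been provided with. "
               "If a required parameter value is missing, ask the user for it instead of guessing."
}

completion = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[system_message] + prompts,   # system message first, then the conversation
    functions=functions,                   # the fill_form definition from the question
    function_call="auto",
)

If prompts already starts with a system message (the question mentions custom data in the system prompt), append the instruction to that message's content instead of adding a second system message.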

Also, try forcing the model to call a specific function by setting function_call: {"name": "<insert-function-name>"}, as stated in the official OpenAI documentation:

[Screenshot of the relevant section of the official OpenAI documentation describing the function_call parameter]

In your case, it would look like this:

import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model = "gpt-4-0613",
    messages = prompts,                       # your conversation history
    functions = functions,                    # the fill_form definition from your snippet
    function_call = {"name": "fill_form"},    # force the model to call fill_form
)

# With a forced function call, the reply is in message.function_call, not message.content
print(completion.choices[0].message.function_call)

Upvotes: 4
