Julian

Reputation: 1

AzureChatOpenAI only uses one tool at a time

LangChain with AzureChatOpenAI only ever calls one tool at a time.

When prompting the model to multiply one pair of numbers and add another, I expect two tool calls. However, only one tool is ever called, and there is no obvious pattern to which of the two gets picked.

I'm following this example from the official LangChain Documentation.

from langchain_core.tools import tool
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_openai import AzureChatOpenAI

# api_base, api_key, api_version, and chat_deployment_name are set elsewhere.
model = AzureChatOpenAI(
    azure_endpoint=api_base,
    openai_api_key=api_key,
    api_version=api_version,
    deployment_name=chat_deployment_name,
    temperature=0.3,
)

@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b


tools = [add, multiply]
llm_with_tools = model.bind_tools(tools)

query = "multiply 34 and 79. Also add 2 and 7."

messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)
for tool_call in ai_msg.tool_calls:
    selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
    tool_msg = selected_tool.invoke(tool_call)
    messages.append(tool_msg)

messages.append(llm_with_tools.invoke(messages))
for msg in messages:
    print(msg, "\n")

Actual output:

[HumanMessage(content='multiply 34 and 79. Also add 2 and 7.'), 
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6vEJMVX4TBHwRhqb1NQUoUZs', 'function': {'arguments': '{\n  "a": 34,\n  "b": 79\n}', 'name': 'multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 87, 'total_tokens': 108}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'tool_calls', 'logprobs': None, 'content_filter_results': {}}, id='run-a8889a20-acf8-471d-aa78-73cea8b8c2de-0', tool_calls=[{'name': 'multiply', 'args': {'a': 34, 'b': 79}, 'id': 'call_6vEJMVX4TBHwRhqb1NQUoUZs', 'type': 'tool_call'}], usage_metadata={'input_tokens': 87, 'output_tokens': 21, 'total_tokens': 108}), 
ToolMessage(content='2686', name='multiply', tool_call_id='call_6vEJMVX4TBHwRhqb1NQUoUZs')]

Upon invoking the model with the above messages a second time, the response is still missing the second answer and has empty content. Interestingly, however, it does now make the second function call:

llm_with_tools.invoke(messages)

content='' additional_kwargs={'tool_calls': [{'id': 'call_TB6Ioax1ExQF3lHbmLmrBtdF', 'function': {'arguments': '{"a": 2, "b": 7}', 'name': 'add'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 113, 'total_tokens': 130}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'tool_calls', 'logprobs': None, 'content_filter_results': {}} id='run-78784e10-497b-4444-886b-f37e5113fd85-0' tool_calls=[{'name': 'add', 'args': {'a': 2, 'b': 7}, 'id': 'call_TB6Ioax1ExQF3lHbmLmrBtdF', 'type': 'tool_call'}] usage_metadata={'input_tokens': 113, 'output_tokens': 17, 'total_tokens': 130}
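Generalizing that observation, if I loop until the model stops requesting tools, both operations do eventually complete. A minimal sketch reusing query, tools, and llm_with_tools from above:

messages = [HumanMessage(query)]
tool_map = {"add": add, "multiply": multiply}

while True:
    ai_msg = llm_with_tools.invoke(messages)
    messages.append(ai_msg)
    if not ai_msg.tool_calls:
        break  # the model produced a final answer instead of a tool call
    for tool_call in ai_msg.tool_calls:
        # Execute each requested tool and feed the ToolMessage back in.
        tool_msg = tool_map[tool_call["name"].lower()].invoke(tool_call)
        messages.append(tool_msg)

print(messages[-1].content)

This works, but it costs one extra round trip per tool call, so I'd still like to understand why the calls aren't batched into a single response.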

Version Info: Python 3.12.3

langchain                          0.2.6
langchain-community                0.2.6
langchain-core                     0.2.23
langchain-openai                   0.1.17
langchain-text-splitters           0.2.2

Has anyone else experienced this and can point me in the right direction? Thanks a lot.

Upvotes: 0

Views: 285

Answers (1)

jowid

Reputation: 1

I had a similar issue with GPT-3.5; switching to GPT-4o resolved it for me. I'm not sure why, but it worked.
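If staying on GPT-3.5 is a requirement, it may also be worth checking the deployed model version: as far as I know, parallel tool calling needs a gpt-35-turbo version of 1106 or newer. Recent langchain-openai releases also expose a parallel_tool_calls flag on bind_tools; the sketch below assumes your installed version supports it.

# Assumption: requires a langchain-openai version whose bind_tools accepts
# parallel_tool_calls, and a deployment new enough to honor the flag.
llm_with_tools = model.bind_tools(tools, parallel_tool_calls=True)
ai_msg = llm_with_tools.invoke("multiply 34 and 79. Also add 2 and 7.")
print(len(ai_msg.tool_calls))  # expect 2 when both calls arrive in one turn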

Upvotes: 0
