Loc

Reputation: 11

How to get the model's final text response when using few-shot prompting with function calling in LangChain and the GPT API?

I referred to an example from LangChain about using few-shot prompting along with function calling for my project. The following code snippet is taken from LangChain's documentation:

from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Adds a and b.

    Args:
        a: first int
        b: second int
    """
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b.

    Args:
        a: first int
        b: second int
    """
    return a * b

tools = [add, multiply]
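(Side note: since @tool just wraps ordinary functions, the arithmetic itself can be sanity-checked in plain Python, without LangChain or an API key. The standalone functions below mirror the decorated ones above:)

```python
# Plain-Python versions of the two tool bodies, for sanity-checking
# the arithmetic in isolation.
def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

print(multiply(119, 8))            # 952
print(add(multiply(119, 8), -20))  # 932 ("minus 20" expressed as adding -20)
```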

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
llm_with_tools = llm.bind_tools(tools)

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

examples = [
    HumanMessage(
        "What's the product of 317253 and 128472 plus four", name="example_user"
    ),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[
            {"name": "multiply", "args": {"x": 317253, "y": 128472}, "id": "1"}
        ],
    ),
    ToolMessage("16505054784", tool_call_id="1"),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[{"name": "add", "args": {"x": 16505054784, "y": 4}, "id": "2"}],
    ),
    ToolMessage("16505054788", tool_call_id="2"),
    AIMessage(
        "The product of 317253 and 128472 plus four is 16505054788",
        name="example_assistant",
    ),
]

  system = """You are bad at math but are an expert at using a calculator.
  Use past tool usage as an example of how to correctly use the tools."""
  few_shot_prompt = ChatPromptTemplate.from_messages(
                    [
                      ("system", system),
                      *examples,
                      ("human", "{query}"),
                     ]
                     )

  chain = {"query": RunnablePassthrough()} | few_shot_prompt | llm_with_tools
  chain.invoke("Whats 119 times 8 minus 20")

But the model's output is shown below:

AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_vjHdn3CaykuJlxu1YEDfQXFd', 'function': {'arguments': '{"a":119,"b":8}', 'name': 'multiply'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 242, 'total_tokens': 259}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-68b3d1e3-84fe-40da-bca2-2dbdb5845e43-0', tool_calls=[{'name': 'multiply', 'args': {'a': 119, 'b': 8}, 'id': 'call_vjHdn3CaykuJlxu1YEDfQXFd', 'type': 'tool_call'}], usage_metadata={'input_tokens': 242, 'output_tokens': 17, 'total_tokens': 259})

Why is the model not providing any response? Why is the content an empty string?

I tried testing again with a simple "hello" to see if there was any issue.

chain.invoke("hello")
# Output: AIMessage(content='Hello, I am a computation expert. Do you need any assistance?', ...)

I don't understand where I went wrong. How can I display the result of the computation when using the few-shot prompting technique?
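(A note on what the output above encodes: when finish_reason is 'tool_calls', the empty content is expected; the tool_calls field carries the arguments the model wants the caller to execute. The plain-Python sketch below runs those calls locally, with no API key; run_tool_calls is an illustrative helper, not a LangChain API.)

```python
# Sketch: executing the tool calls reported in the AIMessage above,
# in plain Python. Normally each result would be wrapped in a
# ToolMessage(tool_call_id=...) and sent back to the model so it can
# produce the final text answer.
def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

TOOLS = {"add": add, "multiply": multiply}

def run_tool_calls(tool_calls):
    """Run each requested tool and collect (call id, result) pairs."""
    return [(c["id"], TOOLS[c["name"]](**c["args"])) for c in tool_calls]

# tool_calls as reported in the output above:
calls = [{"name": "multiply", "args": {"a": 119, "b": 8},
          "id": "call_vjHdn3CaykuJlxu1YEDfQXFd"}]
print(run_tool_calls(calls))  # [('call_vjHdn3CaykuJlxu1YEDfQXFd', 952)]
```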

Upvotes: 0

Views: 72

Answers (1)

Arindam

Reputation: 81

Hmm, the code looks fine.

Can you try removing `*examples` entirely from the prompt template definition? Just keep the system and human messages.

Report back on what happens.

Upvotes: 0
