Lokesh Joshi

Reputation: 21

LangChain error: "Groq does not currently support tool_choice='any'. Should be one of 'auto', 'none', or the name of the tool to call."

Here's my issue:

I was using the Groq API with the Llama 3.1 70B model to implement agentic RAG with LangGraph. For that, I had to install the langchain_groq library; I installed the latest version with pip install -U langchain_groq. I then used the following code to load the model into a variable:

from langchain_groq import ChatGroq

llm = ChatGroq(api_key=groq_api_key, model_name="llama-3.1-70b-versatile", temperature=0.3)

I then implemented agentic RAG with LangGraph, giving the agent 3 tools to choose from:

tools=[
    fetch_MOM_Docs,
    fetch_Action_Tracker,
    final_answer
]
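For context, the tools themselves are plain LangChain tools. A simplified sketch of how they are defined (placeholder bodies and signatures only, not the real implementations):

from langchain_core.tools import tool

@tool
def fetch_MOM_Docs(query: str) -> str:
    """Fetch MOM documents relevant to the query."""
    # Placeholder body; the real tool queries a document store.
    return "MOM documents for: " + query

@tool
def fetch_Action_Tracker(query: str) -> str:
    """Fetch action tracker entries relevant to the query."""
    # Placeholder body; the real tool queries the action tracker.
    return "Action tracker entries for: " + query

@tool
def final_answer(answer: str) -> str:
    """Return the final answer to the user."""
    return answer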

When I bind the tools with llm.bind_tools(tools, tool_choice='any'), I get the following error:

ValueError: Groq does not currently support tool_choice='any'. Should be one of 'auto', 'none', or the name of the tool to call.

When I checked the code of langchain_groq's bind_tools function in the library, I saw that it does not support the "any" keyword for Groq, while "any" is supported for OpenAI and other providers.

However, when I checked LangChain's documentation, the 'Note' section clearly states that the "any" keyword is supported for Groq.

Here's the link to the documentation: https://python.langchain.com/v0.1/docs/modules/model_io/chat/function_calling/

I want to use Groq for this because it is free and fast. Am I missing something, or is the LangChain library not updated to match the documentation?

How can I work around this issue? I have tried the "auto" keyword, but on some prompts the agent then does not use any of the tools, while I want it to use at least one of them.

How can I make it choose one of the tools to retrieve information, based on its own decision-making, without ending its cycle before using any tool?

Upvotes: 0

Views: 565

Answers (1)

Rok Benko

Reputation: 23050

You're looking at the LangChain 0.1 documentation while you're using the LangChain 0.2 SDK.

To check your LangChain Python SDK version, run the following command in the terminal:

pip show langchain
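The Groq integration ships as a separate package, so it is worth checking that one as well (same command, package name as you installed it):

pip show langchain_groq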

For LLM provider specifics, see the LangChain 0.2 API reference.

A further look at LangChain's 0.2 Groq integration shows that the error message you're getting is expected. The tool_choice parameter of the bind_tools() function accepts the following (source):

bind_tools(tools: Sequence[Dict[str, Any] | Type[BaseModel] | Callable | BaseTool], *, tool_choice: dict | str | Literal['auto', 'any', 'none'] | bool | None = None, **kwargs: Any) → Runnable[PromptValue | str | Sequence[BaseMessage | List[str] | Tuple[str, str] | str | Dict[str, Any]], BaseMessage][source]

Bind tool-like objects to this chat model.

Parameters:

  • tools (Sequence[Dict[str, Any] | Type[BaseModel] | Callable | BaseTool]) – A list of tool definitions to bind to this chat model. Supports any tool definition handled by langchain_core.utils.function_calling.convert_to_openai_tool().

  • tool_choice (dict | str | Literal['auto', 'any', 'none'] | bool | None) – Which tool to require the model to call. Must be the name of the single provided function, “auto” to automatically determine which function to call with the option to not call any function, “any” to enforce that some function is called, or a dict of the form: {“type”: “function”, “function”: {“name”: <<tool_name>>}}.

  • **kwargs (Any) – Any additional parameters to pass to the Runnable constructor.

Return type:

Runnable[PromptValue | str | Sequence[BaseMessage | List[str] | Tuple[str, str] | str | Dict[str, Any]], BaseMessage]

As you can see, you need to set the tool_choice parameter to "any" if you want to enforce that some function is called when using LangChain's 0.2 Groq integration.
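With the llm and tools from your question, the call would look something like this (a minimal sketch based on the signature above, untested against your exact setup; the sample prompt is only illustrative):

from langchain_core.messages import HumanMessage

# Force the model to call at least one of the bound tools.
llm_with_tools = llm.bind_tools(tools, tool_choice="any")

response = llm_with_tools.invoke([HumanMessage(content="What are the open action items?")])
print(response.tool_calls)  # should contain at least one tool call

If you instead want to force one specific tool, both your error message and the parameter description above say you can pass the tool's name, e.g. tool_choice="fetch_MOM_Docs", or the equivalent {"type": "function", "function": {"name": "fetch_MOM_Docs"}} dict.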

Upvotes: 0
