MirrahOH

Reputation: 51

Is there a way to integrate vector embeddings into a LangChain agent?

I'm trying to use LangChain's ReAct agents and I want to give them my Pinecone index for context. I couldn't find any interface that lets me provide my vector embeddings to the LLM that runs the ReAct chain.

Here I set up the LLM and the retriever over my vector embeddings.

llm = ChatOpenAI(temperature=0.1, model_name="gpt-4")
retriever = vector_store.as_retriever(search_type='similarity', search_kwargs={'k': k})

Here I build and run my ReAct agent.

prompt = hub.pull("hwchase17/structured-chat-agent")
agent = create_structured_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke(
    {
        "input": question,
        "chat_history": chat_history
    }
)

Before switching to the ReAct agent, I used the vector embeddings like this.

crc = ConversationalRetrievalChain.from_llm(llm, retriever)
result = crc.invoke({'question': systemPrompt, 'chat_history': chat_history})
chat_history.append((question, result['answer']))

Is there any way to combine both approaches, so that the ReAct agent also uses my vector embeddings?

Upvotes: 2

Views: 677

Answers (1)

Andrew Nguonly

Reputation: 2621

You can specify the retriever as a tool for the agent. Example:

from langchain.tools.retriever import create_retriever_tool


retriever = vector_store.as_retriever(search_type='similarity', search_kwargs={'k': k})

# Wrap the retriever in a tool. The name and description are what the
# agent's LLM reads when deciding whether to call the tool, so make the
# description specific about what the index contains and when to use it.
retriever_tool = create_retriever_tool(
    retriever,
    "retriever_name",
    "A detailed description of the retriever and when the agent should use it.",
)

# Pass the retriever tool to the agent alongside any other tools.
tools = [retriever_tool]

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
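For intuition, a retriever tool is nothing more than the retriever's search call wrapped in a named, described callable that the agent can choose to invoke mid-reasoning. A minimal pure-Python sketch of the idea (hypothetical names and a toy in-memory retriever, not LangChain's actual internals):

```python
# Sketch of the retriever-as-tool pattern. All names here are
# illustrative; create_retriever_tool does this wrapping for you.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Tool:
    name: str
    description: str  # the agent's LLM reads this to decide when to call the tool
    func: Callable[[str], str]


def make_retriever_tool(retrieve: Callable[[str], List[str]],
                        name: str, description: str) -> Tool:
    """Wrap a retrieval function so its results come back as one string."""
    def run(query: str) -> str:
        # Join retrieved documents so the tool output is a single
        # observation string the agent can read.
        return "\n\n".join(retrieve(query))
    return Tool(name=name, description=description, func=run)


# Toy keyword retriever standing in for the Pinecone-backed one.
docs = ["LangChain supports agents.", "Pinecone stores vector embeddings."]
toy_retriever = lambda q: [d for d in docs
                           if any(w in d.lower() for w in q.lower().split())]

tool = make_retriever_tool(toy_retriever, "docs_search",
                           "Search the document store for relevant context.")
print(tool.func("pinecone embeddings"))  # Pinecone stores vector embeddings.
```

With `create_retriever_tool`, the same wrapping happens over your real Pinecone retriever, so the ReAct loop can decide on each step whether to fetch context from the index before answering.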

References:

  1. Agent > Retriever (LangChain)

Upvotes: 2
