hridayesh

Reputation: 1143

Setting up chain correctly - Langchain js

I am looking for some help in setting up a chain correctly, as I am new to LangChain.

I am creating a chatbot which uses RAG, MongoDB as a vector store, and OpenAI, and returns its output in JSON format. I am trying to set up the following chain:

Input: (query, conversation_history)

Input → Retrieval Prompt → OpenAI → Vector Store → Documents

Input, Documents, conversation_history → Chat Prompt → OpenAI → JSON Output

I was able to do all of the above except the JSON output, using some boilerplate code. Below is the code that worked for me.

// Imports needed by this function (exact paths may vary with the installed LangChain JS version):
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { AIMessage, HumanMessage } from "@langchain/core/messages";
import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// OPENAI_KEY and CHAT_HISTORY_DB are defined elsewhere in my module.
async function generateAnswerPlain({company, user, query, prompt_}, {client}) {
    const namespace = "genai.embeddings";
    const [dbName, collectionName] = namespace.split(".");
    const collection = client.db(dbName).collection(collectionName);

    const embeddings = new OpenAIEmbeddings({
      openAIApiKey: OPENAI_KEY, // In Node.js defaults to process.env.OPENAI_API_KEY
    });

    const vectorStore = new MongoDBAtlasVectorSearch(embeddings, {
      collection,
      indexName: "embeddings_index", // The name of the Atlas search index. Defaults to "default"
      textKey: "text", // The name of the collection field containing the raw content. Defaults to "text"
      embeddingKey: "embedding", // The name of the collection field containing the embedded text. Defaults to "embedding"
    });

    const retriever = vectorStore.asRetriever({query:{company:company}});
    const chatModel = new ChatOpenAI({
      // modelName: "gpt-3.5-turbo-1106",
      temperature: 0.5,
      openAIApiKey: OPENAI_KEY
    });
    const historyAwarePrompt = ChatPromptTemplate.fromMessages([
      new MessagesPlaceholder("chat_history"),
      ["user", "{input}"],
      [
        "user",
        "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation",
      ],
    ]);
    const historyAwareRetrieverChain = await createHistoryAwareRetriever({
      llm: chatModel,
      retriever,
      rephrasePrompt: historyAwarePrompt,
    });
    if(!CHAT_HISTORY_DB[user]) CHAT_HISTORY_DB[user] = [];
    const chatHistory = CHAT_HISTORY_DB[user];
    const historyAwareRetrievalPrompt = ChatPromptTemplate.fromMessages([
      [
        "system",
        prompt_+":\n\n{context}",
      ],
      new MessagesPlaceholder("chat_history"),
      ["user", "{input}"],
    ]);
    const historyAwareCombineDocsChain = await createStuffDocumentsChain({
      llm: chatModel,
      prompt: historyAwareRetrievalPrompt,
    });
    const conversationalRetrievalChain = await createRetrievalChain({
      retriever: historyAwareRetrieverChain,
      combineDocsChain: historyAwareCombineDocsChain,
    });
    let res = await conversationalRetrievalChain.invoke({
      chat_history: chatHistory,
      input: query
    });
    console.log(res);
    chatHistory.push(new HumanMessage(query));
    chatHistory.push(new AIMessage(res.answer));
    return res.answer;
}
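
For reference, a minimal sketch of how a function like this can be invoked (the MongoClient setup, the environment variable name, and all argument values below are placeholders, not my actual code):

import { MongoClient } from "mongodb";

// Placeholder connection and inputs, for illustration only.
const client = new MongoClient(process.env.MONGODB_ATLAS_URI);

const answer = await generateAnswerPlain(
  {
    company: "acme",
    user: "user-123",
    query: "What is your refund policy?",
    prompt_: "Answer the user's question based only on the following context",
  },
  { client }
);
console.log(answer);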

I tried setting up a JSON output formatter and OpenAI functions, but could not make it work.

Below is the output schema I was looking for.

import { z } from "zod";

const answerSchema = z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']).describe("Sentiment of the user's query, to know whether the user is angry or frustrated with the company"),
    transfer: z.boolean().describe('whether to transfer the chat to a human agent instead of handling it via AI'),
    // z.enum only accepts string values, so allow null via .nullable() instead of listing it as an enum member
    transfer_reason: z.enum(['negative_sentiment', 'no_knowledge']).nullable().describe('reason for transferring this chat to a human agent'),
    answer: z.string().describe('Textual answer generated by AI'),
    options: z.string().max(24).array().max(5).optional().describe('Array of options to be selected by the user, max 24 chars per option and max 5 options. Only present if the user has to make a choice'),
    mediaURL: z.string().optional().describe('Media URL if we need to send media to the user'),
    mediaType: z.enum(['image', 'video', 'audio', 'pdf']).optional().describe('Media type of media'),
    suggested_products: z.object({
        name: z.string().describe('product name'),
        summary: z.string().describe('very short 5-10 words summary of product'),
        mediaURL:z.string().optional().describe('Media URL of product'),
        mediaType: z.enum(['image', 'video']).describe('media type')
    }).array().max(3).optional().describe("products suggested, if any. suggested_products and options can't both be present at the same time")
});
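
To make the expected output concrete, here is an example object that validates against this schema (all values are made up):

// Example of the kind of JSON the chain should ultimately return (invented values).
const example = answerSchema.parse({
  sentiment: "negative",
  transfer: true,
  transfer_reason: "negative_sentiment",
  answer: "I'm sorry to hear that. Let me connect you with a human agent.",
  options: ["Talk to an agent", "Continue with AI"],
});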

I tried changing the following code

    const historyAwareCombineDocsChain = await createStuffDocumentsChain({
      llm: chatModel,
      prompt: historyAwareRetrievalPrompt,
    });

Into

const historyAwareCombineDocsChain = createStructuredOutputRunnable({
  outputSchema: answerSchema,
  llm: chatModel,
  prompt: historyAwareRetrievalPrompt,
  outputParser: new JsonOutputFunctionsParser()
})

It gave the answer in the desired format, but it broke the document chain: OpenAI did not receive the documents, only [object Object],[object Object] in place of the document context.
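
If it helps diagnose the issue, here is a minimal illustration of what I believe is happening (an assumed reproduction, not my actual prompt): the retrieved Document array gets interpolated straight into the prompt template and stringified, which matches what the model received in place of the real context.

// A Document array dropped directly into a template string stringifies like this,
// which is what the model appears to receive instead of the page contents.
const docs = [
  { pageContent: "Refund policy: ...", metadata: {} },
  { pageContent: "Shipping info: ...", metadata: {} },
];
console.log(`Context:\n${docs}`); // -> "Context:\n[object Object],[object Object]"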

Upvotes: 0

Views: 394

Answers (0)
