Maxl Gemeinderat

Reputation: 555

Map_Reduce prompt with RetrievalQA Chain

In the code below you can see how I built my RAG model with the ParentDocumentRetriever from LangChain, with memory. At the moment I am using the RetrievalQA chain with the default chain_type="stuff". However, I want to try different chain types like "map_reduce". But when I replace chain_type="stuff" with chain_type="map_reduce" and create the RetrievalQA chain, I get the following error:

ValidationError: 1 validation error for RefineDocumentsChain
prompt
  extra fields not permitted (type=value_error.extra)

I assume that my prompt is not built correctly, but how do I have to change it to make it work? I saw that two different prompts are required for "map_reduce": "map_prompt" and "combine_prompt". But I am not sure how I have to change the prompts for a typical RAG retrieval task, where a user can interact with the model and ask it to answer questions. Here is my code:

from langchain.chains import RetrievalQA
from langchain.memory import ConversationSummaryMemory
from langchain.prompts import PromptTemplate
from langchain.callbacks.manager import CallbackManager
from langchain.document_loaders import PyPDFLoader, DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.storage import InMemoryStore
from langchain.retrievers import ParentDocumentRetriever

loader = DirectoryLoader("MY_PATH_TO_PDF_FILES",
                         glob='*.pdf',
                         loader_cls=PyPDFLoader)
documents = loader.load()

# This text splitter is used to create the parent documents - The big chunks
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=400)

# This text splitter is used to create the child documents - The small chunks
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)

# The vectorstore to use to index the child chunks
from chromadb.errors import InvalidDimensionException
try:
    vectorstore = Chroma(collection_name="split_parents",
                         embedding_function=bge_embeddings,  # embedding model defined elsewhere in my code
                         persist_directory="chroma_db")
except InvalidDimensionException:
    # The existing collection was built with a different embedding dimension:
    # drop it and recreate the vectorstore
    Chroma().delete_collection()
    vectorstore = Chroma(collection_name="split_parents",
                         embedding_function=bge_embeddings,
                         persist_directory="chroma_db")

# The storage layer for the parent documents
store = InMemoryStore()

big_chunks_retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)

big_chunks_retriever.add_documents(documents)

qa_template = """
Use the following information from the context (separated with <ctx></ctx>) to answer the question.
Answer in German only, because the user does not understand English! \
If you don't know the answer, answer with "Unfortunately, I don't have the information." \
If you don't find enough information below, also answer with "Unfortunately, I don't have the information." \
------
<ctx>
{context}
</ctx>
------
<hs>
{history}
</hs>
------
{question}
Answer:
"""

prompt = PromptTemplate(template=qa_template,
                        input_variables=['context', 'history', 'question'])

chain_type_kwargs = {
    "verbose": True,
    "prompt": prompt,
    "memory": ConversationSummaryMemory(
        llm=build_llm(),  # build_llm() constructs my LLM and is defined elsewhere
        memory_key="history",
        input_key="question",
        return_messages=True)}

refine = RetrievalQA.from_chain_type(llm=build_llm(),
                                 chain_type="map_reduce",
                                 return_source_documents=True,
                                 chain_type_kwargs=chain_type_kwargs,
                                 retriever=big_chunks_retriever,
                                 verbose=True)

query = "Hi, I am Max, can you help me??"
refine(query)

Upvotes: 1

Views: 4167

Answers (2)

Ronnie

Reputation: 1

You can try this out:

# The map step sees each retrieved document under the variable "context"
map_template = "Write a summary of the following text:\n\n{context}"
map_prompt_template = PromptTemplate(template=map_template, input_variables=["context"])

# The combine step receives the mapped outputs under the variable "summaries"
combine_template = "Write an answer to the following question based on these summaries:\n\n{question}\n\n{summaries}"
combine_prompt_template = PromptTemplate(template=combine_template, input_variables=["question", "summaries"])

# For chain_type="map_reduce" the map prompt is passed as "question_prompt"
qa_model = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="map_reduce",
    retriever=retriever,
    chain_type_kwargs={"question_prompt": map_prompt_template,
                       "combine_prompt": combine_prompt_template})
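
You could then run a query like this (just a sketch, assuming `llm` and `retriever` are already defined; the question text is made up):

# RetrievalQA expects its input under the "query" key
result = qa_model({"query": "What are the key points of the documents?"})
print(result["result"])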

Upvotes: 0

lif cc

Reputation: 461

You have almost reached the answer. Depending on what you want to do, let's look at the code below:

qa_template = """
Use the following information from the context (separated with <ctx></ctx>) to answer the question.
Answer in German only, because the user does not understand English! \
If you don't know the answer, answer with "Unfortunately, I don't have the information." \
If you don't find enough information below, also answer with "Unfortunately, I don't have the information." \
------
<ctx>
{context}
</ctx>
------
<hs>
{history}
</hs>
------
{question}
Answer:
"""

prompt = PromptTemplate(template=qa_template,
                        input_variables=['context', 'history', 'question'])
combine_custom_prompt='''
Generate a summary of the following text that includes the following elements:

* A title that accurately reflects the content of the text.
* An introduction paragraph that provides an overview of the topic.
* Bullet points that list the key points of the text.
* A conclusion paragraph that summarizes the main points of the text.

Text:`{context}`
'''


combine_prompt_template = PromptTemplate(
    template=combine_custom_prompt, 
    input_variables=['context']
)

from langchain.llms import OpenAI

chain_type_kwargs = {
    "verbose": True,
    "question_prompt": prompt,
    "combine_prompt": combine_prompt_template,
    "combine_document_variable_name": "context",
    "memory": ConversationSummaryMemory(
        llm=OpenAI(),
        memory_key="history",
        input_key="question",
        return_messages=True)}


refine = RetrievalQA.from_chain_type(llm=OpenAI(),
                                 chain_type="map_reduce",
                                 return_source_documents=True,
                                 chain_type_kwargs=chain_type_kwargs,
                                 retriever=big_chunks_retriever,
                                 verbose=True)
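
You can then call the chain as usual; a quick sketch (the query text is just an example, and because return_source_documents=True you also get the retrieved parent chunks back):

result = refine({"query": "Hi, ich bin Max, kannst du mir helfen?"})
print(result["result"])            # the German answer
print(result["source_documents"])  # the retrieved parent chunks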

Upvotes: 3