E.K.

Reputation: 4349

How to Control Sequence Length in "load_summarize_chain" with "map_reduce" from langchain? - Exceeding Maximum Token Indices Error

Could you please explain the way to control "sequence length" when we use map_reduce with load_summarize_chain from langchain?

from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
        chunk_size=600,
        chunk_overlap=0,
        length_function=len,
)
docs = splitter.create_documents([text])

summary_chain = load_summarize_chain(
        llm=llm,
        chain_type="map_reduce",
        map_prompt="some map prompt",
        combine_prompt="some combine prompt",
)
summary_chain.run(docs)

This code returns the following error:

Token indices sequence length is longer than the specified maximum sequence length for this model (4084 > 1024). Running this sequence through the model will result in indexing errors. 

My guess is that this occurs when the mapper generates outputs that are too long. Any insights would be appreciated. Thanks!

Upvotes: 1

Views: 1826

Answers (1)

ZKS

Reputation: 2846

To give a more technical answer, here is working code, along with a sample of its output. Also make sure your machine is fully set up (environment variables such as the OpenAI API key, etc.).

from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import ReduceDocumentsChain, MapReduceDocumentsChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import WebBaseLoader
from langchain.chains.summarize import load_summarize_chain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate
from langchain.chains.combine_documents.stuff import StuffDocumentsChain

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
docs = loader.load()

# Baseline: a "stuff" chain works here only because the 16k-context model
# can fit the whole page into a single prompt.
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")
chain = load_summarize_chain(llm, chain_type="stuff")
chain.run(docs)

llm = ChatOpenAI(temperature=0)

# Map
map_template = """The following is a set of documents
{docs}
Based on this list of docs, please identify the main themes
Helpful Answer:"""
map_prompt = PromptTemplate.from_template(map_template)
map_chain = LLMChain(llm=llm, prompt=map_prompt)

# Reduce
reduce_template = """The following is a set of summaries:
{doc_summaries}
Take these and distill them into a final, consolidated summary of the main themes.
Helpful Answer:"""
reduce_prompt = PromptTemplate.from_template(reduce_template)
reduce_chain = LLMChain(llm=llm, prompt=reduce_prompt)

# Takes a list of documents, combines them into a single string,
# and passes this to an LLMChain
combine_documents_chain = StuffDocumentsChain(
    llm_chain=reduce_chain, document_variable_name="doc_summaries"
)

# Combines and iteratively reduces the mapped documents
reduce_documents_chain = ReduceDocumentsChain(
    # This is the final chain that is called.
    combine_documents_chain=combine_documents_chain,
    # Used if the documents exceed the context for `StuffDocumentsChain`
    collapse_documents_chain=combine_documents_chain,
    # The maximum number of tokens to group documents into.
    token_max=4000,
)

# Combine documents by mapping a chain over them, then combining the results
map_reduce_chain = MapReduceDocumentsChain(
    # Map chain
    llm_chain=map_chain,
    # Reduce chain
    reduce_documents_chain=reduce_documents_chain,
    # The variable name in the llm_chain to put the documents in
    document_variable_name="docs",
    # Return the results of the map steps in the output
    return_intermediate_steps=False,
)

# Split by token count (via tiktoken), not characters, so each chunk
# respects the model's token limit
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=1024, chunk_overlap=0
)
split_docs = text_splitter.split_documents(docs)

print(map_reduce_chain.run(split_docs))
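The two knobs that address the original error are the token-based `chunk_size` in `from_tiktoken_encoder` and `token_max` in `ReduceDocumentsChain`: both cap how many tokens go into a single model call. The underlying budgeting idea can be sketched without langchain at all; this is only an illustration, with a whitespace "tokenizer" standing in for a real one like tiktoken:

```python
# Illustrative sketch only: a whitespace "tokenizer" stands in for tiktoken.
# Real token counts differ, but the budgeting logic is the same.
def split_by_token_budget(text: str, token_max: int) -> list[str]:
    """Greedily pack tokens into chunks of at most token_max tokens each."""
    tokens = text.split()
    return [
        " ".join(tokens[i:i + token_max])
        for i in range(0, len(tokens), token_max)
    ]

doc = "alpha bravo charlie delta echo foxtrot golf"
chunks = split_by_token_budget(doc, token_max=3)
print(chunks)  # ['alpha bravo charlie', 'delta echo foxtrot', 'golf']
```

Because every chunk stays under the budget, no single call can exceed the model's maximum sequence length; the original error suggests the character-based splitter (`length_function=len`) produced chunks far larger than the model's token limit.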


Upvotes: -2
