Parth Sharma

Reputation: 31

How to load and save LangChain's memory model

I am trying to build a chat service that uses OpenAI as the LLM and LangChain for remembering the context.

The memory class I am using is "VectorStoreRetrieverMemory":

    const memory = new VectorStoreRetrieverMemory({ vectorStoreRetriever: vectorStore.asRetriever() });
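
For reference, the memory is backed by a vector store, roughly like this (MemoryVectorStore and OpenAIEmbeddings are just what I happen to use; exact import paths may vary by langchain version):

    import { OpenAIEmbeddings } from "langchain/embeddings/openai";
    import { MemoryVectorStore } from "langchain/vectorstores/memory";
    import { VectorStoreRetrieverMemory } from "langchain/memory";

    // In-memory vector store that backs the retriever memory;
    // this is the part that is lost between requests.
    const vectorStore = new MemoryVectorStore(new OpenAIEmbeddings());
    const memory = new VectorStoreRetrieverMemory({
      vectorStoreRetriever: vectorStore.asRetriever(),
      memoryKey: "history",
    });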

The backend is in Node.js. The flow goes something like this: on every new message, the whole conversation history is added to the prompt and sent to the LLM.

This makes each call take very long, since all previous messages have to be re-sent with every new message.

I wish to somehow save the memory object, load it to pass into the LLM call, update it when a new message comes back, and then save it again.

If there is a better way to do this, please do guide me.

I tried to find ways to save the model, but failed to find any.

Upvotes: 2

Views: 5167

Answers (2)

Yilmaz

Reputation: 49661

LangChain has ConversationSummaryMemory. From the docs:

  • ConversationSummaryMemory creates a summary of the conversation over time, which is useful for condensing information from the conversation. It summarizes the conversation as it happens and stores the current summary in memory, which can then be injected into a prompt/chain. This memory is most useful for longer conversations, where keeping the past message history in the prompt verbatim would take up too many tokens.

Be aware that there is a trade-off here: the response will take longer because you make two API calls, one to generate the original response and a second to generate the summary.
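
A minimal sketch of what this might look like in LangChain JS (import paths are per the 0.0.x langchain package; adjust to your version):

    import { OpenAI } from "langchain/llms/openai";
    import { ConversationSummaryMemory } from "langchain/memory";
    import { ConversationChain } from "langchain/chains";

    const llm = new OpenAI({ temperature: 0 });
    // The memory uses the LLM itself to maintain a running summary of the chat.
    const memory = new ConversationSummaryMemory({ llm, memoryKey: "history" });
    const chain = new ConversationChain({ llm, memory });

    const res = await chain.call({ input: "Hi, I am building a chat service." });
    // The current summary is a plain string on memory.buffer, so you can
    // persist it and restore it later by assigning memory.buffer = saved.
    console.log(memory.buffer);

Because the summary is just a string, it is also much cheaper to save and reload between requests than a full message history.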

Upvotes: 0

Rodrigo Vega Moreno

Reputation: 321

LangChain comes with various types of memory that you can implement, depending on your application and use case (each is covered in LangChain's JS documentation):

  1. Conversation Buffer
  2. Conversation Buffer Window
  3. Entity
  4. Vector store-backed memory
  5. Conversation Summary
  6. Conversation Summary Buffer

You're on the right track, though keep in mind that, so far, there is no way to give history/memory to the LLM other than storing the entire history yourself and passing it to the LLM as context.
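
In practice, that means "saving the memory" comes down to persisting the messages (or the underlying vector store) yourself and rebuilding the memory object on each request. Here is a minimal sketch of that pattern with BufferMemory and ChatMessageHistory, where loadRecords/saveRecords are hypothetical stand-ins for your own storage layer (older langchain versions name the message classes HumanChatMessage/AIChatMessage):

    import { BufferMemory, ChatMessageHistory } from "langchain/memory";
    import { HumanMessage, AIMessage } from "langchain/schema";

    // Hypothetical helpers: read/write plain { role, content } rows
    // from whatever store you use (file, Redis, SQL, ...).
    async function loadRecords(sessionId) { /* ... */ return []; }
    async function saveRecords(sessionId, records) { /* ... */ }

    async function memoryForSession(sessionId) {
      const records = await loadRecords(sessionId);
      const messages = records.map((r) =>
        r.role === "human" ? new HumanMessage(r.content) : new AIMessage(r.content)
      );
      // Rebuild the memory from the stored history on every request;
      // after the LLM responds, append the new turns via saveRecords().
      return new BufferMemory({
        chatHistory: new ChatMessageHistory(messages),
        memoryKey: "history",
      });
    }

If you stay with VectorStoreRetrieverMemory, the equivalent is to persist the vector store itself; some LangChain JS stores, such as HNSWLib, expose save/load methods for exactly this.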

Upvotes: 1
