ryan-serpico

Reputation: 31

Is there a way to save the state of an entire conversation with Langchain, including prompts?

Hope you're having a great day. I would really appreciate it if anyone here has the time to help me understand memory in LangChain.

At a high level, what I want to be able to do is save the state of an entire conversation to a JSON file on my own machine — including the prompts from a ChatPromptTemplate.

When I use conversation.memory.chat_memory.messages and messages_to_dict(extracted_messages), I'm only getting a subset of the conversation.

This is what I have so far, using some Nickelodeon prompting text as an example:

import json

from langchain import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)
from langchain.schema import messages_to_dict

OPENAI_API_KEY = "sk-****"

story_context = "Who is the main character in The Fairly OddParents?"

prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "You are an expert on Nickelodeon cartoons."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template(
            'Below you will find a question about Nickelodeon cartoons delimited by triple quotes ("""). Answer the question in a consistent style, tone and voice.\n\n"""What is the name of the main character in the cartoon Spongebob Squarepants?"""'
        ),
        AIMessagePromptTemplate.from_template(
            "Spongebob Squarepants."
        ),
        HumanMessagePromptTemplate.from_template(
            'Now do the same for this snippet, following a consistent style, tone and voice.\n\n"""{text}"""'
        ),
    ]
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

llm = ChatOpenAI(
    openai_api_key=OPENAI_API_KEY,
    model="gpt-3.5-turbo",
    temperature=0,
    max_retries=5,
)

# Create the LLMChain.
conversation = LLMChain(llm=llm, prompt=prompt, verbose=False, memory=memory)

conversation.predict(text=story_context)

# Let's save the conversation to a dictionary
extracted_messages = conversation.memory.chat_memory.messages
memory_dict = messages_to_dict(extracted_messages)

# Pretty print the dictionary
print(json.dumps(memory_dict, indent=2))

The above code prints the following to my terminal:

[
  {
    "type": "human",
    "data": {
      "content": "Who is the main character in The Fairly OddParents?",
      "additional_kwargs": {},
      "example": false
    }
  },
  {
    "type": "ai",
    "data": {
      "content": "The main character in The Fairly OddParents is Timmy Turner.",
      "additional_kwargs": {},
      "example": false
    }
  }
]

This isn't the entire contents of the conversation. I also want the system and human prompts from the ChatPromptTemplate ("You are an expert on Nickelodeon cartoons.", "Below you will find a question about Nickelodeon cartoons …").

I don't believe extracted_messages = conversation.memory.chat_memory.messages will get me to where I need to go, but I don't know any other way to go about this.
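For what it's worth, the JSON round trip itself isn't the blocker. Once I have the complete message list, something like this sketch should persist it and load it back (conversation.json is just a placeholder path):

import json

from langchain.schema import messages_from_dict, messages_to_dict

# Save the extracted messages to disk...
with open("conversation.json", "w") as f:
    json.dump(messages_to_dict(extracted_messages), f, indent=2)

# ...and rebuild message objects from the file later.
with open("conversation.json") as f:
    restored_messages = messages_from_dict(json.load(f))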

Like I said, I would really appreciate any and all help on this. I feel like I'm going crazy trying to figure it out!

Upvotes: 1

Views: 2492

Answers (2)

Osama Hussein

Reputation: 129

I believe there are plenty of ways to achieve the objective. Based on the code provided in the question, specifically this part:

# Let's save the conversation to a dictionary
extracted_messages = conversation.memory.chat_memory.messages
memory_dict = messages_to_dict(extracted_messages)

We can loop over the memory_dict variable and extract the needed messages and the system prompt, as follows:

for msg in memory_dict:
    msg_content = msg['data']['content']
    if msg['type'] == 'human':
        # Some entries may store a list of nested message dicts instead of
        # a plain string; unpack those to recover the system prompt.
        if isinstance(msg_content, list):
            # The system prompt
            print('##System Prompt: ', msg_content[0]['content'])
            # The user's initial question
            print('User: ', msg_content[1]['content'])
        else:
            print('User: ', msg_content)
    elif msg['type'] == 'ai':
        print('AI: ', msg_content)
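For the memory_dict shown in the question, where every content field is a plain string, the loop prints just the two recorded turns, along the lines of:

User: Who is the main character in The Fairly OddParents?
AI: The main character in The Fairly OddParents is Timmy Turner.

The system-prompt branch only fires when a message's content is a list of nested message dicts.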

I hope this saves someone some time :)

Upvotes: 0

ZKS

Reputation: 2856

This is not supported by default; you need to write custom code to achieve it.

To keep things modular, you can extend the ConversationBufferMemory class and add a new method that stores the template messages.

Add these imports, and define the method on a subclass of ConversationBufferMemory (the class name below is arbitrary):

from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain.schema.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
)


class ExtendedConversationBufferMemory(ConversationBufferMemory):
    def load_initial_messages(self, prompt: ChatPromptTemplate) -> None:
        """Append the template's static messages to the chat memory."""
        for message_template in prompt.messages:
            # Skip entries without .format(), e.g. MessagesPlaceholder.
            if hasattr(message_template, 'format'):
                try:
                    # Templates with unfilled variables (like {text})
                    # raise KeyError and are skipped.
                    formatted_message = message_template.format()
                except KeyError:
                    continue

                if isinstance(message_template, SystemMessagePromptTemplate):
                    self.chat_memory.messages.append(SystemMessage(content=formatted_message.content))
                elif isinstance(message_template, HumanMessagePromptTemplate):
                    self.chat_memory.messages.append(HumanMessage(content=formatted_message.content))
                elif isinstance(message_template, AIMessagePromptTemplate):
                    self.chat_memory.messages.append(AIMessage(content=formatted_message.content))

Then, in your actual code, create the memory from the subclass so the new method is available, and call it after running the chain:

memory = ExtendedConversationBufferMemory(memory_key="chat_history", return_messages=True)

conversation({"text": story_context})
# New line: memory_dict will now contain all the messages. Note that the
# template messages are appended after the recorded turns, so reorder them
# if chronological order matters.
memory.load_initial_messages(prompt)
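To finish with the asker's goal of a JSON file on disk, a minimal sketch of the last step (the conversation.json path is just a placeholder):

import json

from langchain.schema import messages_to_dict

# Serialize everything now in memory, including the template messages.
all_messages = messages_to_dict(memory.chat_memory.messages)
with open("conversation.json", "w") as f:
    json.dump(all_messages, f, indent=2)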

Upvotes: 1
