Reputation: 1
I'm working on a project where I need to use LangChain with the Google Gemini model (ChatGoogleGenerativeAI), but I'm having trouble with memory retention across multiple invocations.
I want to maintain context between user messages: the agent should remember previous turns when I invoke it with new prompts (i.e., keep a chat history per thread).
Here is my current code setup:
import type { BaseChatModel } from "@langchain/core/language_models/chat_models";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { MemorySaver } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import type { Tool } from "langchain/tools";

interface AgentInfo {
  id: string;
  name: string;
  job: string;
  sex: "male" | "female";
}

class Agent {
  private model: BaseChatModel;
  private tools: Tool[];
  private agent: any;
  private checkpointer: MemorySaver;
  public info: AgentInfo;

  constructor(model: BaseChatModel, tools: Tool[], info: AgentInfo) {
    this.model = model;
    this.tools = tools;
    this.checkpointer = new MemorySaver();
    this.info = info;
    this.agent = createReactAgent({
      llm: model,
      tools: tools,
      checkpointSaver: this.checkpointer,
    });
  }

  public getId() {
    return this.info.id;
  }

  public async invoke(prompt: string, thread_id?: string) {
    const message = await this.agent.invoke(
      { messages: [{ role: "user", content: prompt }] },
      { configurable: { thread_id: thread_id } }
    );
    return message.messages[message.messages.length - 1].content;
  }
}
const geminiModel = new ChatGoogleGenerativeAI({
model: "gemini-1.5-flash-8b",
});
const agent = new Agent(geminiModel, [], {id: '1', name:'josh', job: 'developer', sex: 'male'});
console.log(await agent.invoke('My name is test', 'test'));
console.log(await agent.invoke('What is my name', 'test'));
Problem: I expect the agent to retain context between invocations on the same thread (i.e., "My name is test" should be remembered and used when answering "What is my name"). However, memory does not seem to be retained across invocations, and the agent does not respond with the expected context.

What I've tried: Ensured that MemorySaver is saving and loading memory correctly using checkpointer.load() and checkpointer.save(). Passed the loaded memory to the agent invocation, hoping it would maintain the context.
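To make the expected behavior concrete, here is a dependency-free sketch of the per-thread memory I'm after. A plain Map keyed by thread_id stands in for MemorySaver, and the "model" is a trivial stand-in that answers from history; all names here are illustrative, not the real LangGraph API:

```typescript
// Sketch of per-thread chat history. Each thread_id gets its own
// message list; a second invoke on the same thread sees earlier turns.

type Message = { role: "user" | "assistant"; content: string };

class ThreadMemory {
  private threads = new Map<string, Message[]>();

  invoke(prompt: string, threadId: string): string {
    // Load (or create) the history for this thread.
    const history = this.threads.get(threadId) ?? [];
    history.push({ role: "user", content: prompt });

    // Stand-in for the LLM: recover a previously stated name from history.
    let reply = "ok";
    const intro = history.find((m) => m.content.startsWith("My name is "));
    if (prompt.startsWith("What is my name") && intro) {
      reply = intro.content.slice("My name is ".length);
    }

    // Persist the updated history back to the per-thread store.
    history.push({ role: "assistant", content: reply });
    this.threads.set(threadId, history);
    return reply;
  }
}

const mem = new ThreadMemory();
mem.invoke("My name is test", "test");
console.log(mem.invoke("What is my name", "test")); // "test"
console.log(mem.invoke("What is my name", "other")); // "ok" (separate thread)
```

This is the behavior I assumed MemorySaver plus a fixed thread_id would give me out of the box.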
Upvotes: 0
Views: 53