LangChain Memory: Building Stateful LLM Applications
Master conversational memory in LangChain. Learn how to use BufferMemory, ConversationSummaryMemory, and EntityMemory to retain context in LLM apps.

Learn how to use LangChain Memory to retain and recall information across interactions, a vital feature for chatbots, virtual agents, and intelligent assistants.
Why Memory Matters in LLM Apps
Large Language Models (LLMs) are stateless by default: without memory, your assistant forgets everything after each prompt. LangChain solves this with memory modules that enable:
- Multi-turn conversations
- Personalized interactions
- Efficient context retention
Types of Memory in LangChain
| Memory Type | Purpose |
|---|---|
| BufferMemory | Stores the full conversation history as-is |
| ConversationSummaryMemory | Summarizes past messages with an LLM for brevity |
| EntityMemory | Tracks entities such as names, topics, and objects |
| VectorStoreRetrieverMemory | Retrieves past data semantically via vector similarity |
Each memory type fits a different need, from long-term factual recall to short-term conversational continuity.
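For instance, here is a minimal sketch of swapping in ConversationSummaryMemory, assuming the same langchain package and ChatOpenAI model used in the BufferMemory example below, plus an OPENAI_API_KEY in your environment:

import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationSummaryMemory } from "langchain/memory";

// The summary is generated by an LLM, so the memory needs its own model.
const summaryMemory = new ConversationSummaryMemory({
  llm: new ChatOpenAI({ temperature: 0 }),
  memoryKey: "history", // ConversationChain expects the "history" key
});

const chain = new ConversationChain({ llm: new ChatOpenAI(), memory: summaryMemory });
await chain.call({ input: "Hi, I'm Bhanu and I work on LLM tooling." });

// The memory now holds a rolling summary rather than the raw transcript.
console.log(await summaryMemory.loadMemoryVariables({}));

Because only the summary is re-sent each turn, prompt size stays roughly constant as the conversation grows.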
Hands-On Example: BufferMemory
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { BufferMemory } from "langchain/memory";

// BufferMemory keeps the raw transcript and injects it into each prompt.
const memory = new BufferMemory();
const chain = new ConversationChain({ llm: new ChatOpenAI(), memory });

await chain.call({ input: "Hi, I'm Bhanu" });
const res = await chain.call({ input: "What's my name?" });
console.log(res.response);

Output: "Your name is Bhanu."
The assistant remembers context across turns using memory.
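To verify what the chain is actually re-sending, you can read the stored transcript back out of the memory object; a quick sketch, reusing the memory instance from above:

// BufferMemory exposes its contents under the "history" key.
const vars = await memory.loadMemoryVariables({});
console.log(vars.history);
// -> "Human: Hi, I'm Bhanu\nAI: ...\nHuman: What's my name?\nAI: Your name is Bhanu."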
When to Use Each Memory
- Use BufferMemory for raw dialogue retention
- Use ConversationSummaryMemory when saving tokens matters
- Use EntityMemory to build entity-aware agents
- Use VectorStoreRetrieverMemory in document Q&A bots (see the sketch below)
Memory can be attached to Chains, Agents, or even Tools.
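Here is a minimal sketch of VectorStoreRetrieverMemory, assuming the in-memory MemoryVectorStore and OpenAIEmbeddings that ship with langchain:

import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { VectorStoreRetrieverMemory } from "langchain/memory";

// Past exchanges are embedded; the retriever returns the closest match per query.
const vectorStore = new MemoryVectorStore(new OpenAIEmbeddings());
const vectorMemory = new VectorStoreRetrieverMemory({
  vectorStoreRetriever: vectorStore.asRetriever(1),
  memoryKey: "history",
});

await vectorMemory.saveContext(
  { input: "My favorite framework is LangChain" },
  { output: "Good to know." }
);

// Retrieval is semantic rather than positional: the relevant memory surfaces
// even after many unrelated turns in between.
console.log(await vectorMemory.loadMemoryVariables({ prompt: "Which framework do I like?" }));

Unlike BufferMemory, nothing is replayed chronologically; only the stored exchanges most similar to the current input are pulled into the prompt.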
Related Tutorials
Hands-On: LangChain Prompt Templates