đź”—
LangChain+Stompy

Chains that remember every link

Because ConversationBufferMemory has a very short buffer

The Problem

LangChain's memory classes are great—for about 15 minutes.

ConversationBufferMemory? Gone when you restart. ConversationSummaryMemory? Summarized into oblivion. VectorStoreRetrieverMemory? Better, but now you're managing another database for the privilege of basic recall.

You've built elaborate chains. You've connected retrievers to agents to tools. It's a beautiful Rube Goldberg machine of LLM orchestration.

And it wakes up every morning with no idea who it is.

"Based on our previous conversation—" you start, and your chain stares back blankly. What previous conversation? There is only now. There is only the prompt.

How Stompy Helps

Stompy adds a memory layer that actually persists. Not "persists until your Jupyter kernel dies." Actually persists.

Your LangChain setup gains:

- Cross-session context: Pick up exactly where you left off, days or weeks later
- Structured recall: Not just "what did we talk about" but "what decisions did we make"
- Priority-aware retrieval: Critical project rules surface automatically when relevant
- Project isolation: Different chains for different projects, kept completely separate

It's not a replacement for LangChain's memory classes—it's the persistent layer underneath them. Your chains keep their in-session memory, and Stompy keeps the memory that matters across sessions.
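Conceptually, the handoff looks like this: recall persistent context from Stompy at session start, then seed it into whatever in-session memory your chain already uses. A minimal sketch (the recalled string below stands in for a real recall_context call):

from langchain.memory import ConversationBufferMemory

# Stands in for context recalled from Stompy at session start
recalled = "Decision: event sourcing with PostgreSQL event store."

memory = ConversationBufferMemory()
# Seed the ephemeral in-session buffer with the persistent context,
# so the chain starts the session already aware of past decisions
memory.chat_memory.add_ai_message(f"Context from previous sessions: {recalled}")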

Finally, your chains can have conversations that span longer than a coffee break.

Integration Walkthrough

1. Install langchain-mcp-adapters

LangChain uses the official MCP adapters package to connect to any MCP server.

pip install langchain-mcp-adapters langgraph
2. Connect Stompy via SSE transport

Use SSE transport with bearer auth. The MultiServerMCPClient manages the connection.

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
import os

client = MultiServerMCPClient({
    "stompy": {
        "transport": "sse",
        "url": "https://mcp.stompy.ai/sse",
        "headers": {"Authorization": f"Bearer {os.environ['STOMPY_TOKEN']}"},
    }
})
tools = await client.get_tools()

# Create the agent with Stompy's memory tools and instructions on when to use them
agent = create_react_agent(
    ChatOpenAI(model="gpt-4o"),
    tools,
    prompt="You have persistent memory. Use lock_context to save "
           "decisions and recall_context to retrieve them.",
)
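Note the top-level await: that works in a Jupyter notebook, but in a plain Python script you'll want an async entry point, along these lines:

import asyncio

async def main():
    # ... client and agent setup from step 2 ...
    result = await agent.ainvoke({
        "messages": [{"role": "user", "content": "What did we decide about orders?"}]
    })
    print(result["messages"][-1].content)

asyncio.run(main())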
3. Agent saves decisions with lock_context

When your agent makes architectural decisions, it saves them permanently. Same topic name, versioned content.

# Monday: Your chain makes a decision
result = await agent.ainvoke({
    "messages": [{"role": "user", "content": "Let's use event sourcing for orders"}]
})

# Agent decides to save this decision:
#   lock_context(topic="order_architecture",
#                content="Event sourcing with PostgreSQL event store.
#                         Projections for read models. Outbox pattern.")
# → Creates v1.0
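When the design evolves, the agent writes to the same topic and the old version stays in history. A sketch of the follow-up (the exact version numbering is Stompy's call, so treat the v1.1 below as an assumption):

# Wednesday: The design evolves
result = await agent.ainvoke({
    "messages": [{"role": "user", "content": "Swap the event store for EventStoreDB"}]
})

# Agent updates the same topic:
#   lock_context(topic="order_architecture",
#                content="Event sourcing with EventStoreDB.
#                         Projections for read models. Outbox pattern.")
# → Creates v1.1 (v1.0 remains in the version history)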
4. Agent recalls in future sessions

Days or weeks later, your agent retrieves context by topic name. No re-explaining needed.

# Tuesday (or next month): Agent remembers
result = await agent.ainvoke({
    "messages": [{"role": "user", "content": "Implement the order aggregate"}]
})

# Agent calls: recall_context("order_architecture") → v1.0
# Agent: "Implementing Order aggregate with event sourcing
#         as we designed. Using PostgreSQL event store with
#         the outbox pattern for reliability..."
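You don't have to go through the agent, either: the tools returned by get_tools() are ordinary LangChain tools, so you can invoke them directly. A sketch, with the caveat that the argument shape is an assumption (inspect the tool's args_schema for the real one):

# Grab the recall tool from the MCP tools loaded in step 2
recall = next(t for t in tools if t.name == "recall_context")

# Invoke it like any other LangChain tool
# (the "topic" argument name is an assumption; check recall.args_schema)
context = await recall.ainvoke({"topic": "order_architecture"})
print(context)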

What You Get

  • Automatic session handovers—pick up where you left off
  • Semantic search (vector embeddings) finds context by meaning
  • Delta evaluation prevents storing redundant information
  • Conflict detection catches contradictions before they cause bugs
  • Version history tracks how decisions evolved over time
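That semantic search bullet means recall doesn't have to be an exact topic-name match. A hypothetical sketch, reusing the recall tool handle from above and assuming recall_context accepts free-text queries:

# Assumption: recall_context matches by meaning, not just exact topic name
hits = await recall.ainvoke({"topic": "how do we persist orders?"})
# → should surface "order_architecture" even though that name never appears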

Ready to give LangChain a memory?

Join the waitlist and be the first to know when Stompy is ready. Your LangChain projects will never forget again.