Stateful graphs with persistent memory
Because graph state shouldn't reset at session end
The Problem
LangGraph is state management done right. Your agent workflows have nodes, edges, and beautiful conditional routing. State flows through your graph like a well-orchestrated symphony.
Within a session.
When the session ends, your graph resets to its initial state. All those carefully accumulated decisions? Gone. The context your agent built up through 15 nodes of processing? Vanished.
You've built a sophisticated state machine that forgets it ever ran.
It's like having a GPS that recalculates from "Starting navigation..." every time you stop for gas.
How Stompy Helps
Stompy adds persistent memory to your LangGraph workflows.
Your graphs gain continuity:
- State snapshots persist: Save important state at key nodes, restore it later
- Decisions carry forward: What you decided in session 1 informs session 47
- Workflows compound: Each run builds on previous knowledge
- Cross-graph memory: Different workflows share the same project context
LangGraph handles the flow. Stompy handles the memory. Your workflows finally have history.
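To make the idea concrete, here's a minimal sketch of state that survives session boundaries — plain Python with a JSON file, not the Stompy API; the class, file path, and method names are illustrative:

```python
import json
from pathlib import Path

class SnapshotStore:
    """Illustrative persistent store: snapshots survive process restarts."""

    def __init__(self, path="snapshots.json"):
        self.path = Path(path)
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def save(self, topic, content):
        # Each save appends a version, so history is preserved
        entry = self.data.setdefault(topic, {"versions": []})
        entry["versions"].append(content)
        self.path.write_text(json.dumps(self.data))
        return len(entry["versions"])

    def restore(self, topic):
        versions = self.data.get(topic, {}).get("versions", [])
        return versions[-1] if versions else None

# "Session 1": save a decision, then the process ends
store = SnapshotStore()
store.save("notification_system_design", "Event-driven with Redis pub/sub")

# "Session 2": a fresh instance reads the same file and restores the decision
later = SnapshotStore()
print(later.restore("notification_system_design"))
```

The point is the second instance: it shares no in-memory state with the first, yet the decision is still there — which is what a LangGraph run gets from Stompy across sessions.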
Integration Walkthrough
Install langchain-mcp-adapters
The official LangChain MCP adapters work seamlessly with LangGraph.
```bash
pip install langchain-mcp-adapters langgraph langchain-anthropic
```
Connect Stompy via SSE transport
Use MultiServerMCPClient with SSE transport and bearer auth.
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic
import os

# Connect to Stompy via SSE
client = MultiServerMCPClient({
    "stompy": {
        "transport": "sse",
        "url": "https://mcp.stompy.ai/sse",
        "headers": {
            "Authorization": f"Bearer {os.environ['STOMPY_TOKEN']}"
        }
    }
})

# Get Stompy's memory tools (inside an async function)
tools = await client.get_tools()

# Create agent with persistent memory
agent = create_react_agent(
    ChatAnthropic(model="claude-sonnet-4-20250514"),
    tools
)
```
Agent saves workflow decisions with lock_context
At key decision points in your graph, save the context. It persists across sessions.
```python
# Session 1: Workflow processes user requirements
result = await agent.ainvoke({
    "messages": [{"role": "user", "content": "Design the notification system"}]
})

# At the "design_complete" node, agent saves:
# lock_context(
#     topic="notification_system_design",
#     content="Architecture: Event-driven with Redis pub/sub.
#              Channels: email, push, in-app. Priority queues
#              for urgent notifications. Rate limiting per user.",
#     priority="always_check",
#     tags="architecture,notifications,design"
# )
# → Creates v1.0
```
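What a save-at-a-node step looks like can be sketched as a plain function — `memory` below is a dict standing in for Stompy's `lock_context` tool (the real call goes through the MCP tools the agent holds), and the node name and state keys are hypothetical:

```python
# Sketch of a "design_complete" node that locks its decision into memory.
def design_complete(state: dict, memory: dict) -> dict:
    decision = state["design"]
    # Stand-in for lock_context(topic=..., content=..., priority=..., tags=...)
    memory.setdefault("notification_system_design", []).append({
        "content": decision,
        "priority": "always_check",
        "tags": "architecture,notifications,design",
    })
    version = len(memory["notification_system_design"])
    return {**state, "locked_version": f"v{version}.0"}

memory = {}
state = design_complete({"design": "Event-driven with Redis pub/sub"}, memory)
print(state["locked_version"])  # v1.0
```

In a real graph this would be a node whose output both updates LangGraph state and fires the memory tool; the sketch only shows the shape of that step.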
Future workflows continue from decisions
Later sessions pick up from previous decisions. No re-designing settled architecture.
```python
# Session 12: Implementing what was designed
result = await agent.ainvoke({
    "messages": [{"role": "user", "content": "Implement the email notification handler"}]
})

# Agent calls: recall_context("notification_system_design")

# Agent: "Implementing email handler for our event-driven
#         notification system. Using Redis pub/sub as designed,
#         with rate limiting per user. Here's the handler..."

# Architecture decisions persist. Implementation flows.
```
What You Get
- Automatic session handovers preserve workflow continuity
- Semantic search (embeddings) finds decisions by meaning
- Delta evaluation prevents redundant state snapshots
- Priority tagging ensures critical architecture surfaces
- Version history tracks how architecture evolved
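The delta evaluation above can be sketched as a content-hash check: skip the write when nothing changed, bump the version when it did. This is an illustrative stand-in, not Stompy's actual implementation:

```python
import hashlib

def save_if_changed(store: dict, topic: str, content: str) -> bool:
    """Persist a snapshot only when its content actually differs."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    if store.get(topic, {}).get("digest") == digest:
        return False  # redundant snapshot skipped
    version = store.get(topic, {}).get("version", 0) + 1
    store[topic] = {"digest": digest, "version": version, "content": content}
    return True

store = {}
save_if_changed(store, "design", "Redis pub/sub")  # True: v1 written
save_if_changed(store, "design", "Redis pub/sub")  # False: unchanged, skipped
save_if_changed(store, "design", "Kafka topics")   # True: v2 written
print(store["design"]["version"])  # 2
```

Hashing the content rather than comparing strings keeps the check cheap even when snapshots are large.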
Ready to give LangGraph a memory?
Join the waitlist and be the first to know when Stompy is ready. Your LangGraph projects will never forget again.