Graph-based agents with graph-long memory
Graph state that persists beyond sessions
The Problem
LangGraph multi-agent is where agent orchestration gets serious. Complex coordination through graph-based workflows. Agents as nodes, communication as edges. Conditional branching, parallel execution, human-in-the-loop checkpoints. Your research agent feeds the analyst, the analyst feeds the writer, the writer produces the final output. Beautiful orchestration.
But graph state is session state.
Here's the reality: Your carefully orchestrated multi-agent workflow runs perfectly. A research agent gathers information, passes it to an analyst who identifies patterns, who hands off to a writer who produces the report. The graph traverses, agents coordinate, a brilliant result emerges. Then the session ends. The graph resets to its initial state. The analyst's pattern recognition insights? Gone. The writer's learned preferences about report structure? Vanished.
LangGraph gives you powerful primitives—StateGraph, checkpointing, conditional edges. But checkpoints are for durability within a workflow execution, not for learning across executions. Your graph can survive a crash. It can't learn from success.
The problem compounds in production. You're running this graph hundreds of times. Each execution discovers something useful—common research patterns, effective analysis approaches, output formats that resonate. But the next execution starts from scratch. Your agents never get smarter. They just get called more often.
Graph-based coordination needs graph-spanning memory.
How Stompy Helps
Stompy gives your LangGraph multi-agent systems the persistent memory layer they're missing.
Your agent graphs gain true cross-session intelligence:

- **State that spans sessions**: Not just durable within a run, but learning across all runs. The insights from execution #42 are available in execution #500.
- **Node-specific expertise**: Your research agent accumulates knowledge about effective search strategies. Your analyst builds pattern recognition that compounds. Each node becomes an expert in its domain.
- **Inter-agent communication history**: The successful handoff patterns between nodes are remembered. Your graph learns optimal paths through its own topology.
- **Collective graph intelligence**: Insights discovered at any node benefit all nodes. The whole graph becomes smarter.
LangGraph provides the structure. Stompy provides the memory. Together: agent graphs that evolve with every execution, building institutional knowledge that makes each run more effective than the last.
Graphs that learn from every traversal.
Integration Walkthrough
Create a memory-enhanced multi-agent graph
Set up LangGraph with shared Stompy memory accessible to all agent nodes.
```python
from langgraph.graph import StateGraph, END
from langchain_core.messages import HumanMessage, AIMessage
from langchain_openai import ChatOpenAI
import httpx
import os
from typing import TypedDict, Annotated
import operator


class AgentState(TypedDict):
    messages: Annotated[list, operator.add]
    context: str
    research: str
    analysis: str


class StompyMemory:
    """Shared memory layer for all graph nodes."""

    def __init__(self):
        self.url = "https://mcp.stompy.ai/sse"
        self.headers = {"Authorization": f"Bearer {os.environ['STOMPY_TOKEN']}"}

    async def recall(self, topic: str) -> str:
        async with httpx.AsyncClient() as client:
            response = await client.post(
                self.url, headers=self.headers,
                json={"tool": "recall_context", "topic": topic})
            return response.json().get("content", "")

    async def save(self, topic: str, content: str, priority: str = "reference"):
        async with httpx.AsyncClient() as client:
            await client.post(
                self.url, headers=self.headers,
                json={"tool": "lock_context", "topic": topic,
                      "content": content, "priority": priority})

    async def search(self, query: str, limit: int = 5) -> list:
        async with httpx.AsyncClient() as client:
            response = await client.post(
                self.url, headers=self.headers,
                json={"tool": "context_search", "query": query, "limit": limit})
            return response.json().get("results", [])


memory = StompyMemory()
```
Build memory-aware agent nodes
Create agent nodes that recall past learnings and save new insights.
```python
llm = ChatOpenAI(model="gpt-4o")


async def researcher_node(state: AgentState) -> AgentState:
    """Research agent with accumulated search expertise."""
    # Recall effective research strategies from past runs
    research_patterns = await memory.recall("research_strategies")
    past_findings = await memory.search(state["messages"][-1].content, limit=3)

    prompt = f"""You are a research specialist with access to past learnings:

EFFECTIVE STRATEGIES:
{research_patterns}

RELEVANT PAST FINDINGS:
{[f['content'][:500] for f in past_findings]}

Research the following: {state["messages"][-1].content}"""
    response = await llm.ainvoke([HumanMessage(content=prompt)])

    # Save this research for future recall
    await memory.save(
        topic=f"research_{hash(state['messages'][-1].content) % 10000}",
        content=response.content[:2000])

    return {"research": response.content,
            "messages": [AIMessage(content="Research complete")]}


async def analyst_node(state: AgentState) -> AgentState:
    """Analyst with pattern recognition that compounds over time."""
    analysis_patterns = await memory.recall("analysis_frameworks")

    prompt = f"""Analyze this research using proven frameworks:

ANALYSIS FRAMEWORKS:
{analysis_patterns}

RESEARCH TO ANALYZE:
{state["research"]}"""
    response = await llm.ainvoke([HumanMessage(content=prompt)])

    # If we discovered a new pattern, save it
    if "pattern:" in response.content.lower():
        await memory.save("discovered_patterns", response.content[:1000])

    return {"analysis": response.content,
            "messages": [AIMessage(content="Analysis complete")]}
```
Assemble the graph with persistent learning
Connect nodes and track graph execution patterns for optimization.
```python
async def writer_node(state: AgentState) -> AgentState:
    """Writer that learns preferred output formats."""
    writing_preferences = await memory.recall("writing_style")

    prompt = f"""Create the final output based on analysis.

STYLE GUIDELINES:
{writing_preferences}

ANALYSIS:
{state["analysis"]}"""
    response = await llm.ainvoke([HumanMessage(content=prompt)])
    return {"messages": [AIMessage(content=response.content)]}


# Build the graph
graph = StateGraph(AgentState)
graph.add_node("researcher", researcher_node)
graph.add_node("analyst", analyst_node)
graph.add_node("writer", writer_node)
graph.add_edge("researcher", "analyst")
graph.add_edge("analyst", "writer")
graph.add_edge("writer", END)
graph.set_entry_point("researcher")
app = graph.compile()


# Track execution patterns for graph optimization
async def run_with_tracking(query: str):
    result = await app.ainvoke(
        {"messages": [HumanMessage(content=query)], "context": ""})

    # Log the successful execution path for future optimization
    await memory.save(
        topic="graph_execution_log",
        content=f"Query: {query[:100]}\nNodes: researcher->analyst->writer\nSuccess: True",
        priority="reference")
    return result
```
What You Get
- Cross-session graph learning: Each execution makes the entire graph smarter, with insights persisting across all future runs
- Node specialization: Individual agents become domain experts over time, accumulating knowledge specific to their role in the workflow
- Path optimization: The graph learns which execution paths work best for different query types, enabling dynamic optimization
- Semantic context retrieval: Any node can search for relevant past insights using natural language queries
- Version-tracked evolution: See how your graph's collective knowledge has grown over time with full version history
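The path-optimization point above can be sketched in plain Python. Everything here is hypothetical: the `path_stats` dict and `pick_path` helper stand in for success rates a router might derive from the `graph_execution_log` entries saved during tracking.

```python
def pick_path(query_type: str, path_stats: dict[str, dict[str, float]]) -> str:
    """Return the historically best node sequence for a query type."""
    candidates = path_stats.get(query_type, {})
    if not candidates:
        # Unknown query type: fall back to the full pipeline
        return "researcher->analyst->writer"
    # Pick the path with the highest recorded success rate
    return max(candidates, key=candidates.get)


# Hypothetical recalled stats: simple lookups skip the analyst reliably
stats = {
    "quick_fact": {
        "researcher->writer": 0.92,
        "researcher->analyst->writer": 0.81,
    },
}

short = pick_path("quick_fact", stats)   # shorter path wins on success rate
full = pick_path("deep_dive", stats)     # unseen type takes the default path
```

A router like this could feed a LangGraph conditional edge, letting the graph's own execution history steer future traversals.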
Ready to give LangGraph Multi-Agent a memory?
Join the waitlist and be the first to know when Stompy is ready. Your LangGraph Multi-Agent projects will never forget again.