Optimized prompts meet persistent memory
Because optimized prompts deserve optimized memory
The Problem
DSPy is revolutionary. Write signatures, not prompts. Let the framework optimize everything. Your agents learn the best way to call tools, structure outputs, and chain reasoning.
But here's the thing about optimization: it's only as good as what you remember.
Your DSPy agent spends cycles optimizing how to interact with your codebase. It learns your patterns. It figures out the best chain-of-thought for your domain. Beautiful.
Then you restart. Optimization? Gone. Learned patterns? Evaporated. Your agent is back to square one, ready to re-learn everything it already knew.
It's like having a student who aces every test—but gets amnesia before the next class.
How Stompy Helps
Stompy gives DSPy agents memory that persists across optimization cycles.
Your agents don't just optimize—they accumulate:
- Project knowledge persists: Architecture decisions, code patterns, domain rules
- Optimizations compound: What worked yesterday informs today's prompts
- Context stays relevant: No re-explaining your codebase every session
- Learning accelerates: Start each session from where you left off
DSPy handles the "how to prompt." Stompy handles the "what to remember." Together, they're unstoppable.
Integration Walkthrough
Connect Stompy via SSE transport
DSPy has native MCP support. Use MCPClient with SSE transport and bearer auth.
```python
import dspy
from dspy.clients.mcp import MCPClient
import os

# Connect to Stompy via SSE
stompy = MCPClient(
    transport="sse",
    url="https://mcp.stompy.ai/sse",
    headers={"Authorization": f"Bearer {os.environ['STOMPY_TOKEN']}"},
)

# List available memory tools
tools = stompy.tools()
# → ['lock_context', 'recall_context', 'context_search', ...]
```
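Before wiring the tools into an agent, it can help to sanity-check what came back. A minimal sketch, assuming the handles returned by stompy.tools() behave like dspy.Tool objects with name and desc attributes (an assumption, not something the Stompy docs above state):

```python
# Sketch: inspect the memory tools before handing them to an agent.
# Assumes each handle exposes `name` and `desc`, like a dspy.Tool.
for tool in stompy.tools():
    print(f"{tool.name}: {tool.desc}")
```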
Create a ReAct agent with memory tools
Use dspy.ReAct with a signature that instructs the agent to use memory.
```python
lm = dspy.LM("anthropic/claude-sonnet-4-20250514")
dspy.configure(lm=lm)

# Signature tells agent how to use memory
class MemoryAgent(dspy.Signature):
    """Use lock_context to save important info.
    Use recall_context to retrieve past knowledge."""

    task: str = dspy.InputField()
    result: str = dspy.OutputField()

agent = dspy.ReAct(MemoryAgent, tools=stompy.tools())
result = agent(task="Help me refactor the auth module")
```
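Because the memory tools are just tools on a DSPy module, the agent still composes with DSPy's optimizers. Here's a minimal sketch using dspy.BootstrapFewShot; the trainset examples and the quality_metric are hypothetical placeholders, not part of Stompy:

```python
# Sketch: optimize the memory-aware agent like any other DSPy module.
# `trainset` and `quality_metric` are illustrative placeholders.
trainset = [
    dspy.Example(
        task="Write tests for the payment service",
        result="pytest suite with async fixtures and mocked external APIs",
    ).with_inputs("task"),
    # ... more examples from your own project
]

def quality_metric(example, prediction, trace=None):
    # Placeholder check: did the agent produce a non-empty answer?
    return bool(prediction.result and prediction.result.strip())

optimizer = dspy.BootstrapFewShot(metric=quality_metric)
optimized_agent = optimizer.compile(agent, trainset=trainset)

result = optimized_agent(task="Write tests for the shipping module")
```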
Agent saves optimized patterns with lock_context
When your agent discovers effective patterns, it saves them. These persist across sessions and inform future optimizations.
```python
# Session 1: Agent learns your testing patterns
result = agent(task="Write tests for the payment service")

# Agent saves what it learned:
# → lock_context(
#     topic="testing_patterns",
#     content="Project uses pytest with fixtures in conftest.py.
#              Async tests with pytest-asyncio. Mock external APIs
#              with responses library. Coverage threshold: 80%.",
#     priority="important",
#     tags="testing,patterns,pytest"
#   )
# → Creates v1.0
```
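Saved patterns aren't only retrievable by exact topic: the context_search tool listed earlier finds them by meaning. The trace below is an illustrative sketch in the same commented style; the query and the agent's reply are hypothetical:

```python
# Illustrative sketch: later, a vaguer question still lands on the
# saved knowledge via semantic search rather than keyword match.
result = agent(task="How do we mock third-party APIs in our tests?")

# Agent calls: context_search("mocking third-party APIs in tests")
# → matches "testing_patterns" (v1.0) by meaning, not keywords
# Agent: "You mock external APIs with the responses library; fixtures
#         live in conftest.py and async tests use pytest-asyncio."
```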
Future sessions start informed
In future sessions, your agent recalls learned patterns before optimizing. No cold starts.
```python
# Session 15: Agent remembers everything
result = agent(task="Add tests for the new shipping module")

# Agent calls: recall_context("testing_patterns") → v1.0
# Agent: "Writing pytest tests for shipping module using your
#         established patterns: async fixtures, responses for API mocks,
#         targeting 80% coverage. Here's the implementation..."

# No re-learning. No re-explaining. Just building.
```
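Stompy carries the project knowledge across sessions; DSPy itself can carry the compiled prompts. A minimal sketch, assuming you compiled an optimized_agent as in the earlier optimizer sketch and want the next process to start from both:

```python
# Sketch: persist the compiled DSPy program alongside Stompy's memory.
optimized_agent.save("memory_agent.json")

# --- next session / new process ---
agent = dspy.ReAct(MemoryAgent, tools=stompy.tools())
agent.load("memory_agent.json")  # restores the optimized prompts/demos

# Stompy restores project knowledge on the first recall_context call
result = agent(task="Add tests for the returns module")
```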
What You Get
- Automatic session handovers carry optimizations forward
- Semantic search (embeddings) finds patterns by meaning
- Delta evaluation ensures only novel learnings get stored
- Priority system (always_check) for critical project rules (illustrated in the sketch after this list)
- Version history tracks how optimizations evolved
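To make the last two bullets concrete, here's an illustrative trace in the same commented style as the walkthrough: a critical rule gets locked with always_check priority and the version history ticks forward. The specific rule and version numbers are hypothetical:

```python
# Illustrative sketch: a critical rule saved with always_check priority.
result = agent(task="Remember: never call the live payments API in tests")

# → lock_context(
#     topic="testing_patterns",
#     content="Never hit the live payments API from tests; always use
#              the responses-based mocks.",
#     priority="always_check",
#     tags="testing,rules"
#   )
# → Updates testing_patterns to v1.1 (v1.0 kept in version history)
```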
Ready to give DSPy a memory?
Join the waitlist and be the first to know when Stompy is ready. Your DSPy projects will never forget again.