Cross-Agent Memory
One Project. Every AI.
Zero Re-Explaining.
Claude forgets. Gemini forgets. Codex forgets.
Stompy's starting to think this is a pattern.
MCP for native integration. CLI for your terminal. REST API for everything else. Three paths, one brain.
Your AIs don't talk to each other
ChatGPT remembers ChatGPT conversations. Claude remembers Claude conversations. Stompy remembers your project — regardless of which AI you're talking to.
Morning: Claude
You spend 45 minutes explaining your project architecture to Claude Code. It designs a beautiful API schema.
Afternoon: Gemini
You switch to Gemini CLI for a different task. You spend 45 minutes explaining the same architecture. Again.
Evening: Codex
Codex CLI needs to implement the API. You know the drill. 45 more minutes of context-setting. Same project. Third explanation.
You're not switching AIs. You're re-onboarding them. Every. Single. Time.
Three paths, one brain
Every path — MCP, CLI, or REST API — reads and writes to the same memory store. Lock context via MCP, recall it via CLI. Store via API, search via MCP. It all just works.
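To make the "one store, three doors" idea concrete, here is a minimal conceptual sketch. None of this is Stompy's real MCP, CLI, or API surface; it only models the claim that every path reads and writes the same memory.

```python
# Conceptual sketch only: Stompy's actual interfaces are not shown here.
# This models the claim that all three paths share one memory store.

class MemoryStore:
    """One backing store, keyed by topic."""

    def __init__(self):
        self._contexts = {}

    def lock_context(self, topic, content):
        # Save a piece of project context under a topic.
        self._contexts[topic] = content

    def recall(self, topic):
        # Fetch previously locked context by topic.
        return self._contexts.get(topic)

    def search(self, term):
        # Find every topic whose content mentions the term.
        return [t for t, c in self._contexts.items() if term in c]

# Three entry points, one store underneath.
store = MemoryStore()
mcp_path = store   # what an MCP tool call would hit
cli_path = store   # what the CLI in your terminal would hit
api_path = store   # what a REST request would hit

mcp_path.lock_context("auth_design", "JWT with refresh tokens")
print(cli_path.recall("auth_design"))  # locked via MCP, recalled via CLI
print(api_path.search("JWT"))          # ...and searchable via the API
```

Because every facade points at the same store, context saved in one session is available to any tool in the next, which is the whole point of the section above.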
One store. Every AI. Every session.
A day in the cross-agent workflow
Three AIs. One project. Every session builds on the last.
Claude designs the architecture and saves it to Stompy.
# Claude Code session — design the notification system
claude "design the notification system for our app"

# Claude saves its work via MCP:
# → lock_context(topic="notification_arch",
#     content="Event-driven architecture with:
#       - Redis pub/sub for real-time events
#       - PostgreSQL for notification history
#       - WebSocket delivery to connected clients
#       - Email fallback via Resend API")
# → lock_context(topic="notification_api",
#     content="Endpoints:
#       POST /api/v1/notifications — create notification
#       GET /api/v1/notifications — list for user
#       PUT /api/v1/notifications/:id/read — mark read")
Works with your stack
Every major AI coding tool connects to Stompy through at least one path.
REST API endpoints available at api.stompy.ai/api/v1
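As a rough illustration of the REST path, the sketch below builds (but does not send) a request against that base URL. The base URL comes from the line above; the `/memories` path, the bearer-token auth scheme, and the payload fields are assumptions for illustration, not Stompy's documented API.

```python
import json
import urllib.request

# Base URL from the docs above; everything below it is a hypothetical sketch.
BASE_URL = "https://api.stompy.ai/api/v1"

def build_store_request(topic, content, api_key):
    """Build (but don't send) a request that would store a memory via REST."""
    payload = json.dumps({"topic": topic, "content": content}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/memories",  # assumed endpoint name
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_store_request(
    "notification_arch",
    "Redis pub/sub + PostgreSQL history",
    "sk-example",
)
# To actually send it: urllib.request.urlopen(req)
```

Check the API reference for the real endpoint names, auth mechanism, and request schema before wiring this into a workflow.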