Weaviate+Stompy

AI-native search with AI-native memory

The Problem

Weaviate thinks AI-first. Built for machine learning from the ground up—multimodal vectors, GraphQL API, modules for everything from text2vec to img2vec. It's the vector database that speaks fluent AI.

But AI-native search still lacks AI-native memory.

Your Weaviate instance indexes images, text, and custom objects brilliantly. It finds similar items across modalities. What it can't do is remember that last week you decided to use these search results for a specific purpose, or that the authentication docs it's returning contradict the JWT approach you settled on yesterday.

Every query to Weaviate is a fresh start. The hybrid search is smart. The contextual awareness is zero.

You've built a sophisticated search system that can find anything in your corpus. But "finding similar documents" and "finding what's actually useful for your project" are different problems. Weaviate solves the first one brilliantly. The second one requires a different kind of memory entirely.

AI-native search deserves AI-native project memory.

How Stompy Helps

Stompy gives Weaviate the project context layer that turns great search into useful search.

Your AI-native architecture gains project awareness:

  • Multimodal context: Whether you're searching text, images, or custom vectors, Stompy adds project context that helps interpret results
  • Query evolution: Track how your searches evolve over time, building a map of what your project actually needs
  • Cross-modal decisions: When image search informs text decisions (or vice versa), those connections persist
  • Semantic search squared: Weaviate finds semantically similar documents; Stompy finds contextually relevant project knowledge

The combination is powerful: Weaviate's AI-native search capabilities paired with Stompy's AI-native memory. Your search results come with context about why they matter to your specific situation.

Search that understands your corpus meets memory that understands your project.

Integration Walkthrough

1

Connect Weaviate with project context

Set up Weaviate for document search and Stompy for project memory—two complementary systems.

import weaviate
import httpx
import os

# Weaviate for AI-native document search
weaviate_client = weaviate.Client(
    url="http://localhost:8080",
    additional_headers={"X-OpenAI-Api-Key": os.environ["OPENAI_API_KEY"]}
)

# Stompy for project memory
async def get_stompy_context(query: str) -> dict:
    async with httpx.AsyncClient() as client:
        # Semantic search for relevant project context
        response = await client.post(
            "https://mcp.stompy.ai/sse",
            headers={"Authorization": f"Bearer {os.environ['STOMPY_TOKEN']}"},
            json={"tool": "context_search", "query": query, "limit": 3}
        )
        return response.json()
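
Step 2 queries a Document class with title, content, and category fields. If you haven't defined that class yet, a minimal schema sketch could look like the following (the class name and properties mirror the query below; the text2vec-openai vectorizer is an assumption based on the OpenAI key configured above):

# Minimal schema sketch for the "Document" class used in step 2.
# Assumes the class does not exist yet; adjust the vectorizer to match
# your own Weaviate module configuration.
document_class = {
    "class": "Document",
    "vectorizer": "text2vec-openai",
    "properties": [
        {"name": "title", "dataType": ["text"]},
        {"name": "content", "dataType": ["text"]},
        {"name": "category", "dataType": ["text"]},
    ],
}
weaviate_client.schema.create_class(document_class)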
2

Hybrid search with project context

Combine Weaviate's hybrid search (dense + sparse vectors) with Stompy's project knowledge for truly intelligent retrieval.

async def intelligent_search(user_query: str):
    # Get relevant project context from Stompy
    project_context = await get_stompy_context(user_query)

    # Hybrid search in Weaviate (BM25 + vector similarity)
    weaviate_results = (
        weaviate_client.query
        .get("Document", ["title", "content", "category"])
        .with_hybrid(query=user_query, alpha=0.5)
        .with_limit(5)
        .do()
    )
    documents = weaviate_results["data"]["Get"]["Document"]

    # Build context-aware response
    prompt = f"""Project Context (from previous work):
{project_context.get('contexts', [])}

Search Results (from documentation):
{documents}

User Question: {user_query}

Provide an answer that considers both our project's specific decisions
and the retrieved documentation:"""
    return call_llm(prompt)
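
The snippet above assumes a call_llm helper; any chat-completion client works. Here's one minimal sketch using the OpenAI Python SDK (the model name is a placeholder, not a requirement of the integration):

from openai import OpenAI

# Hypothetical helper referenced by intelligent_search(); swap in whatever
# LLM client your stack already uses.
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

def call_llm(prompt: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content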
3

Track multimodal search patterns

When searching across modalities (text, images, etc.), save insights about what combinations work for your project.

from datetime import datetime

async def save_search_insight(modality: str, query: str, useful_results: list):
    """Track which searches were useful for future reference."""
    async with httpx.AsyncClient() as client:
        await client.post(
            "https://mcp.stompy.ai/sse",
            headers={"Authorization": f"Bearer {os.environ['STOMPY_TOKEN']}"},
            json={
                "tool": "lock_context",
                "topic": f"search_patterns_{modality}",
                "content": f"""Search Query: {query}
Modality: {modality}
Useful Results: {useful_results}
Timestamp: {datetime.now().isoformat()}
This search pattern was effective for our project.""",
                "tags": f"search,{modality},patterns"
            }
        )

# After finding useful images for UI design (called from an async context)
await save_search_insight(
    modality="image",
    query="dashboard layout dark mode",
    useful_results=["result_id_1", "result_id_2"]
)
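
Because each insight is stored as ordinary project context, you can pull it back later with the same get_stompy_context helper from step 1. A quick sketch (the query string mirrors the search_patterns_image topic saved above):

import asyncio

async def recall_image_search_patterns() -> dict:
    # Retrieve the insights saved by save_search_insight above
    return await get_stompy_context("search_patterns_image")

patterns = asyncio.run(recall_image_search_patterns())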

What You Get

  • AI-native architecture: Both Weaviate and Stompy are built for modern AI workflows
  • Multimodal context: Project memory that works whether you're searching text, images, or custom objects
  • Hybrid search enhanced: Combine BM25 + vectors + project context for triple-layer relevance
  • Query pattern evolution: Track how your information needs change as your project grows
  • GraphQL-friendly: Both systems play well with modern API architectures

Ready to give Weaviate a memory?

Join the waitlist and be the first to know when Stompy is ready. Your Weaviate projects will never forget again.