Vercel AI SDK Integration
Package: @cortexmemory/vercel-ai-provider | Version: 0.27.2 | SDK: v0.21.0+ | Status: Production Ready
Complete integration with Vercel AI SDK for Next.js applications with full memory orchestration capabilities.
Quickstart Demo (Recommended)
The fastest way to get started is with our interactive quickstart demo:
From a clone of the repository:
$ cd packages/vercel-ai-provider/quickstart
$ npm install && npm run dev
Or scaffold a new app with the Cortex CLI:
$ cortex init my-app --template vercel-ai-quickstart
$ cd my-app && cortex start
Open http://localhost:3000 to see:
- Real-time Memory Orchestration - Watch data flow through all 7 Cortex layers
- Layer Flow Visualization - Memory Space → User → Agent → Conversation → Vector → Facts → Graph
- Memory Space Switching - Live demonstration of multi-tenant isolation
- Streaming with Progressive Storage - Real-time fact extraction during streaming
- Belief Revision - See facts update when the user changes their mind (SDK v0.24.0+)
Documentation
- Getting Started - Step-by-step setup tutorial
- API Reference - Complete API documentation
- Advanced Usage - Graph memory, fact extraction, and custom configurations
- Memory Spaces - Multi-tenancy guide
- Hive Mode - Cross-application memory sharing
- Migration from mem0 - Guide for switching from mem0
- Troubleshooting - Common issues and solutions
Overview
The Cortex Memory Provider for Vercel AI SDK enables automatic persistent memory for AI applications built with Next.js and the Vercel AI SDK.
Key Features
- Automatic Memory - Retrieves context before responses and stores conversations after
- Zero Configuration - Works out of the box with sensible defaults
- TypeScript Native - Built for TypeScript from the ground up
- Self-Hosted - Deploy Convex anywhere; no API keys or vendor lock-in
- Edge Compatible - Works in Vercel Edge Functions and Cloudflare Workers
- Memory Spaces - Isolate memory by user, team, or project
Additional Features:
- Hive Mode - Share memory across multiple agents/applications
- ACID Guarantees - Never lose data with Convex transactions
- Semantic Search - Find relevant memories with embeddings
- Fact Extraction - LLM-powered fact extraction for 60-90% storage savings (SDK v0.18.0+)
- Graph Memory - Optional Neo4j/Memgraph integration (SDK v0.19.0+)
- Enhanced Streaming - Progressive storage, real-time hooks, and metrics
- Belief Revision - Intelligent fact updates when information changes (SDK v0.24.0+)
Quick Start
Installation
$ npm install @cortexmemory/vercel-ai-provider @cortexmemory/sdk ai convex
Basic Example
import { createCortexMemory } from "@cortexmemory/vercel-ai-provider";
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
const cortexMemory = createCortexMemory({
convexUrl: process.env.CONVEX_URL!,
memorySpaceId: "my-chatbot",
userId: "user-123",
userName: "User",
// REQUIRED in SDK v0.17.0+
agentId: "my-assistant",
agentName: "My AI Assistant",
// Optional: Enable graph memory (auto-configured via env vars)
enableGraphMemory: process.env.CORTEX_GRAPH_SYNC === "true",
// Optional: Enable fact extraction (auto-configured via env vars)
enableFactExtraction: process.env.CORTEX_FACT_EXTRACTION === "true",
// Optional: Enhanced streaming features
streamingOptions: {
storePartialResponse: true,
progressiveFactExtraction: true,
enableAdaptiveProcessing: true,
},
});
const result = await streamText({
model: cortexMemory(openai("gpt-4o-mini")),
messages: [{ role: "user", content: "What did I tell you earlier?" }],
});
That's it! Memory is automatically orchestrated across all layers.
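In a Next.js app, the wrapped model is typically called from a route handler. Below is a minimal sketch, assuming a recent AI SDK version where the streamText result exposes toDataStreamResponse(); the route path is illustrative:
// app/api/chat/route.ts (illustrative path)
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
// cortexMemory created as in the example above

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = await streamText({
    model: cortexMemory(openai("gpt-4o-mini")),
    messages,
  });
  // Stream the memory-augmented model output back to the client
  return result.toDataStreamResponse();
}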
Since SDK v0.17.0, agentId is required for all user-agent conversations. See Breaking Changes below.
For async initialization (e.g., when Convex client needs to be created asynchronously), use createCortexMemoryAsync instead of createCortexMemory. See the API Reference for details.
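A minimal sketch of the async variant, assuming createCortexMemoryAsync takes the same options as createCortexMemory and resolves to the same wrapper:
import { createCortexMemoryAsync } from "@cortexmemory/vercel-ai-provider";

// Assumed signature: same options as createCortexMemory, awaited
const cortexMemory = await createCortexMemoryAsync({
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: "my-chatbot",
  userId: "user-123",
  agentId: "my-assistant",
});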
Key Capabilities
This integration provides:
- Memory-Augmented Models - Wrap any AI SDK provider with memory
- Full Orchestration - Automatic multi-layer memory storage:
  - Conversation storage (ACID-safe)
  - Vector embeddings (semantic search)
  - Fact extraction (structured knowledge)
  - Graph sync (entity relationships)
- Automatic Context Injection - Relevant memories added to prompts
- Manual Control - search, remember, and clear methods available (see the sketch after this list)
- Edge Runtime Support - Works in serverless/edge environments
- Real-time Visualization - Layer observer for UI integration
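For manual control, here is a hypothetical sketch built from the method names listed above (search, remember, clear); the exact signatures may differ, so consult the API Reference:
// Hypothetical signatures — check the API Reference for the real ones
const hits = await cortexMemory.search("favorite color", { limit: 5 });
await cortexMemory.remember("User prefers dark mode");
await cortexMemory.clear(); // remove this user's memories in this space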
Breaking Changes
agentId Required (v0.17.0+)
Since SDK v0.17.0, all user-agent conversations require an agentId:
// Old way (will throw error)
const cortexMemory = createCortexMemory({
convexUrl: process.env.CONVEX_URL!,
memorySpaceId: "my-chatbot",
userId: "user-123",
});
// New way (v0.17.0+)
const cortexMemory = createCortexMemory({
convexUrl: process.env.CONVEX_URL!,
memorySpaceId: "my-chatbot",
userId: "user-123",
agentId: "my-assistant", // Required!
});
Why? Cortex tracks conversation participants to power features like agent-to-agent memory sharing and proper attribution.
Advanced Features
Graph Memory (v0.19.0+)
Sync memories to Neo4j or Memgraph for relationship queries:
const cortexMemory = createCortexMemory({
// ... base config
enableGraphMemory: true,
// Auto-configured from env vars:
// NEO4J_URI, NEO4J_USERNAME, NEO4J_PASSWORD
// or MEMGRAPH_URI, MEMGRAPH_USERNAME, MEMGRAPH_PASSWORD
});
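Once synced, the graph can also be queried directly with the official neo4j-driver. The node labels and relationship pattern below are hypothetical; check the Graph Memory docs for the actual schema:
import neo4j from "neo4j-driver";

const driver = neo4j.driver(
  process.env.NEO4J_URI!,
  neo4j.auth.basic(process.env.NEO4J_USERNAME!, process.env.NEO4J_PASSWORD!)
);

const session = driver.session();
try {
  // Hypothetical schema — adjust labels/relationships to the real one
  const result = await session.run(
    "MATCH (u:User {id: $userId})-[r]->(e) RETURN type(r) AS rel, e LIMIT 10",
    { userId: "user-123" }
  );
  for (const record of result.records) {
    console.log(record.get("rel"), record.get("e"));
  }
} finally {
  await session.close();
  await driver.close();
}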
Automatic Fact Extraction (v0.18.0+)
LLM-powered extraction of structured facts:
const cortexMemory = createCortexMemory({
// ... base config
enableFactExtraction: true,
// Auto-configured from env vars:
// CORTEX_FACT_EXTRACTION=true
// CORTEX_FACT_EXTRACTION_MODEL=gpt-4o-mini
});
Layer Observation (for Visualization)
Watch data flow through all layers in real-time:
const cortexMemory = createCortexMemory({
// ... base config
layerObserver: {
onLayerUpdate: (event) => {
// event.layer: 'memorySpace' | 'user' | 'agent' | 'conversation' | 'vector' | 'facts' | 'graph'
// event.status: 'pending' | 'in_progress' | 'complete' | 'error'
// event.latencyMs: number
updateVisualization(event);
},
onOrchestrationComplete: (summary) => {
console.log(`Total time: ${summary.totalLatencyMs}ms`);
},
},
});
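updateVisualization is left undefined in the snippet above. One way to implement it, assuming a simple in-memory status map your UI re-renders from:
// Sketch: track the latest status per layer for a layer-flow UI
const layerStatus = new Map<string, { status: string; latencyMs?: number }>();

function updateVisualization(event: {
  layer: string;
  status: string;
  latencyMs?: number;
}) {
  layerStatus.set(event.layer, { status: event.status, latencyMs: event.latencyMs });
  // Trigger a re-render of the layer-flow visualization from layerStatus
}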
Enhanced Streaming
Progressive storage and real-time monitoring:
const cortexMemory = createCortexMemory({
// ... base config
streamingOptions: {
storePartialResponse: true,
partialResponseInterval: 3000,
progressiveFactExtraction: true,
progressiveGraphSync: true,
enableAdaptiveProcessing: true,
},
streamingHooks: {
onChunk: (event) => console.log("Chunk:", event.chunk),
onProgress: (event) => console.log("Progress:", event.bytesProcessed),
onComplete: (event) => console.log("Done:", event.durationMs),
},
});
Environment Variables
Configure features via environment variables:
# Required
CONVEX_URL=https://your-project.convex.cloud
OPENAI_API_KEY=sk-...
# Fact Extraction (SDK v0.18.0+)
CORTEX_FACT_EXTRACTION=true
CORTEX_FACT_EXTRACTION_MODEL=gpt-4o-mini
# Graph Memory (SDK v0.19.0+)
CORTEX_GRAPH_SYNC=true
NEO4J_URI=bolt://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=your-password
Package Source
- Package: packages/vercel-ai-provider/
- Quickstart Demo: packages/vercel-ai-provider/quickstart/
- NPM: @cortexmemory/vercel-ai-provider