# Vercel AI SDK Integration

**Package:** `@cortexmemory/vercel-ai-provider`
**Version:** 0.1.0
**Status:** Production Ready ✅

A complete integration with the Vercel AI SDK for Next.js applications.
## Quick Links
- Getting Started - Step-by-step setup tutorial
- API Reference - Complete API documentation
- Advanced Usage - Custom configurations and patterns
- Memory Spaces - Multi-tenancy guide
- Hive Mode - Cross-application memory sharing
- Migration from mem0 - Switch from mem0
- Troubleshooting - Common issues and solutions
## Overview

The Cortex Memory Provider for the Vercel AI SDK enables automatic persistent memory for AI applications built with Next.js and the Vercel AI SDK.

### Key Features
- 🧠 Automatic Memory - Search and store without manual steps
- 🚀 Edge Compatible - Works in Vercel Edge Functions
- 📦 TypeScript Native - Built for TypeScript from the ground up
- 🔒 Self-Hosted - Deploy Convex anywhere, no vendor lock-in
- 🎯 Memory Spaces - Multi-tenant isolation built-in
- 🐝 Hive Mode - Share memory across applications
- ⚡ Streaming Support - Native streaming with Cortex SDK v0.9.0+
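Memory Spaces and Hive Mode are two sides of the same knob: distinct `memorySpaceId` values isolate tenants, while a shared value lets applications pool memory. A minimal sketch of per-tenant isolation — the `memorySpaceFor` helper and its naming scheme are illustrative, not part of the package:

```typescript
// Illustrative helper (not part of the package): derive a stable,
// isolated memory space ID per tenant so memories never leak
// between customers in a multi-tenant app.
function memorySpaceFor(tenantId: string): string {
  return `tenant-${tenantId}`;
}

// Each tenant gets its own space. Hive Mode, by contrast, would
// point several applications at the same space ID to share memory.
const acmeSpace = memorySpaceFor('acme');
const globexSpace = memorySpaceFor('globex');
```

The space ID you derive here is what you would pass as `memorySpaceId` when creating the provider.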
## Quick Start

Install the provider together with its peer dependencies:

```bash
npm install @cortexmemory/vercel-ai-provider @cortexmemory/sdk ai convex
```

Then wrap any AI SDK model with memory:

```typescript
import { createCortexMemory } from '@cortexmemory/vercel-ai-provider';
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const cortexMemory = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: 'my-chatbot',
  userId: 'user-123',
});

const result = await streamText({
  model: cortexMemory(openai('gpt-4-turbo')),
  messages: [{ role: 'user', content: 'What did I tell you earlier?' }],
});
```

That's it! Relevant memories are searched before the call and the conversation is stored after the response, with no extra code.
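In a Next.js App Router project, the wrapped model drops into an ordinary route handler. A minimal sketch, assuming a hypothetical `app/api/chat/route.ts` and AI SDK v4's `result.toDataStreamResponse()` (older SDK versions name this method differently):

```typescript
// app/api/chat/route.ts — hypothetical route; adjust paths and env to your app.
import { createCortexMemory } from '@cortexmemory/vercel-ai-provider';
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export const runtime = 'edge'; // the provider is edge-compatible

export async function POST(req: Request) {
  const { messages, userId } = await req.json();

  // One provider instance per request, scoped to the calling user,
  // so each user's memories stay separate.
  const cortexMemory = createCortexMemory({
    convexUrl: process.env.CONVEX_URL!,
    memorySpaceId: 'my-chatbot',
    userId,
  });

  const result = await streamText({
    model: cortexMemory(openai('gpt-4-turbo')),
    messages,
  });

  return result.toDataStreamResponse();
}
```

Reading `userId` straight from the request body is for brevity only; in production you would take it from your auth session.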
## What Gets Built
This integration provides:
- Memory-Augmented Models - Wrap any AI SDK provider with memory
- Automatic Context Injection - Relevant memories added to prompts
- Automatic Storage - Conversations stored after responses
- Manual Control - Search, remember, clear methods available
- Edge Runtime Support - Works in serverless/edge environments
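Automatic context injection conceptually prepends retrieved memories to the prompt before the model sees it. A sketch of that step — the `Memory` shape and the formatting are illustrative assumptions, not the package's actual internals:

```typescript
// Illustrative shape; the provider's real memory records may differ.
interface Memory {
  content: string;
  score: number; // relevance score from memory search
}

// Sketch of automatic context injection: fold the top-scoring
// memories into a system-message preamble for the model.
function injectContext(memories: Memory[], limit = 5): string {
  const top = [...memories]
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((m) => `- ${m.content}`)
    .join('\n');
  return top ? `Relevant things you remember about this user:\n${top}` : '';
}
```

The provider does this for you on every call; the sketch only shows the idea behind it.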
## Examples

See the `examples/` directory for:
- next-chat - Basic chat with persistent memory
- next-rag - RAG pattern combining documents + conversation memory
- next-multimodal - Multi-modal chat with memory (placeholder)
- hive-mode - Cross-application memory sharing (placeholder)
- memory-spaces - Multi-tenant SaaS (placeholder)
## Package Source

Package source code lives in `packages/vercel-ai-provider/`.
## See Also
- [Cortex SDK Documentation](/README) - Main Cortex documentation
- [Streaming Support](/core-features/streaming-support) - SDK streaming features
- [Memory Spaces](/core-features/memory-spaces) - Core memory space features
- [Hive Mode](/core-features/hive-mode) - Core hive mode features