Introduction to Cortex
Welcome to Cortex
Cortex is a plug'n'play persistent memory system for AI agents, powered by Convex. It brings enterprise-grade memory capabilities to any AI application, allowing your agents to remember, learn, and build context over time.
The Problem
Building AI agents with persistent memory is hard.
What Developers Really Need
AI agents need memory that is:
- Flexible - Remember anything without predefined schemas
- Persistent - Never forget, survive restarts and deployments
- Isolated - Each agent has private, secure storage
- Searchable - Find relevant memories using semantic search
- Fast - Sub-second retrieval even with millions of memories
- Scalable - Grow from prototype to millions of users
- Simple - Plug in and start using in minutes, not weeks
- Framework-agnostic - Work with any LLM or AI framework
The Cortex Solution
Cortex solves all these problems with a single, unified system. Get started in 2 commands:
Initialize your project:

$ cortex init my-agent

Start development:

$ cortex dev

The CLI handles everything:
- Zero configuration - Works out of the box
- Project scaffolding - Templates and examples included
- Multi-deployment - Local, staging, production
- Database operations - Backup, restore, statistics
- Interactive development - Live dashboard like Expo
For programmatic access, the SDK is just as simple:
import { Cortex, createAuthContext } from "@cortexmemory/sdk";

// Works with ANY auth system (Auth0, Clerk, Okta, WorkOS, custom JWT, etc.)
const cortex = new Cortex({
  convexUrl: process.env.CONVEX_URL,
  auth: createAuthContext({
    userId: yourUser.id, // From your existing auth
    tenantId: yourUser.tenantId, // Optional (for multi-tenant SaaS)
  }),
});

// Store a conversation
await cortex.memory.remember({
  memorySpaceId: "user-1-personal",
  conversationId: "conv-123",
  userMessage: "I prefer dark mode",
  agentResponse: "I'll remember that!",
  userName: "User",
  // userId auto-injected from auth
});

// Search works immediately
const memories = await cortex.memory.search(
  "user-1-personal",
  "what are the user preferences?",
);
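The memories returned by search are typically folded into the LLM prompt. Here is a minimal self-contained sketch of that step; the `Memory` shape is an illustrative assumption, not the SDK's actual return type:

```typescript
// Assumed shape for illustration only - check the SDK docs for the real type.
interface Memory {
  content: string;
}

// Build a system prompt that carries retrieved memories as context.
function buildPrompt(memories: Memory[], question: string): string {
  const context = memories.map((m, i) => `${i + 1}. ${m.content}`).join("\n");
  return [
    "You are an assistant with long-term memory.",
    "Relevant memories:",
    context,
    `User question: ${question}`,
  ].join("\n");
}
```

The resulting string can be sent as the system message of any chat-completion call, regardless of which LLM provider you use.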
Cortex is framework-agnostic - it works with whatever auth system you already use. Just extract userId and optionally tenantId; no complex integration is needed. See Authentication.
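In practice, "just extract userId" is a tiny adapter between your auth provider's session and the two fields Cortex needs. The session field names below (`sub`, `org_id`) are illustrative assumptions about a typical provider, not Cortex APIs:

```typescript
// Hypothetical session shape from an existing auth provider (e.g. a decoded JWT).
interface ExistingSession {
  sub: string;     // user id claim
  org_id?: string; // tenant/organization, if multi-tenant
}

// Map whatever your auth system returns to the fields Cortex expects.
function toCortexAuth(session: ExistingSession): { userId: string; tenantId?: string } {
  return { userId: session.sub, tenantId: session.org_id };
}
```

The returned object is what you would pass to `createAuthContext` from the snippet above.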
Key Capabilities
Flexible Memory
Remember ANY information without hardcoded topics or schemas. No predefined categories - agents learn naturally.
Semantic Search
AI-powered understanding, not just keyword matching. Multi-strategy retrieval with intelligent fallbacks.
Memory Space Isolation
Complete isolation per memory space with Hive and Collaboration modes.
Infinite Context
Recall from millions of past messages with up to 99% token reduction.
Context Chains
Hierarchical context sharing for multi-agent systems.
Graph Architecture
Built-in graph traversals and optional graph DB integration.
Hive Mode
Multiple AI tools share one memory space for MCP integration.
MCP Server
Cross-application memory for Cursor, Claude Desktop, and custom tools. (Planned)
See also: User Profiles, Fact Extraction, Analytics, Streaming, and Resilience Layer.
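The "up to 99% token reduction" figure for Infinite Context is easiest to see with back-of-envelope arithmetic; the numbers below are hypothetical, not measurements:

```typescript
// Hypothetical sizes: instead of replaying a full conversation history,
// only the top-k relevant memories are placed in the prompt.
const fullHistoryTokens = 200_000;  // entire past conversation
const retrievedMemories = 20;       // top-k memories recalled
const tokensPerMemory = 100;

const contextTokens = retrievedMemories * tokensPerMemory; // 2,000 tokens
const reduction = 1 - contextTokens / fullHistoryTokens;   // ~0.99
```

With these assumed sizes, the prompt carries 2,000 tokens instead of 200,000, a 99% reduction; longer histories with the same top-k retrieval push the ratio higher.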
Why Cortex?
Built on Convex
Cortex leverages Convex, a reactive backend platform that provides:
- ACID Transactions - Your data is always consistent
- Real-time Updates - Changes propagate instantly
- Vector Search - Built-in support for embeddings
- Serverless - Scales automatically, pay per use
- Type-safe - Full TypeScript support
- Developer Experience - Hot reload, time travel debugging
Flexible Deployment: Cortex works with Convex however you run it:
- Convex Cloud (recommended) - Fully managed, no ops required
- Local Development - npx convex dev for fast iteration
- Self-Hosted - Deploy Convex to your own infrastructure
Embedding-Agnostic
Unlike vector databases that lock you into their ecosystem, Cortex is embedding-agnostic:
// OpenAI
const embedding = await openai.embeddings.create({
  model: "text-embedding-3-large",
  input: text,
});

// Cohere
const embedding = await cohere.embed({
  texts: [text],
  model: "embed-english-v3.0",
});

// Or any local model
const embedding = await localModel.encode(text);
Store embeddings from any provider:
// Cortex doesn't care - just store it (Layer 2 for system memories)
await cortex.vector.store(memorySpaceId, {
  content: text,
  contentType: "raw",
  embedding: embedding.data[0].embedding,
  source: { type: "system", timestamp: new Date() },
  metadata: { importance: 50 },
});
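Whatever the provider, an embedding is just an array of numbers, so comparing two of them reduces to vector math. A self-contained sketch of cosine similarity, the standard metric behind semantic search (this is not Cortex's internal implementation):

```typescript
// Cosine similarity: 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

The only requirement this imposes is that query and stored embeddings come from the same model, so their vector spaces are comparable - which is why Cortex can stay embedding-agnostic.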
Framework-Agnostic
Works with any AI framework:
- LangChain - Drop-in memory replacement
- Vercel AI SDK - Middleware for automatic memory
- LlamaIndex - Compatible storage backend
- OpenAI Assistants - Vector store integration
- Custom - Use the core API directly
Open Source
Cortex is licensed under FSL-1.1-Apache-2.0 (Functional Source License):
- Free for internal use and applications that use Cortex
- Modify and distribute freely (non-competing uses)
- Explicit patent grant protection
- No vendor lock-in
- Community-driven development
- Each version becomes Apache 2.0 after 2 years
Design Principles
Cortex is built on these core principles:
- Developer Experience First - Intuitive APIs, full TypeScript support, clear docs, helpful errors
- Progressive Enhancement - Start simple, add complexity when needed
- Embedding-Agnostic - You choose your embedding model; Cortex just stores and retrieves
- Data Ownership - Your data stays in your Convex account, no external processing
- Production-Ready - ACID transactions, automatic scaling, security best practices
Architecture Overview
Direct Mode (Open Source)
Your code integrates with the Cortex SDK: memory operations, context chains, agent management, and user profiles, backed by Convex's vector store, document store, and real-time sync. You manage the deployment and pay Convex directly.
Cloud Mode (Managed Service)
Your code uses the same Cortex SDK API; the interface is identical, so you switch modes with configuration only. Cloud Mode adds an analytics dashboard, cost optimization, team collaboration, migration tools, and priority support. Your data stays in your account (using your credentials).
In both modes, your data lives in your Convex instance. Cortex Cloud adds management tools without moving your data.
Use Cases
Cortex powers a wide range of AI applications:
Chatbots & Assistants
Remember user preferences, conversation history, and context across sessions.
Multi-Agent Systems
Coordinate between specialized agents with hierarchical context sharing.
Code Assistants
Remember project structure, coding preferences, and past solutions.
RAG Pipelines
Store and retrieve relevant context for LLM prompts.
Knowledge Management
Organizational memory that grows over time.
Cortex is not an LLM, vector database, or Convex replacement. It's a complete memory system built on top of Convex that works with any AI framework. Your data stays in your Convex instance.