What's New
Track the latest features, improvements, and fixes across the entire Cortex ecosystem.
This is the complete changelog for all Cortex packages. Individual package CHANGELOG.md files have been consolidated here for easier tracking.
January 2026
All Packages v0.33.0 · Jan 20, 2026
Artifacts API - Interactive Document Management
📦 New Artifacts API for managing interactive, versioned documents
🔄 Full streaming support with pause/resume capabilities
⏪ Version history with undo/redo functionality
🐍 Full Python SDK parity
New Features
Artifacts API
A complete system for managing interactive, versioned digital documents that AI agents can create, modify, and share:
- Core CRUD Operations: Create, read, update, delete artifacts with full type safety
- Version History: Track changes with `undo()`, `redo()`, `getHistory()`, and `getVersion()`
- Streaming Support: Real-time content streaming with state management (draft → streaming → paused → final)
- Multi-tenancy: Built-in tenant isolation for SaaS applications
- File Storage: Support for binary artifacts with Convex storage integration
TypeScript SDK
// Create an artifact
const artifact = await cortex.artifacts.create({
memorySpaceId: 'user-123-space',
title: 'Code Snippet',
content: 'function hello() { return "world"; }',
kind: 'code',
tags: ['javascript'],
});
// Version control
await cortex.artifacts.undo(artifact.artifactId);
const history = await cortex.artifacts.getHistory(artifact.artifactId);
Python SDK
# Create an artifact
artifact = await cortex.artifacts.create(
CreateArtifactOptions(
memory_space_id="user-123-space",
title="Meeting Notes",
content="# Standup Notes\n\n- Discussed roadmap...",
kind="text",
)
)
# Streaming
await cortex.artifacts.start_streaming(
StartStreamingParams(artifact_id=artifact.artifact_id)
)
Vercel AI SDK Integration
New AI tools for artifact management within chat interfaces:
- `createArtifact` - Create artifacts via AI
- `updateArtifact` - Modify existing artifacts
- `appendToArtifact` - Stream content to artifacts
- React hooks: `useArtifacts()` and `useArtifact()`
Changes
- Updated `@ai-sdk/openai` to ^3.0.13
- Updated `@ai-sdk/anthropic` to ^3.0.16
- Updated `@types/react` to ^19.2.9
- Updated `framer-motion` to ^12.27.5
Vercel AI Provider v0.32.0 · Jan 20, 2026
Native Reasoning Panel - Memory Layer Visualization
🧠 New createLayerStreamObserver() helper reduces API route boilerplate from ~40 lines to ~5 lines
⚛️ New useLayerTracking React hook for client-side layer state management
📦 New /react subpath export for React-specific utilities
New Features
Server-Side: createLayerStreamObserver()
A helper function that creates an OrchestrationObserver pre-wired to emit layer events to a Vercel AI SDK stream writer:
import { createLayerStreamObserver, createCortexMemoryAsync } from '@cortexmemory/vercel-ai-provider';
import { createUIMessageStream, createUIMessageStreamResponse } from 'ai';
export async function POST(req: Request) {
const { observer, emitTo } = createLayerStreamObserver();
return createUIMessageStreamResponse({
stream: createUIMessageStream({
execute: async ({ writer }) => {
emitTo(writer);
const cortexMemory = await createCortexMemoryAsync({
layerObserver: observer,
// ...config
});
// Stream response...
},
}),
});
}
Client-Side: useLayerTracking Hook
A React hook that manages layer tracking state and provides a handleDataPart callback for useChat:
import { useLayerTracking } from '@cortexmemory/vercel-ai-provider/react';
import { useChat } from '@ai-sdk/react';
function ChatComponent() {
const { layers, isOrchestrating, handleDataPart } = useLayerTracking();
const { messages, sendMessage } = useChat({
onData: handleDataPart,
});
return (
<>
{isOrchestrating && <MemoryLoadingIndicator />}
<LayerVisualization layers={layers} />
<Messages messages={messages} />
</>
);
}
Event Types
Three transient stream events are emitted during memory orchestration:
- `data-orchestration-start` - When orchestration begins
- `data-layer-update` - For each layer status change (pending → in_progress → complete)
- `data-orchestration-complete` - When all layers finish
Reduction in Boilerplate
| Component | Before | After |
|---|---|---|
| API Route layer observer setup | ~40 lines | ~5 lines |
| Client-side data parsing | ~30 lines | 1 line (handleDataPart) |
| Layer state management | ~100 lines | 0 (included in hook) |
| Total | ~170 lines | ~6 lines |
Quickstart Template Updated
The Vercel AI Quickstart template now uses these new helpers out of the box:
- `app/api/chat-v6/route.ts` uses `createLayerStreamObserver()`
- `lib/layer-tracking.ts` re-exports from `@cortexmemory/vercel-ai-provider/react`

Run `cortex update --sync-template` to get the updated quickstart files.
CLI v0.32.0 · Jan 20, 2026
Auto-Discovery of Unregistered Apps
🔍 cortex update now auto-discovers apps in deployment directories
📝 Prompts to register discovered apps for template sync
🚀 Apps created before the registration system are now detected
New Features
Auto-Discovery
The update command now scans deployment directories for Cortex apps that aren't registered in ~/.cortexrc. This helps users who created apps before the .cortex registration system was introduced.
Detected patterns:
- `quickstart/` folders with Next.js + `@cortexmemory/vercel-ai-provider`
- Any subdirectory with `@cortexmemory/sdk` in its dependencies
$ cortex update --sync-template
Found 1 unregistered app(s) in deployment directories:
• my-deployment-quickstart (vercel-ai-quickstart) at /path/to/quickstart
? Register these apps for template sync? › (Y/n)
✓ Registered my-deployment-quickstart (vercel-ai-quickstart)
Once registered, apps receive template updates via --sync-template.
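The discovery check essentially inspects each candidate directory's package.json for Cortex dependencies. A minimal sketch of that predicate (illustrative only; `isCortexApp` and `PackageJson` are hypothetical names, and the CLI's real scanner also matches the `quickstart/` folder layout described above):

```typescript
interface PackageJson {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}

// A directory looks like a Cortex app if its package.json depends on
// the SDK or the Vercel AI provider (in deps or devDeps).
function isCortexApp(pkg: PackageJson): boolean {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  return (
    "@cortexmemory/sdk" in deps ||
    "@cortexmemory/vercel-ai-provider" in deps
  );
}
```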
All Packages v0.31.1 · Jan 20, 2026
Dependency Updates
📦 Updated all dependencies to latest versions across the monorepo
🔄 TypeScript SDK, CLI, Vercel AI Provider, and Python SDK
Changes
npm packages updated:
- `@types/node` ^25.0.7 → ^25.0.9
- `@typescript-eslint/eslint-plugin` ^8.53.0 → ^8.53.1
- `@typescript-eslint/parser` ^8.53.0 → ^8.53.1
- `convex` ^1.31.4 → ^1.31.5
- `@ai-sdk/openai` ^3.0.9 → ^3.0.12
- `@ai-sdk/anthropic` ^3.0.12 → ^3.0.15
- `@ai-sdk/react` ^3.0.32 → ^3.0.44
- `ai` ^6.0.30 → ^6.0.42
- `@hono/node-server` ^1.19.8 → ^1.19.9
- `framer-motion` ^12.26.1 → ^12.27.1
- `next` ^16.1.1 → ^16.1.4
Version bumps:
- `@cortexmemory/sdk` 0.31.0 → 0.31.1
- `@cortexmemory/cli` 0.31.0 → 0.31.1
- `@cortexmemory/vercel-ai-provider` 0.29.0 → 0.29.1
- `cortex-memory` (Python) 0.31.0 → 0.31.1
CLI v0.31.0 · Jan 15, 2026
Adaptive Batch Sizing for cortex db clear
🔄 Automatic batch size reduction on Convex 16MB read limit errors
📉 Starts at 10,000 records, reduces to 500 → 250 → 50 on failures
🧹 Suppresses Convex SDK error noise during retry operations
✅ Reliably clears tables with large embeddings (memories, facts)
Bug Fix
cortex db clear fails on tables with embeddings
The cortex db clear command would fail with "Too many bytes read in a single function execution (limit: 16777216 bytes)" when clearing tables containing records with vector embeddings (memories and facts tables).
Root Cause: Each memory/fact record can contain a 1536-dimension float64 embedding (~12KB per record). With a batch size of 1000 records, this alone approaches the 16MB Convex read limit before accounting for content, metadata, and previousVersions.
Solution: Implemented adaptive batch sizing that:
- Starts optimistically with 10,000 records per batch
- On Convex "Server Error" (wraps byte limit error), automatically reduces batch size
- Retry sequence: 10,000 → 500 → 250 → 50
- Continues clearing with smaller batches until complete
- Suppresses Convex SDK console.error output during retries for clean CLI output
// New batch size sequence
const BATCH_SIZE_SEQUENCE = [10000, 500, 250, 50] as const;
The fix is resilient to varying record sizes and automatically adapts without requiring manual configuration.
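The retry loop can be sketched as follows. This is a minimal illustration of the adaptive strategy, not the CLI's actual implementation; `deleteBatch` is a hypothetical callback that deletes up to `size` records, returns the count deleted, and throws when Convex's read limit is hit:

```typescript
// Batch sizes to try, largest first (the sequence shown above)
const BATCH_SIZE_SEQUENCE: readonly number[] = [10000, 500, 250, 50];

// Returns the next smaller batch size to retry with, or null when exhausted.
function nextBatchSize(current: number): number | null {
  const idx = BATCH_SIZE_SEQUENCE.indexOf(current);
  return idx >= 0 && idx < BATCH_SIZE_SEQUENCE.length - 1
    ? BATCH_SIZE_SEQUENCE[idx + 1]
    : null;
}

// Drives deletion with adaptive sizing: shrink the batch when a read-limit
// error is thrown, and keep going until the table reports empty.
function clearTable(deleteBatch: (size: number) => number): number {
  let size: number | null = BATCH_SIZE_SEQUENCE[0];
  let total = 0;
  while (size !== null) {
    try {
      const deleted = deleteBatch(size);
      if (deleted === 0) break; // table is empty
      total += deleted;
    } catch {
      size = nextBatchSize(size); // read limit hit: shrink and retry
    }
  }
  return total;
}
```

Because the failed batch is simply retried at a smaller size, no progress is lost when the limit is hit mid-run.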
TypeScript SDK v0.31.0 · Jan 14, 2026
Configurable Recall Limits - Prevent 16MB Read Errors
⚙️ New RecallLimits interface for granular control over recall operations
🔧 Environment variable configuration with per-call overrides
📊 Per-source limits for memories, facts, and graph traversal
🛡️ Eliminates Convex "Too many bytes read" errors in large memory spaces
New Features
Configurable Recall Limits
The recall() API now accepts granular limits to control data retrieval across all sources:
const result = await cortex.memory.recall({
memorySpaceId: "user-123-space",
query: "user preferences",
limits: {
memories: 20, // Max vector memories to fetch
facts: 15, // Max facts to fetch
graphHops: 2, // Max graph traversal depth
graphEntitiesPerHop: 5, // Entities to expand per hop
graphResultsPerEntity: 3, // Results per entity from graph
total: 30, // Final result cap after merge/rank
},
});
Configuration Hierarchy
Limits can be configured at three levels (highest priority first):
1. Per-call overrides - `RecallParams.limits`
2. Environment variables - `CORTEX_RECALL_*`
3. SDK defaults - Hardcoded sensible defaults
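The merge order can be illustrated with a small sketch. This is an assumed shape for demonstration, not the SDK's actual `src/config.ts` (only three of the limits are shown, and the env record is a parameter so the logic stays testable):

```typescript
interface RecallLimits {
  memories?: number;
  facts?: number;
  total?: number;
}

// SDK defaults (lowest priority)
const RECALL_DEFAULTS: Required<RecallLimits> = { memories: 20, facts: 15, total: 30 };

// Env var name for each limit (middle priority)
const ENV_KEYS: Record<keyof RecallLimits, string> = {
  memories: "CORTEX_RECALL_LIMIT_MEMORIES",
  facts: "CORTEX_RECALL_LIMIT_FACTS",
  total: "CORTEX_RECALL_LIMIT_TOTAL",
};

// Per-call overrides win over env vars, which win over defaults.
function resolveRecallLimits(
  perCall: RecallLimits = {},
  env: Record<string, string | undefined> = process.env
): Required<RecallLimits> {
  const resolved = { ...RECALL_DEFAULTS };
  for (const key of Object.keys(ENV_KEYS) as (keyof RecallLimits)[]) {
    const fromEnv = env[ENV_KEYS[key]];
    if (fromEnv !== undefined && !Number.isNaN(Number(fromEnv))) {
      resolved[key] = Number(fromEnv); // env var overrides default
    }
    if (perCall[key] !== undefined) {
      resolved[key] = perCall[key]!; // per-call wins over everything
    }
  }
  return resolved;
}
```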
New Environment Variables
| Variable | Default | Description |
|---|---|---|
| `CORTEX_RECALL_LIMIT_MEMORIES` | 20 | Max vector memories per recall |
| `CORTEX_RECALL_LIMIT_FACTS` | 15 | Max facts per recall |
| `CORTEX_RECALL_GRAPH_HOPS` | 2 | Graph traversal depth |
| `CORTEX_RECALL_GRAPH_ENTITIES_PER_HOP` | 5 | Entities expanded per hop |
| `CORTEX_RECALL_GRAPH_RESULTS_PER_ENTITY` | 3 | Results fetched per entity |
| `CORTEX_RECALL_LIMIT_TOTAL` | 30 | Final result cap |
Backward Compatibility
The legacy limit parameter is still supported as an alias for limits.total:
// Legacy (still works)
await cortex.memory.recall({ ..., limit: 25 });
// New (recommended)
await cortex.memory.recall({ ..., limits: { total: 25 } });
Technical Details
New Types in src/types/index.ts:
export interface RecallLimits {
memories?: number;
facts?: number;
graphHops?: number;
graphEntitiesPerHop?: number;
graphResultsPerEntity?: number;
total?: number;
}
export interface RecallParams {
// ... existing fields ...
limits?: RecallLimits;
limit?: number; // Legacy alias for limits.total
}
New Config Module src/config.ts:
- `RECALL_DEFAULTS` - Centralized defaults with env var support
- `resolveRecallLimits()` - Merges per-call, env, and defaults
Bug Fix:
Removed dangerous .collect() fallback in Convex queries that could fetch unlimited data. All queries now properly respect limits.
TypeScript SDK v0.30.1 · Jan 14, 2026
Enriched Entity Extraction & Bidirectional Graph Traceability
🏷️ LLM extraction now returns typed entities (person, organization, place, product, concept)
🔗 New EXTRACTED_WITH edge links Facts to source Memory for full bidirectional traversal
📊 Entity relations (subject-predicate-object triples) sync to graph as typed edges
🔋 Batteries-included: all entity/relation enrichment automatic when graph sync is enabled
New Features
Enriched Entity Extraction
The LLM fact extraction prompt now extracts named entities with semantic types:
// Example extracted fact with enriched entities
{
fact: "Sarah works at Acme Corp in San Francisco",
factType: "knowledge",
confidence: 0.95,
entities: [
{ name: "Sarah", type: "person" },
{ name: "Acme Corp", type: "organization" },
{ name: "San Francisco", type: "place" }
],
relations: [
{ subject: "Sarah", predicate: "works_at", object: "Acme Corp" },
{ subject: "Acme Corp", predicate: "located_in", object: "San Francisco" }
]
}
Bidirectional Fact-Memory Traceability
New EXTRACTED_WITH edge enables traversing from Facts back to their source Memory:
Memory ──[REFERENCES]──► Conversation
  ▲
  │ EXTRACTED_WITH (NEW)
  │
Fact ──[EXTRACTED_FROM]──► Conversation
  │
  └──[MENTIONS]──► Entity ──[works_at]──► Entity
This enables queries like "find all facts extracted from memories about topic X".
Technical Details
New Types in src/llm/index.ts:
- `ExtractedEntity` - Entity with name, type, and optional fullValue
- `ExtractedRelation` - Subject-predicate-object triple for graph edges
- `ExtractedFact.entities` - Array of extracted entities
- `ExtractedFact.relations` - Array of relation triples
Graph Sync Changes:
- `syncFactRelationships()` now creates `EXTRACTED_WITH` edge to Memory node
- Entity nodes created with semantic `type` property
- Relation predicates become typed graph edges (e.g., `WORKS_AT`, `LOCATED_IN`)
API Changes:
- `ConflictCandidate` interface updated to include entities/relations
- `StoreFactParams` already supports entities/relations (no change needed)
- No breaking changes to public API
TypeScript SDK v0.30.0 · Jan 13, 2026
Semantic Search for Facts + True Batteries-Included Embeddings
🔍 Native vector/embedding search directly on the facts table
🔋 NEW: Configure embedding once at SDK init - auto-generates everywhere!
🧠 Automatic embedding generation during remember() and recall()
🧪 New extreme multi-turn conversation stress tests
New Features
Batteries-Included Embedding Configuration (v0.30.0+)
Configure embedding provider once - SDK auto-generates embeddings for recall() queries and remember() facts:
// Configure once at SDK init
const cortex = new Cortex({
convexUrl: process.env.CONVEX_URL!,
embedding: {
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY,
},
});
// recall() auto-generates embeddings - no manual code!
const result = await cortex.memory.recall({
memorySpaceId: "user-space",
query: "What colors does the user like?",
// Embedding is auto-generated from query!
});
// remember() auto-generates embeddings for facts - no manual code!
await cortex.memory.remember({
memorySpaceId: "user-space",
userMessage: "My favorite color is purple",
agentResponse: "Got it!",
// Facts are auto-embedded!
});
Or use environment variables for zero-config:
export CORTEX_EMBEDDING=true
export OPENAI_API_KEY=sk-...
Semantic Fact Search
Facts now support native vector search via embeddings:
// Direct: Use semanticSearch() for facts
const facts = await cortex.facts.semanticSearch(memorySpaceId, embedding, {
minConfidence: 80,
limit: 20,
});
Extreme Stress Tests
New comprehensive stress test suite (tests/stress/multi-turn-chaos.test.ts):
- Forgetful User: 50+ repeated questions testing deduplication
- Indecisive User: 30+ preference changes testing supersession chains
- Topic Flooder: 100+ similar memories testing semantic search precision
- Combined Chaos: 100+ turn ultimate stress test
- Parallel Chaos: 5 concurrent users testing isolation
Technical Details
Schema Changes
- Added `embedding` field to facts table (optional, `float64[]`)
- Added `by_embedding` vector index with 1536 dimensions (OpenAI-compatible)
API Changes
- `StoreFactParams.embedding` - optional embedding for fact storage
- `UpdateFactInput.embedding` - optional embedding for fact updates
- `facts.semanticSearch()` - new method for vector-based fact retrieval
- `recall()` - automatically uses semantic fact search when an embedding is available
Integration
- Vercel AI provider automatically benefits from semantic fact search
- No changes required to quickstart template
TypeScript SDK v0.29.1 · Jan 10, 2026
Belief Revision Heuristic Improvements
🧠 Improved decision accuracy when LLM is not configured
🎯 Same subject + same object now correctly returns UPDATE/NONE
🔄 Same subject + different object + related predicate returns SUPERSEDE
➕ Different predicate classes correctly return ADD (no false supersessions)
Fixed
Belief Revision Default Heuristics
When no LLM is configured, the getDefaultDecision() heuristic now correctly handles all 4 decision types:
| Scenario | Decision | Description |
|---|---|---|
| Empty memory space | ADD | No conflicts to check |
| Different subject + low similarity | ADD | Unrelated facts |
| Different predicate class | ADD | Independent facts (e.g., "favorite color" vs "favorite food") |
| High similarity + higher confidence | UPDATE | Refine existing fact |
| Same subject + same object + higher confidence | UPDATE | Confirm/strengthen existing |
| Same subject + same object + same/lower confidence | NONE | Already captured |
| Exact duplicate | NONE | Skip redundant storage |
| Same subject + different object + related predicate | SUPERSEDE | Preference changed (e.g., blue → purple) |
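The decision table above can be condensed into a small function. This is an illustrative sketch of the heuristic's shape, not the SDK's actual `getDefaultDecision()` signature (the comparison flags are hypothetical inputs standing in for the SDK's similarity checks):

```typescript
type Decision = "ADD" | "UPDATE" | "NONE" | "SUPERSEDE";

interface Comparison {
  sameSubject: boolean;
  sameObject: boolean;
  relatedPredicate: boolean; // same predicate class, e.g. both about "favorite color"
  newConfidenceHigher: boolean;
}

// Mirrors the table: unrelated facts ADD, confirmations UPDATE or NONE,
// contradictions on a related predicate SUPERSEDE.
function defaultDecision(c: Comparison): Decision {
  if (!c.sameSubject || !c.relatedPredicate) return "ADD"; // independent facts
  if (c.sameObject) return c.newConfidenceHigher ? "UPDATE" : "NONE";
  return "SUPERSEDE"; // same subject, related predicate, different object
}
```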
Key Improvements:
- Predicate similarity check - Facts with different predicate classes (e.g., "favorite color" vs "favorite food") no longer incorrectly supersede each other, even when they share the same subject.
- Object comparison for same-subject facts - When the subject matches:
  - Same object → UPDATE (higher confidence) or NONE (same/lower confidence)
  - Different object + related predicate → SUPERSEDE
- Comprehensive test coverage - 27 integration tests now verify all decision paths explicitly.
Technical Details
- Added `arePredicatesRelated()` helper function to check predicate similarity
- Enhanced `getDefaultDecision()` to consider subject, object, AND predicate relationships
- New test file: `tests/facts-revision-decisions.test.ts` with exhaustive edge case coverage
TypeScript SDK v0.29.0 · Jan 9, 2026
Automatic Fact and Entity Graph Sync
🔗 Graph sync now automatic when CORTEX_GRAPH_SYNC=true
📊 Facts extracted via remember() automatically sync to graph
🏷️ Entity nodes created with MENTIONS relationships
⚠️ Breaking: syncToGraph option removed from all APIs
Major Changes
Automatic Graph Synchronization
Graph database synchronization for facts and entities is now automatic when CORTEX_GRAPH_SYNC=true:
- Facts stored via `remember()`, `facts.store()`, or `facts.revise()` automatically sync to graph
- Entity nodes are created from the `fact.entities` array
- MENTIONS relationships link Fact nodes to Entity nodes
- Predicate-based relationships (e.g., WORKS_AT, KNOWS) created from `fact.relations`
- SUPERSEDES relationships created when belief revision supersedes facts
- Graph sync is gated entirely by the environment variable
Breaking Change
The syncToGraph option has been removed from all APIs:
// Before (v0.28.x)
await cortex.facts.store(params, { syncToGraph: true });
// After (v0.29.0+) - automatic when CORTEX_GRAPH_SYNC=true
await cortex.facts.store(params);
APIs affected:
- `cortex.facts.store()`, `update()`, `delete()`
- `cortex.memory.remember()`, `forget()`, `delete()`
- `cortex.vector.store()`, `update()`, `delete()`
- `cortex.conversations.create()`, `addMessage()`, `delete()`
- `cortex.contexts.create()`, `update()`, `delete()`
- `cortex.memorySpaces.register()`
Migration: Remove { syncToGraph: true/false } from all API calls. Set CORTEX_GRAPH_SYNC=true in your environment to enable graph sync.
Technical Details
- `BeliefRevisionService.executeDecision()` now calls `syncFactToGraph()` and `syncFactRelationships()` after all fact operations
- All layer APIs check `if (this.graphAdapter)` instead of `if (options?.syncToGraph && this.graphAdapter)`
- Graph sync is non-blocking - failures are logged but don't fail the main operation
Vercel AI Provider v0.29.0 · Jan 10, 2026
Automatic Graph Sync Compatibility
🔗 Removed deprecated syncToGraph option from all memory operations
📊 Graph sync now automatic when enableGraphMemory=true with configured adapter
🧪 Updated tests to reflect new automatic sync behavior
Breaking Changes
- Removed `syncToGraph` option - Graph sync is now automatic:

// Before (v0.28.x)
await cortexMemory.remember("Hello", "Hi", { syncToGraph: true });
// After (v0.29.0+) - automatic when enableGraphMemory is configured
await cortexMemory.remember("Hello", "Hi");
Changed
- `CortexMemoryProvider.doGenerate()` no longer passes `syncToGraph` to `remember()`
- `CortexMemoryProvider.doStream()` no longer passes `syncToGraph` to `rememberStream()`
- Manual `remember()` method no longer forwards the `syncToGraph` option
- `ManualRememberOptions.syncToGraph` marked as deprecated (ignored if passed)
Fixed
- Quickstart template: Fixed neo4j-driver/rxjs bundling issues with Next.js
- Quickstart template: Added proper webpack externals configuration
- Quickstart template: Fixed AI SDK v6 type compatibility in memory-agent.ts
CLI v0.29.0 · Jan 10, 2026
Enhanced Update Display & Graph Template Improvements
📊 Version transitions now show current → latest format
🎨 Color-coded display: green (up-to-date), yellow (outdated)
🔗 Basic template now uses async initialization for graph support
✅ Accurate graph status messages based on actual configuration
Changed
- Update command version display - `cortex update` now shows version transitions clearly:
  - Up-to-date packages shown in green: `1.0.0`
  - Outdated packages show transition: `1.0.0 → 1.1.0` (yellow → green)
  - Not installed packages shown dimmed (update only affects installed packages)
  - Applies to all package types: SDK, Provider, Convex, Vercel AI
  - Works in both multi-deployment dashboard and single deployment/app views
- Basic template graph initialization - Now uses `Cortex.create()` for automatic graph configuration:
  - Async initialization enables auto-configuration from environment variables
  - `CORTEX_GRAPH_SYNC=true` with `NEO4J_URI` or `MEMGRAPH_URI` auto-connects the graph adapter
  - Graph sync is automatic on `remember()` calls (no `syncToGraph` option needed)
Fixed
- Dev-linked apps now refresh on update - In dev mode, apps with `file:...` references are no longer skipped:
  - Previously, dev-linked apps showed "Everything is up to date!" even when the source changed
  - Now always runs `npm install` to pick up local SDK source changes
  - Fixes scenario: bump local SDK version → `cortex update --dev` → changes reflected
- Accurate graph status messages - Template now checks for both flag AND URI:
  - Shows `✓ Graph memory connected (auto-sync active)` when fully configured
  - Shows `ℹ Graph sync enabled but no database URI configured` when URI is missing
  - Previously showed a misleading "enabled" message even without a database URI
Example Output
● my-deployment
Path: /path/to/project
SDK: 0.28.0 → 0.29.0
Convex: 1.31.0
CLI v0.28.1 · Jan 6, 2026
Automatic Shell Tab Completion
⌨️ Auto-installs completions for zsh, bash, and fish shells
✨ Dynamic completion for deployment and app names
🧹 Clean removal on uninstall via preuninstall script
Added
- Automatic shell tab completion - Tab completion auto-installs during `npm install -g @cortexmemory/cli`:
  - Completes all commands, subcommands, and options with descriptions
  - Dynamic completion of deployment and app names from `~/.cortexrc`
  - Completion scripts installed to `~/.cortex/completions/`
  - Source line added automatically to shell RC files (idempotent)
  - Clean removal via preuninstall script on `npm uninstall -g`
  - Manual fallback: `cortex completion <zsh|bash|fish>` outputs the completion script
Changed
- Package version display now shows "Vercel AI" instead of "AI" for clarity in `cortex update` output
TypeScript SDK v0.28.0 · Jan 5, 2026
Basic Template & Query Performance Fixes
🎯 New headless template with dual CLI/server modes
⚡ Fixed "too many bytes" error in stats queries
🧪 Complete test suite with E2E coverage
New Basic Template
Complete headless demo of Cortex Memory SDK with both CLI and HTTP server modes:
- Dual-mode operation - Interactive CLI (`npm start`) or REST API server (`npm run server`)
- Optional LLM integration - Works with or without an OpenAI API key
- Rich console output - Animated spinners and memory orchestration visualization
- Layer observer - Real-time display of all memory layers
- Full test suite - Unit, integration, and E2E tests included

CLI Commands:
- `/recall <query>` - Search memories without storing
- `/facts` - List all stored facts
- `/history` - Show conversation history
- `/new` - Start a new conversation
- `/config` - Show current configuration
Fixed
Query Performance - Resolved "Too many bytes read" error in agents:computeStats:
// Before: Full table scans hitting 16MB limit
const memories = await ctx.db.query("memories").collect();
// After: Indexed queries with sampling
const SAMPLE_LIMIT = 1000;
const memories = await ctx.db
.query("memories")
.withIndex("by_participantId", (q) => q.eq("participantId", args.agentId))
.take(SAMPLE_LIMIT);
- Uses proper indexes for better performance
- Limits results to 1000 per query
- Returns `isApproximate: true` when sampled
Upgrade: Run `npx convex deploy` after updating.
CLI v0.28.0 · Jan 5, 2026
Basic Template Tracking & Sessions Support
📁 Basic template projects now tracked in CLI config
🗃️ Sessions and factHistory tables added to db commands
📊 Database stats now cover all 13 tables
Added
- Basic template tracking in CLI config - Basic template projects are now registered in `cortex.config.json`:
  - `cortex init` automatically registers basic projects in the `apps` section
  - `cortex update --sync-template` now works with basic template projects
  - `cortex config list` shows basic projects alongside quickstart apps
- Sessions and factHistory table support - `cortex db clear` and `cortex db stats` now include all Convex tables:
  - `sessions` - Native session management table
  - `factHistory` - Belief revision audit trail table
  - Statistics now include counts for all 13 tables
Changed
- Added `"basic"` to the `AppType` union type for template app tracking
- `cortex db clear` now clears 13 tables (was 11)
TypeScript SDK v0.27.2 · Jan 1, 2026
V6 Route Feature Parity Fix
🔧 /api/chat-v6 route now has full feature parity with v5
🧪 Comprehensive E2E tests for quickstart
✅ Fact extraction and belief revision working in v6
Fixed
/api/chat-v6 route now has full feature parity with the v5 route using createCortexMemoryAsync:
- ✅ Memory recall (pre-call context injection)
- ✅ Memory storage (post-call conversation saving)
- ✅ Fact extraction (`enableFactExtraction`)
- ✅ Belief revision (superseding outdated facts)
- ✅ Embedding generation for semantic search
- ✅ Layer observer for real-time UI updates
Added
Comprehensive E2E tests for the quickstart covering:
- Fact storage verification
- Belief revision (updating preferences through conversation)
- Memory recall across conversations
- V5/V6 parity validation
- Conversation lifecycle (create, list, delete)
TypeScript SDK v0.27.1 · Jan 1, 2026
AI SDK v6 Agent Architecture Support
🤖 Full integration with Vercel AI SDK v6's ToolLoopAgent
🎯 Type-safe callOptionsSchema for runtime configuration
🔌 createMemoryPrepareCall for automatic memory injection
New Exports
import {
createCortexCallOptionsSchema, // Type-safe call options
CortexCallOptions,
createMemoryPrepareCall, // Memory injection via prepareCall
MemoryInjectionConfig,
isV6Available, // v6 feature detection
InferAgentUIMessage, // Type inference for UI messages
} from "@cortexmemory/vercel-ai-provider";
Usage with ToolLoopAgent
import { ToolLoopAgent } from "ai";
const memoryAgent = new ToolLoopAgent({
model: "openai/gpt-4o-mini",
instructions: "You are a helpful assistant with long-term memory.",
callOptionsSchema: createCortexCallOptionsSchema(),
prepareCall: createMemoryPrepareCall({
convexUrl: process.env.CONVEX_URL!,
maxMemories: 20,
}),
});
await memoryAgent.generate({
prompt: "Hello!",
options: { userId: "u1", memorySpaceId: "app1" },
});
Auto-Detection
The quickstart automatically detects AI SDK version and routes appropriately:
- AI SDK v6: Uses `/api/chat-v6` with `ToolLoopAgent`
- AI SDK v5: Uses `/api/chat` with `streamText`
CLI v0.27.3 · Jan 1, 2026
Neo4j Encrypted URI Scheme Support
🔐 Graph database setup now accepts all neo4j-driver URI schemes
🔒 Support for TLS with system CA validation (+s suffix)
📜 Support for self-signed certificates (+ssc suffix)
Added
- Neo4j encrypted URI scheme support - All neo4j-driver URI schemes are now accepted:
  - `bolt://`, `bolt+s://`, `bolt+ssc://` (direct connections)
  - `neo4j://`, `neo4j+s://`, `neo4j+ssc://` (routing/cluster connections)
  - `+s` suffix for TLS with system CA validation
  - `+ssc` suffix for TLS with self-signed certificate acceptance
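The accepted schemes reduce to a single pattern. A hedged sketch of the check (the CLI's actual validation code may differ; `isValidGraphUri` is an illustrative name):

```typescript
// Matches bolt:// and neo4j:// plus their +s / +ssc encrypted variants.
const NEO4J_URI_PATTERN = /^(bolt|neo4j)(\+s(sc)?)?:\/\//;

function isValidGraphUri(uri: string): boolean {
  return NEO4J_URI_PATTERN.test(uri);
}
```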
Changed
- Updated Docker Compose graph configuration to support optional TLS
- Added SSL policy configuration for Neo4j bolt connector
December 2025
Python SDK v0.27.0 · Dec 28, 2025
Multi-Tenancy & Auth Context System
🔐 Complete multi-tenancy with automatic tenantId propagation
📱 New Sessions API for multi-session management
👤 User profile schemas with validation presets
New Auth Module (cortex.auth)
from cortex.auth import create_auth_context
from cortex import AuthContext, AuthMethod
auth = create_auth_context(
user_id='user-123',
tenant_id='tenant-acme',
organization_id='org-engineering',
session_id='sess-abc',
auth_provider='auth0',
auth_method='oauth',
claims={'roles': ['admin', 'editor']},
)
cortex = Cortex(CortexConfig(
convex_url=os.getenv("CONVEX_URL"),
auth=auth,
))
# All operations automatically scoped to tenant
await cortex.memory.remember(...)
await cortex.conversations.create(...)
await cortex.facts.store(...)
Sessions API
session = await cortex.sessions.create(CreateSessionParams(
user_id='user-123',
tenant_id='tenant-456',
metadata={'device': 'Chrome on macOS'},
))
await cortex.sessions.touch(session.session_id)
active = await cortex.sessions.get_active('user-123')
await cortex.sessions.end(session.session_id)
User Profile Schemas
| Preset | Required Fields | Email Validation | Max Size |
|---|---|---|---|
| `strict` | displayName, email | ✓ | 64KB |
| `standard` | displayName | ✓ | 256KB |
| `minimal` | displayName | ✗ | None |
| `none` | None | ✗ | None |
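The presets reduce to a small validation rule; sketched here for illustration (TypeScript for consistency with the rest of this page, though this release is the Python SDK; the actual validator, and the names `PRESETS` and `validateProfile`, are assumptions, and the size check is an approximate byte count):

```typescript
interface ProfilePreset {
  required: string[];
  validateEmail: boolean;
  maxBytes: number | null; // null = unlimited
}

// The preset table above, as data.
const PRESETS: Record<string, ProfilePreset> = {
  strict: { required: ["displayName", "email"], validateEmail: true, maxBytes: 64 * 1024 },
  standard: { required: ["displayName"], validateEmail: true, maxBytes: 256 * 1024 },
  minimal: { required: ["displayName"], validateEmail: false, maxBytes: null },
  none: { required: [], validateEmail: false, maxBytes: null },
};

// Returns a list of validation errors; empty means the profile is valid.
function validateProfile(preset: string, profile: Record<string, unknown>): string[] {
  const rules = PRESETS[preset];
  const errors: string[] = [];
  for (const field of rules.required) {
    if (!(field in profile)) errors.push(`missing required field: ${field}`);
  }
  if (rules.validateEmail && typeof profile.email === "string" && !/^[^@\s]+@[^@\s]+$/.test(profile.email)) {
    errors.push("invalid email format");
  }
  if (rules.maxBytes !== null && JSON.stringify(profile).length > rules.maxBytes) {
    errors.push("profile exceeds size limit");
  }
  return errors;
}
```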
TypeScript SDK v0.27.0 · Dec 27, 2025
Multi-Tenancy & Authentication Context
🏢 Complete multi-tenancy support for SaaS platforms
🔐 Automatic tenantId propagation across all APIs
📱 New Sessions API with governance integration
AuthContext Integration
const cortex = new Cortex({
convexUrl: process.env.CONVEX_URL,
auth: {
userId: "user-123",
tenantId: "tenant-acme",
sessionId: "sess-abc",
authMethod: "clerk",
authenticatedAt: Date.now(),
claims: { role: "admin" },
},
});
// All operations automatically scoped to tenant
await cortex.memory.remember({...});
await cortex.conversations.create({...});
await cortex.facts.store({...});
Sessions API
const session = await cortex.sessions.create({
userId: "user-123",
metadata: { device: "mobile", ip: "..." },
});
await cortex.sessions.touch(session.sessionId);
const activeSessions = await cortex.sessions.getActive("user-123");
await cortex.sessions.expireIdle({ maxIdleMs: 30 * 60 * 1000 });
Key Features
- ✅ Automatic TenantId Propagation
- ✅ Sessions API for multi-session management
- ✅ Auth Validators for format validation
- ✅ Framework-Agnostic (Auth0, Clerk, NextAuth, Firebase)
- ✅ Graph Integration with tenant boundaries
- ✅ GDPR Compatible cascade deletion
CLI v0.27.2 · Dec 28, 2025
Multi-Deployment Update Command
🔄 cortex update now checks all enabled deployments by default
📊 Color-coded version status table
🎯 Sequential updates with summary
Added
- Multi-deployment update command - `cortex update` now checks all enabled deployments:
  - Displays a status table with latest SDK/Convex versions and each deployment's current versions
  - Color-coded display (green = up to date, yellow = needs update)
  - Prompts to confirm updating all deployments that need updates
  - Sequential updates with a summary at the end
  - `-d, --deployment <name>` flag for single-deployment mode
CLI v0.27.1 · Dec 27, 2025
App Lifecycle Management
🛑 Stop command detects and stops running template apps
🔍 Port-based process detection fallback
📊 Enhanced status dashboard with app information
Added
- App lifecycle management in `cortex stop`:
  - `-a, --app <name>` option to stop a specific app
  - `--apps-only` flag to stop only apps (skip Convex/graph)
  - Apps tracked via PID files (`.cortex-app-{name}.pid`)
- Port-based process detection when PID files don't exist:
  - Detects Convex on port 3210 for local deployments
  - Detects apps by their configured port (default 3000)
- Enhanced `cortex status` dashboard:
  - Displays running apps with PID and port information
  - Shows detection method (via PID file or via port)
CLI v0.27.0 · Dec 26, 2025
Vercel AI Quickstart Integration
🚀 Optional demo app installation during cortex init
📱 Template apps management and tracking
⚡ Default enabled for init-created resources
Added
- Vercel AI Quickstart integration - Optional demo app installation:
  - Installs as a `/quickstart` subfolder
  - Full Next.js app with chat interface and real-time memory visualization
  - Auto-configured with Convex URL and OpenAI API key
- Template apps management:
  - New `apps` section in config (`~/.cortexrc`)
  - Apps shown in `cortex config list`
  - `cortex start` automatically starts enabled apps
- Default enabled - Deployments and apps created by `cortex init` are enabled by default
TypeScript SDK v0.26.1 · Dec 26, 2025
Vercel AI SDK v6.0 Support
✅ Extended peer dependencies to accept ai v6.x
🔧 No breaking changes - fully backward compatible
Changed
- Extended peerDependencies - `@cortexmemory/vercel-ai-provider` now accepts `ai` versions `^3.0.0 || ^4.0.0 || ^5.0.0 || ^6.0.0`
- Users on Vercel AI SDK v6.x will no longer see peer dependency warnings
Create v0.26.0 · Dec 23, 2025
Cortex CLI Integration
🔧 Optional @cortexmemory/cli installation during setup
📜 Adds CLI scripts to generated package.json
✨ Enhanced success messages with CLI commands
Added
- Cortex CLI Integration:
  - New optional step to install `@cortexmemory/cli` during project setup
  - Automatic CLI installation as a dev dependency when selected
  - Adds scripts: `npm run cortex`, `cortex:setup`, `cortex:stats`, `cortex:spaces`
- User Experience:
  - Updated configuration summary shows CLI installation status
  - Enhanced success message with CLI commands when installed
TypeScript SDK v0.26.0 · Dec 23, 2025
Enhanced Belief Revision - Subject+FactType Matching
🧠 New pipeline stage catches conflicts missed by pattern matching
🔋 "Batteries included" mode - works without LLM configuration
🔧 Fixed SUPERSEDE and UPDATE actions
New Pipeline Stage
NEW FACT → [Slot Match] → [Semantic Match] → [Subject+Type Match] → [LLM/Heuristic] → Execute
                                          │
                         Same subject AND factType? → Candidate for review
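Conceptually, Stage 2.5 just widens the candidate set: any stored fact sharing both subject and factType with the incoming fact is flagged for review, even when slot and semantic matching found nothing. A self-contained sketch (field names follow the examples in this changelog, not the SDK's internal types):

```typescript
interface Fact {
  subject: string;
  factType: string;
  fact: string;
}

// Stage 2.5: same subject AND factType → conflict candidate for the
// downstream LLM/heuristic decision stage.
function subjectTypeCandidates(incoming: Fact, stored: Fact[]): Fact[] {
  return stored.filter(
    (f) => f.subject === incoming.subject && f.factType === incoming.factType
  );
}
```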
Key Improvements
- ✅ Subject+FactType Matching (Stage 2.5) - Catches conflicts with same subject AND factType
- ✅ Batteries-Included Mode - Works WITHOUT LLM using `getDefaultDecision()` heuristics
- ✅ Fixed SUPERSEDE Action - Now uses the `facts.supersede` mutation properly
- ✅ Fixed UPDATE Action - Uses `updateInPlace` to avoid creating unwanted versions
// Works WITHOUT LLM configuration
const cortex = new Cortex({ convexUrl: "..." });
await cortex.memory.remember({
  memorySpaceId: "user-space",
  userMessage: "Actually, I prefer purple now",
  agentResponse: "Got it!",
  userId: "user-123",
});
// Old "blue" fact properly SUPERSEDED
Python SDK v0.26.0 · Dec 23, 2025
OrchestrationObserver API
📊 Real-time monitoring of remember() and remember_stream() pipeline
🐛 Fixed user_id propagation in fact extraction
🧠 Subject+FactType matching for belief revision
OrchestrationObserver
class MyObserver:
    def on_orchestration_start(self, orchestration_id: str) -> None:
        print(f"Starting: {orchestration_id}")

    def on_layer_update(self, event: LayerEvent) -> None:
        print(f"Layer {event.layer}: {event.status} ({event.latency_ms}ms)")

    def on_orchestration_complete(self, summary: OrchestrationSummary) -> None:
        print(f"Done in {summary.total_latency_ms}ms")

result = await cortex.memory.remember(
    RememberParams(..., observer=MyObserver())
)
Bug Fixes
- Fixed `user_id`, `participant_id`, and `source_ref` not propagating to facts during belief revision
- Fixed SUPERSEDE action to use the dedicated `facts:supersede` mutation
- Fixed UPDATE action to use `facts:updateInPlace`
CLI v0.26.2 · Dec 25, 2025
Non-Interactive Convex Setup
🤖 Init wizard sets up Convex without interactive prompts
🔄 Three streamlined setup paths
🔑 Automatic Convex login handling
Added
- Non-interactive Convex setup in `cortex init`:
  - Automatically detects Convex authentication status
  - Retrieves the team slug automatically from login status
  - Uses Convex CLI flags for seamless setup
- Three streamlined setup paths:
  - Local development - Cloud project with local backend (recommended)
  - Cloud project - New cloud deployment with full features
  - Existing project - Connect to an existing Convex deployment
CLI v0.26.1 · Dec 25, 2025
Environment Variable Fixes
🔧 Fixed cortex dev overwriting local deployment configs
🔑 Fixed OpenAI API key not saved during init
Fixed
- `cortex dev` overwriting `.env.local` - Inherited `CONVEX_*` variables were polluting child processes
- OpenAI API key not saved - Init wizard now correctly saves the configured key
CLI v0.26.0 · Dec 23, 2025
Secure Password Generation
🔐 Cryptographically secure passwords for Neo4j/Memgraph
🔑 OpenAI API key setup during init
Added
- Secure password generation - `cortex init` generates 20-character passwords for graph databases
- OpenAI API key setup - New optional step with `sk-` prefix validation
TypeScript SDK v0.24.0 · Dec 20, 2025
Belief Revision System
🧠 Intelligent fact management preventing duplicates
🔄 Slot-based, semantic, and LLM-based conflict resolution
📜 Complete audit trail with fact history
The Problem Solved
Previously, fact storage was append-only:
- Conflicting facts accumulated
- No semantic understanding of when facts should update vs. add
- No history of how knowledge evolved
Now with Belief Revision
const result = await cortex.facts.revise({
  memorySpaceId: "user-123-space",
  fact: {
    fact: "User prefers purple",
    subject: "user-123",
    predicate: "favorite color",
    object: "purple",
    confidence: 90,
  },
});
console.log(result.action); // "SUPERSEDE"
console.log(result.reason); // "Color preference has changed"
New API Methods
| Method | Purpose |
|---|---|
| `facts.revise()` | Full belief revision pipeline |
| `facts.checkConflicts()` | Preview conflicts without executing |
| `facts.supersede()` | Manually supersede one fact with another |
| `facts.history()` | Get change history for a fact |
| `facts.getSupersessionChain()` | Get lineage of fact versions |
| `facts.getActivitySummary()` | Analytics on fact changes |
Python SDK v0.24.0 · Dec 19, 2025
Belief Revision System
🧠 Intelligent fact management with conflict resolution
🔄 Pipeline: Slot matching → Semantic → LLM resolution
📜 Fact history and audit trail
Usage
result = await cortex.facts.revise(ReviseParams(
    memory_space_id="agent-1",
    fact=ConflictCandidate(
        fact="User prefers purple",
        subject="user-123",
        predicate="favorite color",
        object="purple",
        confidence=90,
    ),
))
print(f"Action: {result.action}") # SUPERSEDE
print(f"Reason: {result.reason}") # "Color preference has changed"
Available Actions
| Action | When Used |
|---|---|
| ADD | Genuinely new information |
| UPDATE | Refines existing fact |
| SUPERSEDE | Replaces contradictory fact |
| NONE | Already captured |
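Without an LLM, a heuristic can still pick among these actions. A hedged sketch of one plausible decision rule — it mirrors the table above, not the SDK's actual `getDefaultDecision()` internals:

```typescript
type RevisionAction = "ADD" | "UPDATE" | "SUPERSEDE" | "NONE";

interface Slot {
  predicate: string;
  object: string;
}

// Pick an action for a new fact given its closest existing match (if any).
// UPDATE (refining an existing fact) needs semantic comparison and is
// omitted from this sketch.
function decideAction(incoming: Slot, existing?: Slot): RevisionAction {
  if (!existing || existing.predicate !== incoming.predicate) return "ADD"; // genuinely new
  if (existing.object === incoming.object) return "NONE"; // already captured
  return "SUPERSEDE"; // same slot, contradictory value
}
```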
TypeScript SDK v0.23.0 · Dec 19, 2025
Unified Context Retrieval with recall()
🔍 The retrieval counterpart to remember()
🎯 Get LLM-ready context from all memory layers
🔗 Graph expansion for discovering related context
Usage
const result = await cortex.memory.recall({
  memorySpaceId: "user-123-space",
  query: "user preferences",
});

// Inject directly into the LLM prompt
const response = await llm.chat({
  messages: [
    { role: "system", content: `Context:\n${result.context}` },
    { role: "user", content: userMessage },
  ],
});
Features
- ✅ Batteries Included - All sources enabled by default
- ✅ Graph Expansion - Discovers related context via relationships
- ✅ Unified Deduplication - Removes duplicates across sources
- ✅ Multi-Signal Ranking - Semantic similarity, confidence, importance, recency
- ✅ LLM-Ready Formatting - Structured markdown context
Python SDK v0.23.0 · Dec 19, 2025
recall() Orchestration API
🔮 Unified context retrieval counterpart to remember()
🎯 Multi-signal ranking with configurable weights
📊 Source breakdown in results
Usage
result = await cortex.memory.recall(
    RecallParams(
        memory_space_id="agent-1",
        query="user preferences",
    )
)

# Use directly in LLM prompts
print(result.context)
Ranking Algorithm
| Signal | Weight | Description |
|---|---|---|
| Semantic | 35% | Vector similarity score |
| Confidence | 20% | Fact confidence (0-100) |
| Importance | 15% | Memory importance (0-100) |
| Recency | 15% | Time decay (30-day half-life) |
| Graph Connectivity | 15% | Connected entity count |
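The weights combine into a single score per result. A sketch of that weighted sum, including the 30-day half-life decay for recency — illustrative only; the SDK's internal normalization may differ:

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Exponential time decay with a 30-day half-life: 1.0 now, 0.5 after 30 days.
function recencyScore(timestampMs: number, nowMs: number): number {
  const ageDays = (nowMs - timestampMs) / DAY_MS;
  return Math.pow(0.5, ageDays / 30);
}

// Weighted sum matching the table; all inputs normalized to 0..1.
function rankScore(s: {
  semantic: number;      // vector similarity
  confidence: number;    // fact confidence / 100
  importance: number;    // memory importance / 100
  recency: number;       // recencyScore(...)
  connectivity: number;  // normalized connected-entity count
}): number {
  return (
    0.35 * s.semantic +
    0.2 * s.confidence +
    0.15 * s.importance +
    0.15 * s.recency +
    0.15 * s.connectivity
  );
}
```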
TypeScript SDK v0.22.0 · Dec 19, 2025
Cross-Session Fact Deduplication
🎯 Facts no longer duplicated across conversations
🔄 Three deduplication strategies: semantic, structural, exact
⬆️ Confidence-based updates
The Problem Solved
Session 1: "My name is Alice" → Fact created ✅
Session 2: "I'm Alice" → Duplicate detected, skipped ✅
Session 3: "Call me Alice" → Duplicate detected, skipped ✅
Result: 1 fact instead of 3!
Deduplication Strategies
| Strategy | Speed | Accuracy |
|---|---|---|
| `semantic` | Slower | Highest |
| `structural` | Fast | Medium |
| `exact` | Fastest | Low |
Configuration
await cortex.memory.remember({
  ...params,
  factDeduplication: "semantic", // Default
});
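The `exact` and `structural` strategies can be sketched without embeddings: exact compares the fact sentence verbatim, while structural compares the (subject, predicate, object) triple so rephrasings like "My name is Alice" and "I'm Alice" collapse to the same key. A hedged illustration, not the SDK's implementation:

```typescript
interface FactTriple {
  fact: string; // raw sentence
  subject: string;
  predicate: string;
  object: string;
}

function isDuplicate(
  a: FactTriple,
  b: FactTriple,
  strategy: "exact" | "structural"
): boolean {
  if (strategy === "exact") {
    // Verbatim sentence comparison (case/whitespace-insensitive).
    return a.fact.trim().toLowerCase() === b.fact.trim().toLowerCase();
  }
  // structural: same normalized triple, regardless of phrasing.
  const key = (f: FactTriple) =>
    [f.subject, f.predicate, f.object].map((s) => s.toLowerCase()).join("|");
  return key(a) === key(b);
}
```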
Python SDK v0.22.0 · Dec 19, 2025
Cross-Session Fact Deduplication
🎯 Automatic duplicate fact prevention
🔄 Configurable deduplication strategies
📈 Confidence-based updates for higher-quality facts
The Solution
# Deduplication is ON by default
await cortex.memory.remember(
    RememberParams(
        memory_space_id="agent-1",
        conversation_id="conv-123",
        user_message="I'm Alex",
        agent_response="Nice to meet you!",
        user_id="user-123",
        user_name="Alex",
        agent_id="assistant",
        extract_facts=my_fact_extractor,
    )
)
Strategies
| Strategy | Speed | Accuracy |
|---|---|---|
| `none` | ⚡ Fastest | None |
| `exact` | ⚡ Fast | Low |
| `structural` | ⚡ Fast | Medium |
| `semantic` | 🐢 Slower | High |
November 2025
Vercel AI Provider v0.2.0 · Nov 24, 2025
Enhanced Streaming with rememberStream()
🚀 Direct integration with rememberStream() API
📊 Comprehensive streaming metrics
🔄 Progressive fact extraction and graph sync
New Features
- Progressive Storage - Store partial responses during streaming
- Streaming Hooks - `onChunk`, `onProgress`, `onError`, `onComplete`
- Comprehensive Metrics - First-chunk latency, throughput, estimated costs
- Progressive Fact Extraction - Extract facts incrementally during streaming
- Progressive Graph Sync - Sync to graph databases during streaming
- Error Recovery - Resume tokens and partial failure strategies
Configuration
const cortexMemory = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: "my-chat",
  userId: "user-123",
  streamingOptions: {
    storePartialResponse: true,
    progressiveFactExtraction: true,
  },
  streamingHooks: {
    onProgress: (event) => console.log(event),
  },
});
Create v0.2.0 · Nov 24, 2025
Smart Version Detection
🔍 Auto-fetches latest SDK version from npm
🔄 Dynamic Convex version sync with peer dependencies
✨ Graceful fallback to safe defaults
Added
- CLI now automatically fetches latest SDK version from npm registry
- Dynamically detects correct Convex version from SDK's peerDependencies
- Template always uses `"latest"` for the SDK
Vercel AI Provider v0.1.0 · Nov 5, 2025
Initial Release
🎉 First release of Cortex Memory Provider for Vercel AI SDK
🔌 Works with OpenAI, Anthropic, Google, Groq
🌐 Edge runtime compatible
Core Features
- `createCortexMemory()` - Factory function for memory-augmented models
- Automatic memory search before each LLM call
- Automatic memory storage after each response
- Works with streaming and non-streaming responses
Memory Management
- `cortexMemory.search()` - Manual memory search
- `cortexMemory.remember()` - Manual memory storage
- `cortexMemory.getMemories()` - Retrieve all memories
- `cortexMemory.clearMemories()` - Delete memories
Create v0.1.0 · Nov 2, 2025
Deprecated: The create-cortex-memories package has been superseded by @cortexmemory/cli. Use npx @cortexmemory/cli init instead.
Initial Release
🎉 Interactive CLI wizard for Cortex project setup
🐳 Docker integration for Neo4j/Memgraph
📦 Complete project scaffolding
Features
- `npm create cortex-memories` interactive wizard (superseded - use `npx @cortexmemory/cli init`)
- Three Convex setup modes (local/new cloud/existing)
- Optional graph database integration (Neo4j/Memgraph)
- Docker detection with platform-specific instructions
- Automatic dependency installation
- Backend function deployment
CLI v0.1.0 · Nov 29, 2025
Initial Release
🎉 First release of @cortexmemory/cli
📋 Complete command suite for memory management
🔧 Multi-deployment support
Core Commands
- Memory Operations - `memory list`, `search`, `delete`, `export`, `stats`
- User Management - `users list`, `get`, `delete`, `export`, GDPR cascade deletion
- Memory Spaces - `spaces list`, `create`, `delete`, `archive`, participants management
- Facts Operations - `facts list`, `search`, `get`, `delete`, `export`
- Conversations - `conversations list`, `get`, `delete`, `export`
- Convex Management - `convex status`, `deploy`, `dev`, `logs`, `dashboard`
- Database Operations - `db stats`, `clear`, `backup`, `restore`, `export`
Features
- Configuration management with multiple deployment support
- Table, JSON, and CSV output formats
- Interactive confirmations for dangerous operations
- Dry-run mode for previewing changes
TypeScript SDK
January 2026
v0.31.1 · Jan 20, 2026 - Dependency Updates
📦 Updated all dependencies to latest versions
v0.31.0 · Jan 14, 2026 - Configurable Recall Limits
⚙️ RecallLimits interface for granular control
🔧 Environment variable + per-call configuration
🛡️ Prevents Convex 16MB read limit errors
v0.30.1 · Jan 14, 2026 - Enriched Entity Extraction
🏷️ LLM extraction returns typed entities
🔗 EXTRACTED_WITH edge for bidirectional traversal
📊 Entity relations sync to graph as typed edges
v0.30.0 · Jan 13, 2026 - Semantic Fact Search
🔍 Native vector search on facts table
🔋 Batteries-included embeddings
🧪 Extreme multi-turn stress tests
v0.29.1 · Jan 10, 2026 - Belief Revision Heuristics
🧠 Improved decision accuracy without LLM
v0.29.0 · Jan 9, 2026 - Automatic Graph Sync
🔗 Graph sync automatic with CORTEX_GRAPH_SYNC=true
⚠️ Breaking: syncToGraph option removed
v0.28.0 · Jan 5, 2026 - Basic Template & Query Fixes
🎯 New headless template with dual CLI/server modes
⚡ Fixed "too many bytes" error in stats queries
🧪 Complete test suite with E2E coverage
v0.27.2 · Jan 1, 2026 - V6 Route Feature Parity
🔧 /api/chat-v6 route now has full feature parity with v5
v0.27.1 · Jan 1, 2026 - AI SDK v6 Agent Support
🤖 Full integration with ToolLoopAgent
🎯 Type-safe callOptionsSchema
🔌 createMemoryPrepareCall for memory injection
December 2025
v0.27.0 · Dec 27 - Multi-Tenancy & Auth Context
🏢 Complete multi-tenancy for SaaS platforms
🔐 Automatic tenantId propagation
📱 New Sessions API
v0.26.1 · Dec 26 - Vercel AI SDK v6.0 Support
✅ Extended peer dependencies to accept ai v6.x
v0.26.0 · Dec 23 - Enhanced Belief Revision
🧠 Subject+FactType matching (Stage 2.5)
🔋 "Batteries included" mode
🔧 Fixed SUPERSEDE and UPDATE actions
v0.24.0 · Dec 20 - Belief Revision System
🧠 Intelligent fact management
📜 Complete audit trail
v0.23.0 · Dec 19 - recall() API
🔍 Unified context retrieval
🎯 LLM-ready context generation
v0.22.0 · Dec 19 - Cross-Session Fact Deduplication
🎯 Automatic duplicate prevention
Python SDK
January 2026
v0.31.1 · Jan 20, 2026 - Dependency Updates
📦 Updated all dependencies to latest versions
🔄 Version sync with TypeScript SDK
December 2025
v0.27.0 · Dec 28 - Multi-Tenancy & Auth Context
🔐 Complete multi-tenancy with tenantId propagation
📱 New Sessions API
👤 User profile schemas with validation
v0.26.0 · Dec 23 - OrchestrationObserver API
📊 Real-time pipeline monitoring
🐛 Fixed user_id propagation
🧠 Subject+FactType matching
v0.24.0 · Dec 19 - Belief Revision System
🧠 Intelligent fact management
🔄 Pipeline: Slot → Semantic → LLM
📜 Fact history and audit trail
v0.23.0 · Dec 19 - recall() API
🔮 Unified context retrieval
🎯 Multi-signal ranking
v0.22.0 · Dec 19 - Cross-Session Fact Deduplication
🎯 Automatic duplicate prevention
🔄 Configurable strategies
CLI
January 2026
v0.32.0 · Jan 20, 2026 - Auto-Discovery of Apps
🔍 cortex update auto-discovers unregistered apps in deployment directories
📝 Prompts to register discovered apps for template sync
🚀 Apps created before the registration system are now detected
v0.31.1 · Jan 20, 2026 - Dependency Updates
📦 Updated all dependencies to latest versions
v0.29.0 · Jan 10 - Enhanced Update Display & Graph Template
📊 Version transitions show current → latest
🎨 Color-coded status display
🔗 Async graph initialization in basic template
v0.28.1 · Jan 6 - Shell Tab Completion
⌨️ Auto-installs for zsh, bash, fish
✨ Dynamic completion for deployments
v0.28.0 · Jan 5 - Basic Template Tracking
📁 Basic projects tracked in config
🗃️ Sessions and factHistory support
v0.27.3 · Jan 1 - Neo4j Encrypted URIs
🔐 All neo4j-driver URI schemes supported
December 2025
v0.27.2 · Dec 28 - Multi-Deployment Updates
🔄 cortex update checks all deployments
📊 Color-coded version status
v0.27.1 · Dec 27 - App Lifecycle Management
🛑 Stop running template apps
🔍 Port-based process detection
v0.27.0 · Dec 26 - Quickstart Integration
🚀 Optional demo app installation
📱 Template apps management
v0.26.2 · Dec 25 - Non-Interactive Convex Setup
🤖 Streamlined setup paths
v0.26.1 · Dec 25 - Environment Fixes
🔧 Fixed .env.local overwrites
v0.26.0 · Dec 23 - Secure Passwords
🔐 Cryptographically secure passwords
November 2025
v0.1.0 · Nov 29 - Initial Release
🎉 Complete CLI for Cortex memory management
Vercel AI Provider
January 2026
v0.32.0 · Jan 20, 2026 - Native Reasoning Panel Support
🧠 New createLayerStreamObserver() helper for server-side layer streaming
⚛️ New useLayerTracking React hook for client-side state management
📦 New /react subpath export: @cortexmemory/vercel-ai-provider/react
📉 Reduces memory visualization boilerplate from ~170 lines to ~6 lines
🚀 Quickstart template updated to use new helpers (cortex update --sync-template)
v0.29.1 · Jan 20, 2026 - Dependency Updates
📦 Updated all dependencies to latest versions
v0.29.0 · Jan 10 - Automatic Graph Sync
🔗 Removed deprecated syncToGraph option
📊 Graph sync automatic with enableGraphMemory
🛠️ Fixed quickstart bundling issues
November 2025
v0.2.0 · Nov 24 - Enhanced Streaming
🚀 Direct rememberStream() integration
📊 Streaming metrics
🔄 Progressive fact extraction
v0.1.0 · Nov 5 - Initial Release
🎉 Cortex Memory Provider for Vercel AI SDK
🔌 Works with OpenAI, Anthropic, Google, Groq
Versioning Policy
Cortex packages follow semantic versioning:
- Major (X.0.0): Breaking API changes
- Minor (0.X.0): New features, backwards compatible
- Patch (0.0.X): Bug fixes, backwards compatible
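In code, the policy amounts to checking which version component changed between releases. A small sketch:

```typescript
type Bump = "major" | "minor" | "patch" | "none";

// Classify the semver bump between two releases.
function classifyBump(from: string, to: string): Bump {
  const [fa, fb, fc] = from.split(".").map(Number);
  const [ta, tb, tc] = to.split(".").map(Number);
  if (ta !== fa) return "major"; // breaking API changes
  if (tb !== fb) return "minor"; // new features, backwards compatible
  if (tc !== fc) return "patch"; // bug fixes, backwards compatible
  return "none";
}
```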
Deprecation Policy
- Features are marked deprecated for at least one minor version before removal
- Deprecated features include migration guides
- Breaking changes are documented with upgrade paths
Roadmap
Planned for v1.0.0
- Complete API stabilization
- Integration examples for all major frameworks
- Real-time graph sync worker
- MCP Server implementation
- Cloud Mode with Graph-Premium