Memory Operations API
Last Updated: 2026-01-13
Complete API reference for memory operations across memory spaces.
Enhanced in v0.15.0: memory.rememberStream() with progressive storage, streaming hooks, and comprehensive metrics
New in v0.15.0: Enriched fact extraction with enrichedContent and factCategory for bullet-proof semantic search
Important: Timestamp Convention
All timestamps in the Cortex SDK are Unix timestamps in milliseconds (not JavaScript Date objects):
// SDK returns timestamps as numbers (Unix ms)
const memory = await cortex.vector.get(memorySpaceId, memoryId);
console.log(memory.createdAt); // 1735689600000 (number, not Date)
console.log(memory.updatedAt); // 1735689600000 (number, not Date)
console.log(memory.sourceTimestamp); // 1735689600000 (number, not Date)
// Convert to Date for display
const createdDate = new Date(memory.createdAt);
// When providing timestamps in filters/queries
await cortex.vector.getAtTimestamp(
memorySpaceId,
memoryId,
Date.now() - 24 * 60 * 60 * 1000, // 24 hours ago (number)
);
// Date objects are also accepted and auto-converted
await cortex.vector.getAtTimestamp(
memorySpaceId,
memoryId,
new Date("2025-08-01"), // Converted to Unix ms internally
);
Why Unix milliseconds? Convex (the backend) stores timestamps as number. Using consistent types avoids serialization issues and timezone bugs.
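The normalization described above can be captured in a small helper. This is an illustrative sketch — `toUnixMs` is a hypothetical name, not an SDK export:

```typescript
// Normalize a Date or a Unix-ms number to Unix milliseconds,
// mirroring the SDK's auto-conversion behavior (illustrative only).
function toUnixMs(input: number | Date): number {
  if (input instanceof Date) return input.getTime();
  if (!Number.isFinite(input)) throw new Error(`Invalid timestamp: ${input}`);
  return input;
}

toUnixMs(new Date("2025-08-01T00:00:00Z")); // 1754006400000
toUnixMs(1735689600000); // 1735689600000 (passed through unchanged)
```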
Overview
The Memory Operations API is organized into namespaces corresponding to Cortex's complete architecture:
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// Layer 1: Three ACID Stores (Immutable Sources of Truth)
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
cortex.conversations.* // Layer 1a: Conversations (memorySpace-scoped)
cortex.immutable.* // Layer 1b: Shared immutable (NO memorySpace - TRULY shared)
cortex.mutable.* // Layer 1c: Shared mutable (NO memorySpace - TRULY shared)
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// Layer 2: Vector Index (memorySpace-scoped, Searchable)
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
cortex.vector.* // Vector memory operations (memorySpace-scoped)
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// Layer 3: Facts Store (memorySpace-scoped, Versioned) - NEW
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
cortex.facts.* // LLM-extracted facts (memorySpace-scoped)
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// Layer 4: Convenience API (Wrapper over L1a + L2 + L3)
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
cortex.memory.* // Primary interface (recommended)
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// Additional APIs
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
cortex.memorySpaces.* // Memory space management (Hive/Collaboration)
cortex.users.* // User profiles (shared across all spaces)
cortex.contexts.* // Context chains (cross-space support)
cortex.a2a.* // Inter-space messaging (Collaboration Mode)
cortex.governance.* // Retention policies
cortex.graph.* // Graph database integration
Complete Architecture:
- Layer 1a (conversations): User↔Agent • Agent↔Agent • Hive/Collab
- Layer 1b (immutable): KB Articles • Policies • Audit Logs
- Layer 1c (mutable): Inventory • Config • Counters
- Layer 2 (vector): Semantic search • References L1 via Ref fields • Versioned with retention rules
- Layer 3 (facts): 60-90% token savings • cortex.facts.* • Enables infinite context
- Layer 4 (memory): remember() → L1a + L2 + L3 + graph • recall() → L2 + L3 + graph • search() → L2 + enrichment
- Graph: Entities from memories, facts, contexts • Multi-hop traversal • Complex relationships
Which layer/API to use:
cortex.memory.* (Layer 4) - Recommended for most use cases. Provides remember() / recall() for full orchestration, and search() / get() for quick retrieval.
| Namespace | Layer | Description |
|---|---|---|
| cortex.memory.* | Layer 4 | START HERE - Full orchestration (recommended) |
| cortex.conversations.* | Layer 1a | Direct ACID conversation access |
| cortex.vector.* | Layer 2 | Direct vector index control |
| cortex.facts.* | Layer 3 | Direct fact operations |
| cortex.immutable.* | Layer 1b | Shared knowledge (NO memorySpace - TRULY shared) |
| cortex.mutable.* | Layer 1c | Live mutable data (NO memorySpace - TRULY shared) |
| cortex.users.* | — | User profiles (shared across ALL spaces + GDPR cascade) |
| cortex.governance.* | — | Retention policies for all layers |
GDPR Compliance:
All stores support an optional userId field to enable cascade deletion:
// Stores with userId can be deleted via cortex.users.delete(userId, { cascade: true })
await cortex.conversations.addMessage(convId, { userId: 'user-123', ... });
await cortex.immutable.store({ type: 'feedback', id: 'fb-1', userId: 'user-123', ... });
await cortex.mutable.set('sessions', 'sess-1', data, 'user-123');
await cortex.vector.store('user-123-personal', { userId: 'user-123', ... });
// One call deletes from ALL stores
await cortex.users.delete('user-123', { cascade: true });
Multi-Tenancy Support:
All stores support an optional tenantId field for SaaS multi-tenancy isolation:
// When initializing Cortex with auth context, tenantId is auto-injected
const cortex = new Cortex({
convexUrl: process.env.CONVEX_URL,
auth: {
userId: 'user-123',
tenantId: 'tenant-acme', // All operations scoped to this tenant
authMethod: 'clerk',
authenticatedAt: Date.now(),
}
});
// TenantId automatically propagates to all operations:
await cortex.memory.remember({...}); // tenantId: 'tenant-acme'
await cortex.conversations.create({...}); // tenantId: 'tenant-acme'
await cortex.facts.store({...}); // tenantId: 'tenant-acme'
await cortex.immutable.store({...}); // tenantId: 'tenant-acme'
await cortex.mutable.set(...); // tenantId: 'tenant-acme'
// Queries are automatically filtered by tenant
const memories = await cortex.memory.search('user-space', 'query');
// Only returns data from 'tenant-acme'
See Auth Integration for complete multi-tenancy documentation.
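Conceptually, tenant isolation is a predicate applied to every query result. A minimal sketch of the idea — the record shape and `scopeToTenant` function are illustrative, not the SDK's actual internals:

```typescript
interface TenantScoped {
  id: string;
  tenantId?: string;
}

// Keep only records that belong to the authenticated tenant.
// Records without a tenantId are treated as out of scope here (an assumption).
function scopeToTenant<T extends TenantScoped>(records: T[], tenantId: string): T[] {
  return records.filter((r) => r.tenantId === tenantId);
}

const rows = [
  { id: "m1", tenantId: "tenant-acme" },
  { id: "m2", tenantId: "tenant-other" },
];
scopeToTenant(rows, "tenant-acme"); // only m1 survives
```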
Three-Namespace Architecture
Layer 1: cortex.conversations.* (ACID)
// Managing immutable conversation threads
await cortex.conversations.create({ type: 'user-agent', participants: {...} });
await cortex.conversations.addMessage(conversationId, message);
await cortex.conversations.get(conversationId);
await cortex.conversations.getHistory(conversationId, options);
// Returns raw messages, no Vector index involved
Layer 2: cortex.vector.* (Vector Index)
// Managing searchable knowledge index
await cortex.vector.store(memorySpaceId, vectorInput); // Must provide conversationRef manually
await cortex.vector.get(memorySpaceId, memoryId);
await cortex.vector.search(memorySpaceId, query, options);
await cortex.vector.update(memorySpaceId, memoryId, updates);
// Direct Vector operations, you manage conversationRef
Layer 4: cortex.memory.* (Convenience API)
// High-level operations that manage both layers
await cortex.memory.remember(params); // Stores in ACID + creates Vector index
await cortex.memory.get(memorySpaceId, memoryId, { includeConversation: true });
await cortex.memory.search(memorySpaceId, query, { enrichConversation: true });
// Handles both layers automatically
Storage Flow Comparison
- Manual flow: Step 1 returns msg.id; in Step 2 you supply conversationRef yourself (using msg.id from Step 1)
- Convenience flow: remember() handles L1a + L2 + L3 + graph linking automatically
Manual Flow (Layer 1 + Layer 2)
For conversation-based memories:
// Step 1: Store raw message in ACID (Layer 1)
const msg = await cortex.conversations.addMessage("conv-456", {
role: "user",
text: "The password is Blue",
userId: "user-123",
timestamp: new Date(),
});
// Returns: { id: 'msg-789', ... }
// Step 2: Index in Vector (Layer 2) - references Step 1
const memory = await cortex.vector.store("user-123-personal", {
content: "The password is Blue", // Raw or extracted
contentType: "raw",
embedding: await embed("The password is Blue"), // Optional
userId: "user-123",
source: {
type: "conversation",
userId: "user-123",
userName: "Alex Johnson",
timestamp: new Date(),
},
conversationRef: {
// Links to ACID
conversationId: "conv-456",
messageIds: [msg.id], // From Step 1
},
metadata: {
importance: 100,
tags: ["password", "security"],
},
});
Convenience Flow (Layer 4 - recommended)
Use cortex.memory.* to handle both layers automatically:
// Does both ACID + Vector in one call
const result = await cortex.memory.remember({
memorySpaceId: "agent-1",
conversationId: "conv-456",
userMessage: "The password is Blue",
agentResponse: "I'll remember that!",
userId: "user-123",
userName: "Alex",
});
// Behind the scenes (Layer 4 does this):
// 1. cortex.conversations.addMessage() × 2 (ACID Layer 1)
// 2. cortex.vector.store() × 2 (Vector Layer 2)
// 3. Links them via conversationRef
Non-Conversation Memories
For system/tool memories (no ACID conversation):
// Option 1: Use Layer 2 directly
const memory = await cortex.vector.store("user-123-personal", {
content: "Agent initialized successfully",
contentType: "raw",
source: { type: "system", timestamp: new Date() },
// No conversationRef - not from a conversation
metadata: { importance: 30, tags: ["system", "startup"] },
});
// Option 2: Use Layer 4 (also works for non-conversations)
const memory = await cortex.memory.store("user-123-personal", {
content: "Agent initialized successfully",
source: { type: "system" },
metadata: { importance: 30 },
});
// Layer 4 detects source.type='system' and skips ACID storage
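The detection mentioned in the comment above can be pictured as a simple branch on `source.type`. This is an illustrative sketch of the decision, not the actual implementation:

```typescript
type SourceType = "conversation" | "a2a" | "system" | "tool";

// Decide whether a memory should also be written to the ACID
// conversation store (Layer 1a). Only conversation-like sources are;
// system and tool memories go to the Vector index only.
function needsAcidStorage(sourceType: SourceType): boolean {
  return sourceType === "conversation" || sourceType === "a2a";
}

needsAcidStorage("system"); // false → Vector-only storage
needsAcidStorage("conversation"); // true → ACID + Vector
```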
Layer 1 Reference Rules
| source.type | Typical Ref | Why |
|---|---|---|
| conversation | conversationRef | Links to private conversation (Layer 1a) |
| a2a | conversationRef | Links to A2A conversation (Layer 1a) |
| system | immutableRef or none | May link to immutable data (Layer 1b) or standalone |
| tool | immutableRef or none | May link to immutable audit log (Layer 1b) or standalone |
Reference Types:
- conversationRef - Links to Layer 1a (private conversations)
- immutableRef - Links to Layer 1b (shared knowledge/policies)
- mutableRef - Links to Layer 1c (live data snapshot)
- None - Standalone Vector memory (no Layer 1 source)
Notes:
- References are mutually exclusive (only one per memory)
- All references are optional
- conversationRef is required for source.type='conversation' (unless you opt out)
- immutableRef/mutableRef are used when indexing shared data
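The mutual-exclusivity rule can be checked mechanically. A sketch of such a guard — `validateRefs` is an illustrative helper, not an SDK function:

```typescript
interface LayerRefs {
  conversationRef?: object; // Layer 1a
  immutableRef?: object;    // Layer 1b
  mutableRef?: object;      // Layer 1c
}

// Enforce "at most one Layer 1 reference per memory".
// Zero references is valid (a standalone Vector memory).
function validateRefs(refs: LayerRefs): void {
  const present = [refs.conversationRef, refs.immutableRef, refs.mutableRef]
    .filter((r) => r !== undefined);
  if (present.length > 1) {
    throw new Error(
      "References are mutually exclusive: provide at most one of conversationRef, immutableRef, mutableRef",
    );
  }
}

validateRefs({ conversationRef: { conversationId: "conv-456" } }); // OK
validateRefs({}); // OK — standalone memory
```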
Complete API Reference by Namespace
Layer 1: cortex.conversations.* Operations
| Operation | Purpose | Returns |
|---|---|---|
| create(params) | Create new conversation | Conversation |
| get(conversationId) | Get conversation | Conversation |
| addMessage(conversationId, message) | Add message to ACID | Message |
| getHistory(conversationId, options) | Get message thread | Message[] |
| list(filters) | List conversations | Conversation[] |
| search(query, filters) | Search conversations | SearchResult[] |
| count(filters) | Count conversations | number |
| export(filters, options) | Export conversations | JSON/CSV |
| delete(conversationId) | Delete conversation | DeletionResult |
See: Conversation Operations API
Layer 2: cortex.vector.* Operations
| Operation | Purpose | Returns |
|---|---|---|
| store(memorySpaceId, input, options?) | Store vector memory | MemoryEntry |
| get(memorySpaceId, memoryId) | Get vector memory | MemoryEntry \| null |
| search(memorySpaceId, query, options?) | Search vector index | MemoryEntry[] |
| update(memorySpaceId, memoryId, updates) | Update memory (creates version) | MemoryEntry |
| delete(memorySpaceId, memoryId, options?) | Delete from vector | { deleted: boolean; memoryId: string } |
| updateMany(filter, updates) | Bulk update | { updated: number; memoryIds: string[] } |
| deleteMany(filter) | Bulk delete | { deleted: number; memoryIds: string[] } |
| count(filter) | Count memories | number |
| list(filter) | List memories | MemoryEntry[] |
| export(options) | Export vector memories | { format: string; data: string; count: number; exportedAt: number } |
| archive(memorySpaceId, memoryId) | Soft delete (single memory) | { archived: boolean; memoryId: string; restorable: boolean } |
| restoreFromArchive(memorySpaceId, memoryId) | Restore from archive | { restored: boolean; memoryId: string; memory: MemoryEntry } |
| getVersion(memorySpaceId, memoryId, version) | Get specific version | MemoryVersion \| null |
| getHistory(memorySpaceId, memoryId) | Get version history | MemoryVersion[] |
| getAtTimestamp(memorySpaceId, memoryId, timestamp) | Temporal query | MemoryVersion \| null |
All methods use the resilience layer (if configured) for automatic retries and circuit breaking.
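Resilience layers like this commonly implement retries as exponential backoff. Below is a generic sketch of the pattern under that assumption — `withRetry` is illustrative, not the SDK's actual internals:

```typescript
// Retry an async operation with exponential backoff.
// Delays grow as baseDelayMs * 2^attempt: 100ms, 200ms, 400ms, ...
async function withRetry<T>(
  op: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      // Back off before the next attempt.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

With `maxAttempts: 3`, a transient failure is retried twice before the last error is rethrown to the caller.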
Layer 4: cortex.memory.* Operations (Convenience API)
| Operation | Purpose | Returns | Does |
|---|---|---|---|
| remember(params) | Store conversation | RememberResult | ACID + Vector |
| get(memorySpaceId, memoryId, options) | Get memory + conversation | EnrichedMemory | Vector + optional ACID |
| search(memorySpaceId, query, options) | Search + enrich | EnrichedMemory[] | Vector + optional ACID |
| store(memorySpaceId, input) | Smart store | StoreMemoryResult | Detects layer automatically |
| update(memorySpaceId, memoryId, updates) | Update memory | UpdateMemoryResult | Vector (creates version) |
| delete(memorySpaceId, memoryId, options) | Delete memory | DeleteMemoryResult | Vector only (preserves ACID) |
| forget(memorySpaceId, memoryId, options) | Delete both layers | ForgetResult | Vector + optionally ACID |
| list(filter) | List memories | MemoryEntry[] | Filter-based listing |
| All vector operations | Same as Layer 2 | Same | Convenience wrappers |
Key Differences:
| Operation | Layer 2 (cortex.vector.*) | Layer 4 (cortex.memory.*) |
|---|---|---|
| remember() | N/A | Unique - stores in both layers |
| get() | Vector only | Can include ACID (includeConversation) |
| search() | Vector only | Can enrich with ACID (enrichConversation) |
| delete() | Vector only | Same (preserves ACID) |
| forget() | N/A | Unique - deletes from both layers |
| store() | Manual conversationRef | Smart - detects layer from source.type |
| update() | Direct | Delegates to Layer 2 |
| updateMany() | Direct (filter, updates) | Delegates to Layer 2 |
| deleteMany() | Direct (filter) | Delegates to Layer 2 |
| count() | Direct (filter) | Delegates to Layer 2 |
| list() | Direct (filter) | Delegates to Layer 2 |
| export() | Direct (options) | Delegates to Layer 2 |
| archive() | Direct (single memory) | Delegates to Layer 2 |
| restoreFromArchive() | Direct | Delegates to Layer 2 |
| Version ops | Direct | Delegates to Layer 2 |
Layer 4 Unique Operations:
- remember() - Dual-layer storage
- forget() - Dual-layer deletion
- get() with includeConversation - Cross-layer retrieval
- search() with enrichConversation - Cross-layer search
Layer 4 Delegations:
- Most operations are thin wrappers around cortex.vector.*
- Convenience for not having to remember namespaces
- Use cortex.vector.* directly if you prefer explicit control
Core Operations (Layer 4: cortex.memory.*)
Layer 4 operations are convenience wrappers that orchestrate across all layers. For direct control, use Layer 1 (cortex.conversations.*), Layer 2 (cortex.vector.*), and Layer 3 (cortex.facts.*) separately.
remember()
RECOMMENDED HELPER - Full orchestration across all memory layers.
Enhanced in v0.17.0: Full multi-layer orchestration with auto-registration of memory spaces, users, and agents. Use skipLayers for explicit opt-out.
Signature:
cortex.memory.remember(
params: RememberParams,
options?: RememberOptions
): Promise<RememberResult>
Orchestration Flow:
When calling remember(), the following steps run by default:

1. Validation — Cannot be skipped; memorySpaceId defaults to 'default' with a warning; userId OR agentId required
2. Memory space — Cannot be skipped; auto-register/upsert memorySpace
3. Users/agents (skip: 'users' / 'agents') — Auto-create user profile or register agent
4. Conversations (skip: 'conversations') — Add messages to ACID store (default: ON)
5. Vector (skip: 'vector') — Create searchable memory with embeddings (default: ON)
6. Facts (skip: 'facts') — Auto-extract if LLM configured (default: ON if LLM)
7. Graph (skip: 'graph') — Sync entities if adapter configured (default: ON if graph)

Use skipLayers: ['facts', 'graph'] to disable specific steps. Steps 1-2 cannot be skipped.
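The opt-out logic can be sketched as a pure function over `skipLayers`. This is illustrative only — the real orchestrator additionally checks whether an LLM or graph adapter is configured before running those steps:

```typescript
type SkippableLayer =
  | "users" | "agents" | "conversations" | "vector" | "facts" | "graph";

const ALL_LAYERS: SkippableLayer[] = [
  "users", "agents", "conversations", "vector", "facts", "graph",
];

// Layers that will actually run for a remember() call.
// Validation and memory-space registration always run and are not listed.
function effectiveLayers(skipLayers: SkippableLayer[] = []): SkippableLayer[] {
  return ALL_LAYERS.filter((layer) => !skipLayers.includes(layer));
}

effectiveLayers(["facts", "graph"]);
// → ["users", "agents", "conversations", "vector"]
```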
Parameters:
// Layers that can be explicitly skipped
type SkippableLayer =
| "users" // Don't auto-create user profile
| "agents" // Don't auto-register agent
| "conversations" // Don't store in ACID conversation layer
| "vector" // Don't store in vector memory layer
| "facts" // Don't auto-extract facts
| "graph"; // Don't sync to graph database
interface RememberParams {
// Memory Space (defaults to 'default' with warning if not provided)
memorySpaceId?: string;
// Conversation
conversationId: string; // ACID conversation (auto-created if needed)
userMessage: string;
agentResponse: string;
// Owner Attribution (at least one required)
userId?: string; // For user-owned memories
agentId?: string; // For agent-owned memories
userName?: string; // Required when userId is provided
// Hive Mode (optional)
participantId?: string; // Tracks WHO stored the memory (distinct from ownership)
// Explicit opt-out
skipLayers?: SkippableLayer[];
// Optional extraction
extractContent?: (
userMessage: string,
agentResponse: string,
) => Promise<string | null>;
// Optional embedding override (v0.30.0+: auto-generated when embedding config set)
generateEmbedding?: (content: string) => Promise<number[] | null>;
// Optional fact extraction (overrides LLM config)
extractFacts?: (
userMessage: string,
agentResponse: string,
) => Promise<Array<{
fact: string;
factType:
| "preference"
| "identity"
| "knowledge"
| "relationship"
| "event"
| "observation"
| "custom";
subject?: string;
predicate?: string;
object?: string;
confidence: number;
tags?: string[];
}> | null>;
// Cloud Mode options
autoEmbed?: boolean; // Cloud Mode: auto-generate embeddings
autoSummarize?: boolean; // Cloud Mode: auto-summarize content
// Metadata
importance?: number; // Auto-detect if not provided
tags?: string[]; // Auto-extract if not provided
}
interface RememberOptions {
// NEW in v0.24.0: Belief Revision control
beliefRevision?: boolean; // Enable/disable belief revision
// - undefined/true: Use belief revision if LLM is configured (batteries-included default)
// - false: Force deduplication-only mode (skip belief revision)
}
<Callout type="info">
**v0.29.0+**: Graph sync is now automatic when `CORTEX_GRAPH_SYNC=true` is set in your environment. The `syncToGraph` option has been removed from all APIs.
</Callout>
Automatic Fact Revision in remember() (v0.24.0+)
New in v0.24.0: When belief revision is configured, extracted facts automatically go through the revision pipeline.
When you call remember() with fact extraction enabled and belief revision configured, the SDK automatically:
- Extracts facts from the conversation using your extractFacts callback
- Runs each fact through the belief revision pipeline to check for conflicts
- Takes the appropriate action (ADD, UPDATE, SUPERSEDE, or skip) for each fact
- Logs all changes to the fact history for audit trails
How it works:
remember() calls your extractFacts() callback, checks each extracted fact for conflicts, and then applies one of four actions: ADD • SUPERSEDE • UPDATE • NONE.
Each extracted fact goes through the pipeline independently:
- ADD — New fact with no conflicts
- SUPERSEDE — Replaces older conflicting fact
- UPDATE — Merges with existing fact
- NONE — Duplicate skipped
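At the slot-matching stage, the choice between these actions can be pictured as comparing (subject, predicate) slots. The sketch below is a deliberately simplified illustration of the idea — the real pipeline also involves LLM resolution and the UPDATE/merge path:

```typescript
interface Fact {
  subject: string;
  predicate: string;
  object: string;
}

type RevisionAction = "ADD" | "SUPERSEDE" | "NONE";

// Simplified slot matching: a fact occupying the same (subject, predicate)
// slot with a different object supersedes the old one; an identical fact
// is a duplicate and is skipped.
function decideAction(existing: Fact[], incoming: Fact): RevisionAction {
  const slotMatch = existing.find(
    (f) => f.subject === incoming.subject && f.predicate === incoming.predicate,
  );
  if (!slotMatch) return "ADD";
  if (slotMatch.object === incoming.object) return "NONE"; // duplicate
  return "SUPERSEDE"; // conflicting value for the same slot
}

const knownFacts = [
  { subject: "user-123", predicate: "favorite color", object: "blue" },
];
decideAction(knownFacts, {
  subject: "user-123",
  predicate: "favorite color",
  object: "purple",
}); // → "SUPERSEDE"
```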
Example with belief revision:
// Configure Cortex with belief revision
const cortex = new Cortex({
url: process.env.CONVEX_URL!,
llm: openaiClient, // Required for LLM resolution
beliefRevision: {
slotMatching: { enabled: true },
llmResolution: { enabled: true },
},
});
// Now remember() automatically uses belief revision for facts
const result = await cortex.memory.remember({
memorySpaceId: "user-123-space",
userId: "user-123",
userName: "Alex",
conversationId: "conv-456",
userMessage: "Actually, my favorite color is now purple",
agentResponse: "I'll update that - you now prefer purple!",
extractFacts: async (user, agent) => [
{
fact: "User prefers purple",
factType: "preference",
subject: "user-123",
predicate: "favorite color",
object: "purple",
confidence: 95,
},
],
});
// result.facts includes the revision outcome
// Old "User likes blue" fact is automatically superseded
Disabling belief revision:
// Disable for a single remember() call (use deduplication-only mode)
await cortex.memory.remember(
{
memorySpaceId: "user-123-space",
// ...other params
},
{
beliefRevision: false, // Disable for this call, use deduplication only
}
);
// Python SDK equivalent:
# await cortex.memory.remember(
# RememberParams(...),
# RememberOptions(belief_revision=False)
# )
Fine-grained control over individual pipeline stages (slot matching, LLM resolution) is available through the cortex.facts.revise() API directly. The remember() integration uses a batteries-included approach - full pipeline when enabled, deduplication-only when disabled.
Return value changes with belief revision:
interface RememberResult {
conversation: { /* ... */ };
memories: MemoryEntry[];
facts: FactRecord[]; // The facts that were actually stored
// NEW in v0.24.0: Revision details (when belief revision is enabled)
factRevisions?: Array<{
action: "ADD" | "UPDATE" | "SUPERSEDE" | "NONE";
fact: FactRecord;
superseded?: FactRecord[]; // Facts that were superseded (for SUPERSEDE action)
reason?: string; // LLM's reasoning for the decision
}>;
}
// Python SDK equivalent:
# @dataclass
# class FactRevisionAction:
# action: Literal["ADD", "UPDATE", "SUPERSEDE", "NONE"]
# fact: FactRecord
# superseded: Optional[List[FactRecord]] = None
# reason: Optional[str] = None
Returns:
interface RememberResult {
conversation: {
messageIds: string[]; // IDs stored in ACID Layer 1
conversationId: string; // ACID conversation ID
};
memories: MemoryEntry[]; // Created in Vector Layer 2 (with conversationRef)
facts: FactRecord[]; // Extracted facts (Layer 3)
}
Examples:
// Full orchestration (default) - user-owned memory
const result = await cortex.memory.remember({
memorySpaceId: "user-123-space",
userId: "user-123",
userName: "Alex",
conversationId: "conv-456",
userMessage: "Call me Alex",
agentResponse: "I'll remember that, Alex!",
});
// → memorySpace registered (if needed)
// → user profile created (if needed)
// → conversation + vector stored
// → facts extracted (if LLM configured)
// → graph synced (if adapter configured)
// Agent-owned memory (no user involved)
await cortex.memory.remember({
memorySpaceId: "system-space",
agentId: "cleanup-agent",
conversationId: "conv-789",
userMessage: "System cleanup initiated",
agentResponse: "Cleanup complete",
skipLayers: ["users"], // No user to create
});
// Lightweight mode - skip facts and graph
await cortex.memory.remember({
memorySpaceId: "quick-space",
agentId: "quick-bot",
conversationId: "conv-101",
userMessage: "Quick question",
agentResponse: "Quick answer",
skipLayers: ["facts", "graph"], // Fast path
});
// With custom fact extraction
await cortex.memory.remember({
memorySpaceId: "user-123-space",
userId: "user-123",
userName: "Alex",
conversationId: "conv-456",
userMessage: "My favorite color is blue",
agentResponse: "I'll remember that blue is your favorite!",
extractFacts: async (user, agent) => [
{
fact: "User prefers blue color",
factType: "preference",
subject: "user-123",
predicate: "prefers_color",
object: "blue",
confidence: 95,
},
],
});
Validation Errors:
// Missing owner attribution
CortexError(
"OWNER_REQUIRED",
"Either userId or agentId must be provided for memory ownership",
);
// Missing userName when userId is provided
CortexError(
"MISSING_REQUIRED_FIELD",
"userName is required when userId is provided",
);
Why use remember():
- Full multi-layer orchestration in one call
- Auto-registers memory spaces, users, and agents
- Automatic conversationRef linking
- Auto-extracts facts if LLM configured
- Auto-syncs to graph if configured
- Explicit opt-out via skipLayers
- Ensures consistency across all layers
- This is the main way to store conversation memories
See Also:
- Helper Functions
- Conversation Operations - Managing ACID conversations
- Memory Space Operations - Managing memory spaces
- User Operations - User profile management
- Agent Management - Agent registration
recall()
NEW in v0.24.0 - Unified orchestrated retrieval across all memory layers.
Design Philosophy: recall() is the retrieval counterpart to remember(). It provides total and complete orchestrated context retrieval by default - batteries included.
v0.30.0: Facts are now searched using semantic vector matching. When embedding config is set on the SDK, embeddings are auto-generated from the query. No manual embedding code needed!
Signature:
cortex.memory.recall(
params: RecallParams
): Promise<RecallResult>
What it does:
- Searches vector memories (Layer 2) - Semantic search with optional embedding
- Searches facts semantically (Layer 3, v0.30.0+) - When embedding provided, uses vector similarity; falls back to text search otherwise
- Queries graph relationships (Layer 4) - Discover related context via entity connections
- Merges, deduplicates, and ranks results from all sources
- Returns unified context ready for LLM injection
Batteries Included Defaults:
| Feature | Default | Description |
|---|---|---|
| Vector Search | Enabled | Searches Layer 2 vector memories |
| Facts Semantic Search | Enabled (v0.30.0+) | Semantic search when embedding provided |
| Facts Text Search | Enabled | Fallback when no embedding provided |
| Graph Expansion | Enabled (if configured) | Discovers related context via graph |
| LLM Context Formatting | Enabled | Generates ready-to-inject context string |
| Conversation Enrichment | Enabled | Includes ACID conversation data |
| Deduplication | Enabled | Removes duplicates across sources |
| Ranking | Enabled | Multi-signal scoring algorithm |
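The merge/dedupe step can be sketched as keying items by id and keeping the highest-scoring copy before sorting. The shapes and the `mergeAndRank` helper below are illustrative, not the SDK's internals:

```typescript
interface RankedItem {
  id: string;
  score: number;
  source: "vector" | "facts" | "graph-expanded";
}

// Merge results from all sources, drop duplicates (keeping the
// best-scoring copy per id), and sort descending by score.
function mergeAndRank(...sources: RankedItem[][]): RankedItem[] {
  const best = new Map<string, RankedItem>();
  for (const item of sources.flat()) {
    const existing = best.get(item.id);
    if (!existing || item.score > existing.score) best.set(item.id, item);
  }
  return [...best.values()].sort((a, b) => b.score - a.score);
}

mergeAndRank(
  [{ id: "m1", score: 0.9, source: "vector" }],
  [
    { id: "m1", score: 0.7, source: "graph-expanded" }, // duplicate, lower score
    { id: "f1", score: 0.8, source: "facts" },
  ],
);
// → m1 (0.9, from vector) ranked above f1 (0.8, from facts)
```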
Parameters:
interface RecallParams {
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// REQUIRED - Just these two for basic usage
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
memorySpaceId: string; // Memory space to search
query: string; // Natural language query
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// OPTIONAL - All have sensible defaults
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// Search enhancement
embedding?: number[]; // Pre-computed embedding (recommended)
userId?: string; // Filter by user (common in H2A)
// Source selection - ALL ENABLED BY DEFAULT
// Only specify to DISABLE sources
sources?: {
vector?: boolean; // Default: true
facts?: boolean; // Default: true
graph?: boolean; // Default: true (if graph configured)
};
// Graph expansion - ENABLED BY DEFAULT
graphExpansion?: {
enabled?: boolean; // Default: true (if graph configured)
maxDepth?: number; // Default: 2
relationshipTypes?: string[]; // Default: all types
expandFromFacts?: boolean; // Default: true
expandFromMemories?: boolean; // Default: true
};
// Filtering
minImportance?: number; // Minimum importance (0-100)
minConfidence?: number; // Minimum fact confidence (0-100)
tags?: string[]; // Filter by tags
createdAfter?: Date; // Only include after this date
createdBefore?: Date; // Only include before this date
// Result options
limit?: number; // Default: 20
includeConversation?: boolean; // Default: true
formatForLLM?: boolean; // Default: true
}
Returns:
interface RecallResult {
// Unified results (merged, deduped, ranked)
items: RecallItem[];
// Source breakdown
sources: {
vector: { count: number; items: MemoryEntry[] };
facts: { count: number; items: FactRecord[] };
graph: { count: number; expandedEntities: string[] };
};
// LLM-ready context (if formatForLLM: true)
context?: string;
// Metadata
totalResults: number;
queryTimeMs: number;
graphExpansionApplied: boolean;
}
interface RecallItem {
type: "memory" | "fact";
id: string;
content: string;
score: number; // Combined ranking score (0-1)
source: "vector" | "facts" | "graph-expanded";
memory?: MemoryEntry;
fact?: FactRecord;
graphContext?: {
connectedEntities: string[];
relationshipPath?: string;
};
conversation?: Conversation;
sourceMessages?: Message[];
}
Example 1: Minimal Usage (Full Orchestration)
// Just two parameters - full orchestration by default
const result = await cortex.memory.recall({
memorySpaceId: "user-123-space",
query: "user preferences",
});
// Inject context directly into LLM prompt
const response = await llm.chat({
messages: [
{
role: "system",
content: `You are a helpful assistant.\n\n${result.context}`,
},
{ role: "user", content: userMessage },
],
});
Example 2: With Semantic Search (Batteries-Included)
// v0.30.0+: Embeddings auto-generated when embedding config is set at SDK init
const result = await cortex.memory.recall({
memorySpaceId: "user-123-space",
query: "user preferences",
// No manual embedding needed! Auto-generated from query.
userId: "user-123", // Scope to user
});
// result.context is LLM-ready
// result.items has full details if needed
// result.sources shows what came from where
v0.30.0+: When embedding is configured at SDK init (or CORTEX_EMBEDDING=true), embeddings are automatically generated from the query. Manual embedding is still supported for override.
Example 3: Multi-Agent Context Sharing (A2A)
// Agent retrieving shared context from Hive space
const sharedContext = await cortex.memory.recall({
memorySpaceId: "team-hive-space",
query: "project requirements and deadlines",
// Embedding auto-generated!
});
// Send context to collaborating agent
await cortex.a2a.send({
from: "planning-agent",
to: "execution-agent",
message: `Here's the context: ${sharedContext.context}`,
});
Example 4: Deep Graph Exploration
// When you need to discover relational connections
const result = await cortex.memory.recall({
memorySpaceId: "knowledge-base",
query: "who does Alice work with",
// Embedding auto-generated!
graphExpansion: {
maxDepth: 3, // Go deeper than default
relationshipTypes: ["WORKS_AT", "KNOWS", "COLLABORATES_WITH"],
},
limit: 50, // More results for comprehensive context
});
// See what the graph discovered
console.log("Discovered entities:", result.sources.graph.expandedEntities);
// ['Acme Corp', 'Bob', 'Engineering Team', 'Project Alpha']
Example 5: Lightweight Mode (Opt-Out)
// When you need speed over completeness
const result = await cortex.memory.recall({
memorySpaceId: "user-space",
query: "quick lookup",
sources: {
vector: true,
facts: false, // Skip facts
graph: false, // Skip graph
},
formatForLLM: false, // Just get raw items
});
Symmetric API Design:
remember() and recall() form a symmetric pair — the two primary orchestration APIs:
| Aspect | remember() | recall() |
|---|---|---|
| Purpose | Store with full orchestration | Retrieve with full orchestration |
| Default | All layers enabled | All sources enabled |
| Opt-out | skipLayers: ['facts', 'graph'] | sources: { facts: false } |
| Graph | Auto-syncs entities | Auto-expands via relationships |
| Output | RememberResult | RecallResult with LLM context |
Orchestration steps side by side:
- remember(): create searchable memory with embeddings → auto-extract facts if LLM configured → sync entities to Neo4j/Memgraph
- recall(): semantic similarity search → retrieve relevant extracted facts → multi-hop relationship expansion → combine and rank results for context
Ranking Algorithm:
Results are ranked using a multi-signal scoring algorithm:
score =
semanticScore * 0.35 + // Vector similarity
confidenceScore * 0.2 + // Fact confidence (0-100 → 0-1)
importanceScore * 0.15 + // Importance (0-100 → 0-1)
recencyScore * 0.15 + // Time decay
graphConnectivityScore * 0.15; // Graph centrality
// Boosts
if (connectedEntities.length > 3) score *= 1.2; // 20% boost
if (memory.messageRole === "user") score *= 1.1; // 10% boost
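Expressed as a runnable function, the formula above looks like the following. This is a direct transcription of the weights and boosts shown; the signal shape is a simplified assumption:

```typescript
interface RankingSignals {
  semanticScore: number;          // 0-1 vector similarity
  confidence: number;             // 0-100 fact confidence
  importance: number;             // 0-100 importance
  recencyScore: number;           // 0-1 time decay
  graphConnectivityScore: number; // 0-1 graph centrality
  connectedEntities: string[];
  messageRole?: "user" | "agent";
}

function rankingScore(s: RankingSignals): number {
  let score =
    s.semanticScore * 0.35 +
    (s.confidence / 100) * 0.2 +   // 0-100 → 0-1
    (s.importance / 100) * 0.15 +  // 0-100 → 0-1
    s.recencyScore * 0.15 +
    s.graphConnectivityScore * 0.15;
  if (s.connectedEntities.length > 3) score *= 1.2; // 20% boost
  if (s.messageRole === "user") score *= 1.1;       // 10% boost
  return score;
}
```

Note that the weights sum to 1.0, so an unboosted score stays in the 0-1 range; the boosts can push it above 1.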
LLM Context Format:
When formatForLLM: true (default), the context string is structured as:
## Relevant Context
### Known Facts
- User prefers dark mode (confidence: 95%)
- User works at Acme Corp (confidence: 88%)
### Conversation History
[user]: I prefer dark mode
[agent]: I'll remember that!
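A minimal formatter producing that structure might look like this. It is an illustrative sketch — the SDK's actual formatter may differ in detail:

```typescript
interface FactLine {
  fact: string;
  confidence: number; // 0-100
}

interface MessageLine {
  role: string;
  text: string;
}

// Build the LLM-ready context string from facts and messages,
// matching the layout shown above.
function formatContext(facts: FactLine[], messages: MessageLine[]): string {
  const lines = ["## Relevant Context", "", "### Known Facts"];
  for (const f of facts) lines.push(`- ${f.fact} (confidence: ${f.confidence}%)`);
  lines.push("", "### Conversation History");
  for (const m of messages) lines.push(`[${m.role}]: ${m.text}`);
  return lines.join("\n");
}

formatContext(
  [{ fact: "User prefers dark mode", confidence: 95 }],
  [{ role: "user", text: "I prefer dark mode" }],
);
```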
When to Use recall() vs search():
| Use Case | Use recall() | Use search() |
|---|---|---|
| AI Chatbot context | Yes | No |
| Multi-agent coordination | Yes | No |
| LLM prompt injection | Yes | No |
| Simple vector lookup | No | Yes |
| Direct Layer 2 access | No | Yes |
| Custom result processing | No | Yes |
Errors:
- MemoryValidationError('MISSING_REQUIRED_FIELD') - Missing memorySpaceId or query
- MemoryValidationError('INVALID_EMBEDDING') - Invalid embedding array
- MemoryValidationError('INVALID_DATE_RANGE') - createdAfter > createdBefore
- MemoryValidationError('INVALID_GRAPH_DEPTH') - maxDepth not in 1-5 range
See Also:
- remember() - The storage counterpart
- Semantic Search Guide
- Graph Operations
- Facts Operations
rememberStream()
ENHANCED in v0.15.0 - Advanced streaming orchestration with progressive storage, real-time fact extraction, comprehensive metrics, and error recovery.
Signature:
cortex.memory.rememberStream(
params: RememberStreamParams,
options?: StreamingOptions
): Promise<EnhancedRememberStreamResult>
What it does:
- Processes stream progressively - Not just buffering, but real processing during streaming
- Progressive storage - Optionally stores partial memories as content arrives
- Real-time fact extraction - Extract facts incrementally during streaming
- Streaming hooks - Monitor progress with
onChunk,onProgress,onError,onCompletecallbacks - Comprehensive metrics - Track latency, throughput, token usage, and costs
- Error recovery - Resume interrupted streams with resume tokens
- Adaptive processing - Auto-optimize based on stream characteristics
- Graph sync - Progressively sync to graph databases (Neo4j/Memgraph)
- Complete feature parity - All remember() features work in streaming mode
Parameters:
interface RememberStreamParams {
// Required
memorySpaceId: string;
conversationId: string;
userMessage: string;
responseStream: ReadableStream<string> | AsyncIterable<string>;
userId: string;
userName: string;
// Optional - Hive Mode
participantId?: string;
// Optional - Content processing
extractContent?: (
userMsg: string,
agentResp: string,
) => Promise<string | null>;
// Optional - Embeddings override (v0.30.0+: auto-generated when embedding config set)
generateEmbedding?: (content: string) => Promise<number[] | null>;
// Optional - Fact extraction
extractFacts?: (
userMsg: string,
agentResp: string,
) => Promise<FactData[] | null>;
// Optional - Cloud Mode
autoEmbed?: boolean;
autoSummarize?: boolean;
// Optional - Metadata
importance?: number;
tags?: string[];
}
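The responseStream parameter accepts either stream shape. Conceptually, consuming either into a full string looks like the sketch below (illustrative only; rememberStream() processes chunks progressively rather than buffering, and the `ReadableLike` type stands in for the standard ReadableStream reader protocol):

```typescript
// Minimal structural type so this compiles without DOM lib definitions;
// it mirrors the standard ReadableStream reader protocol.
interface ReadableLike<T> {
  getReader(): { read(): Promise<{ done: boolean; value?: T }> };
}

// Consume either a ReadableStream-like or an AsyncIterable into one string.
async function consumeStream(
  stream: ReadableLike<string> | AsyncIterable<string>,
): Promise<string> {
  let out = "";
  if (Symbol.asyncIterator in stream) {
    for await (const chunk of stream as AsyncIterable<string>) out += chunk;
  } else {
    const reader = (stream as ReadableLike<string>).getReader();
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      out += value ?? "";
    }
  }
  return out;
}
```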
interface StreamingOptions {
// Graph sync (v0.29.0+): Automatic when CORTEX_GRAPH_SYNC=true
progressiveGraphSync?: boolean; // Sync during streaming
graphSyncInterval?: number; // How often to sync (ms)
// Progressive storage
storePartialResponse?: boolean; // Store in-progress memories
partialResponseInterval?: number; // Update interval (ms)
// Progressive fact extraction
progressiveFactExtraction?: boolean;
factExtractionThreshold?: number; // Extract every N chars
// Streaming hooks
hooks?: {
onChunk?: (event: ChunkEvent) => void | Promise<void>;
onProgress?: (event: ProgressEvent) => void | Promise<void>;
onError?: (error: StreamError) => void | Promise<void>;
onComplete?: (event: StreamCompleteEvent) => void | Promise<void>;
};
// Error handling
partialFailureHandling?:
| "store-partial"
| "rollback"
| "retry"
| "best-effort";
maxRetries?: number;
generateResumeToken?: boolean;
streamTimeout?: number;
// Advanced
maxResponseLength?: number;
enableAdaptiveProcessing?: boolean;
}
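For intuition, factExtractionThreshold gates progressive extraction roughly as follows. This is a hypothetical sketch of the batching logic, not SDK internals:

```typescript
// Hypothetical: count how many extraction passes a stream would trigger,
// firing once per `threshold` characters of streamed content.
function extractionTriggers(chunks: string[], threshold: number): number {
  let buffered = 0;
  let triggers = 0;
  for (const chunk of chunks) {
    buffered += chunk.length;
    while (buffered >= threshold) {
      triggers++; // one extraction pass
      buffered -= threshold;
    }
  }
  return triggers;
}
```

With `factExtractionThreshold: 500`, a 600-character stream triggers one mid-stream pass; a 1000-character stream triggers two.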
Returns:
interface EnhancedRememberStreamResult {
// Standard remember() result
conversation: {
messageIds: string[];
conversationId: string;
};
memories: MemoryEntry[];
facts: FactRecord[];
fullResponse: string;
// Stream metrics
streamMetrics: {
totalChunks: number;
streamDurationMs: number;
averageChunkSize: number;
firstChunkLatency: number;
totalBytesProcessed: number;
chunksPerSecond: number;
estimatedTokens: number;
estimatedCost?: number;
};
// Progressive processing results (if enabled)
progressiveProcessing?: {
factsExtractedDuringStream: ProgressiveFact[];
partialStorageHistory: PartialUpdate[];
graphSyncEvents?: GraphSyncEvent[];
};
// Performance insights
performance?: {
bottlenecks: string[];
recommendations: string[];
costEstimate?: number;
};
// Error/recovery info
errors?: StreamError[];
recovered?: boolean;
resumeToken?: string;
}
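The streamMetrics fields are simple derivations from chunk timings. The formulas below are assumptions for illustration (the SDK computes these internally):

```typescript
// Assumed derivation of the core streamMetrics fields from chunk timings.
interface ChunkLog { at: number; bytes: number } // at = Unix ms timestamp

function computeMetrics(startMs: number, chunks: ChunkLog[]) {
  const endMs = chunks[chunks.length - 1].at;
  const totalBytes = chunks.reduce((sum, c) => sum + c.bytes, 0);
  const durationMs = endMs - startMs;
  return {
    totalChunks: chunks.length,
    streamDurationMs: durationMs,
    averageChunkSize: totalBytes / chunks.length,
    firstChunkLatency: chunks[0].at - startMs,
    totalBytesProcessed: totalBytes,
    chunksPerSecond: chunks.length / (durationMs / 1000),
  };
}
```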
Example 1: Vercel AI SDK
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
const response = await streamText({
model: openai("gpt-5-nano"),
messages: [{ role: "user", content: "What is AI?" }],
});
const result = await cortex.memory.rememberStream({
memorySpaceId: "ai-tutor",
conversationId: "lesson-1",
userMessage: "What is AI?",
responseStream: response.textStream, // ReadableStream
userId: "student-123",
userName: "Alice",
});
console.log("Full response:", result.fullResponse);
console.log("Memories stored:", result.memories.length); // 2 (user + agent)
console.log("Stream metrics:", result.streamMetrics);
// NEW: Access streaming metrics
// {
// totalChunks: 5,
// streamDurationMs: 432,
// firstChunkLatency: 123,
// estimatedTokens: 250,
// estimatedCost: 0.015
// }
Example 2: With Progressive Features
const result = await cortex.memory.rememberStream(
{
memorySpaceId: "ai-tutor",
conversationId: "lesson-2",
userMessage: "Explain quantum computing in detail",
responseStream: llmStream,
userId: "student-123",
userName: "Alice",
extractFacts: extractFactsCallback,
},
{
// Progressive storage - save partial content during streaming
storePartialResponse: true,
partialResponseInterval: 3000, // Update every 3 seconds
// Progressive fact extraction
progressiveFactExtraction: true,
factExtractionThreshold: 500, // Extract every 500 chars
// Streaming hooks for real-time updates
hooks: {
onChunk: (event) => {
console.log(`Chunk ${event.chunkNumber}: ${event.chunk}`);
websocket.send({ type: "chunk", data: event.chunk });
},
onProgress: (event) => {
console.log(`Progress: ${event.bytesProcessed} bytes`);
updateProgressBar(event.bytesProcessed);
},
onComplete: (event) => {
console.log(
`Complete! ${event.totalChunks} chunks, ${event.durationMs}ms`,
);
},
},
// Error recovery
partialFailureHandling: "store-partial",
generateResumeToken: true,
// Graph sync
progressiveGraphSync: true,
graphSyncInterval: 5000,
},
);
// Access enhanced results
console.log("Stream metrics:", result.streamMetrics);
console.log(
"Facts extracted during stream:",
result.progressiveProcessing?.factsExtractedDuringStream,
);
console.log(
"Performance recommendations:",
result.performance?.recommendations,
);
Example 3: OpenAI SDK (AsyncIterable)
import OpenAI from "openai";
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model: "gpt-5-nano",
messages: [{ role: "user", content: "Hello!" }],
stream: true,
});
const result = await cortex.memory.rememberStream({
memorySpaceId: "chat-bot",
conversationId: "conv-789",
userMessage: "Hello!",
responseStream: stream, // AsyncIterable
userId: "user-456",
userName: "Bob",
});
Example 4: With Embeddings and Facts (Batteries-Included)
// Configure Cortex with embedding + LLM at init (v0.30.0+)
const cortex = new Cortex({
convexUrl: process.env.CONVEX_URL!,
embedding: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY },
llm: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY },
});
// Embeddings + facts are now automatic!
const result = await cortex.memory.rememberStream({
memorySpaceId: "smart-bot",
conversationId: "conv-999",
userMessage: "My favorite color is blue",
responseStream: stream,
userId: "user-789",
userName: "Charlie",
// No generateEmbedding needed - auto-generated!
// No extractFacts needed - LLM extracts automatically!
});
console.log("Response:", result.fullResponse);
console.log("Facts:", result.facts); // Auto-extracted facts with embeddings
v0.30.0+: When embedding is configured at SDK init, embeddings are automatically generated for all remember() and rememberStream() calls. The generateEmbedding callback is still supported for manual override.
Example 5: Error Recovery with Resume
try {
const result = await cortex.memory.rememberStream(params, {
partialFailureHandling: "store-partial",
generateResumeToken: true,
streamTimeout: 30000, // 30 second timeout
});
} catch (error) {
if (error instanceof ResumableStreamError) {
// Stream was interrupted but partial data was saved
console.log("Stream interrupted. Resume token:", error.resumeToken);
// Later, resume the stream
const resumed = await cortex.memory.rememberStream({
...params,
resumeFrom: await validateResumeToken(error.resumeToken), // validateResumeToken: your app's own validation helper
});
}
}
Example 6: Edge Runtime (Vercel Edge Functions)
// app/api/chat/route.ts
export const runtime = "edge";
export async function POST(req: Request) {
const { message } = await req.json();
const response = await streamText({
model: openai("gpt-5-nano"),
messages: [{ role: "user", content: message }],
});
// Store in background (works in edge runtime!)
cortex.memory
.rememberStream({
memorySpaceId: "edge-chat",
conversationId: "conv-" + Date.now(),
userMessage: message,
responseStream: response.textStream,
userId: req.headers.get("x-user-id") || "anonymous",
userName: "User",
})
.catch((error) => {
console.error("Memory failed:", error);
});
// Return stream to client
return response.toAIStreamResponse();
}
Key Features (v0.15.0+):
- Progressive Storage — Store partial memories during streaming (resumable)
- Streaming Hooks — Real-time callbacks for monitoring and UI updates
- Comprehensive Metrics — Track latency, throughput, tokens, costs
- Progressive Facts — Extract facts incrementally with deduplication
- Error Recovery — Resume interrupted streams with checkpoints
- Graph Sync — Progressively update Neo4j/Memgraph during streaming
- Adaptive Processing — Auto-optimize based on stream characteristics
- Complete Parity — All remember() features (embeddings, facts, graph sync)
- Type Safe — Full TypeScript support with comprehensive types
When to Use:
- Streaming LLM responses (OpenAI, Anthropic, Vercel AI SDK, etc.)
- Long-running agent responses (> 5 seconds)
- Real-time chat applications with live updates
- Edge runtime functions (Vercel, Cloudflare Workers)
- When you need resumability (long streams that might fail)
- When monitoring performance is critical
- When you want real-time fact extraction

When NOT to Use:
- Already have the complete response — use remember() instead (simpler and faster)
- Very short responses (< 50 chars) where the overhead isn't worth it
Error Handling:
- Error('Failed to consume response stream') - Stream reading failed
- Error('produced no content') - Stream was empty or whitespace only
- ResumableStreamError - Stream interrupted, includes resume token
- Error('Stream timeout') - Stream exceeded timeout limit
- Error('Stream exceeded max length') - Stream too long
- Standard remember() errors for final storage
Performance:
- First Chunk Latency: 6-10ms (excellent)
- Overhead vs Buffering: < 5% (minimal impact)
- Memory Usage: O(1) for unbounded streams (with rolling window)
- Throughput: Processes immediately, no accumulation delay
- Graph Sync Latency: < 50ms per update (both Neo4j and Memgraph)
See Also:
- Streaming Support Guide - Complete streaming documentation
- Conversation History - Streaming in context
- remember() - Non-streaming variant
search()
Layer 4 Operation - Search Vector index with optional ACID enrichment.
Signature:
cortex.memory.search(
memorySpaceId: string,
query: string,
options?: SearchOptions
): Promise<MemoryEntry[] | EnrichedMemory[]>
Parameters:
interface SearchOptions {
// Layer enrichment
enrichConversation?: boolean; // Fetch ACID conversations (default: false)
// Semantic search
embedding?: number[]; // Query vector (enables semantic search)
// Filtering (universal filters)
userId?: string;
tags?: string[];
tagMatch?: "any" | "all"; // Default: 'any'
importance?: number | RangeQuery; // Number or { $gte, $lte, $eq }
minImportance?: number; // Shorthand for { $gte: n }
// Date filtering
createdBefore?: Date;
createdAfter?: Date;
updatedBefore?: Date;
updatedAfter?: Date;
lastAccessedBefore?: Date;
lastAccessedAfter?: Date;
// Access filtering
accessCount?: number | RangeQuery;
version?: number | RangeQuery;
// Source filtering
"source.type"?: "conversation" | "system" | "tool" | "a2a";
// Metadata filtering
metadata?: Record<string, any>;
// Result options
limit?: number; // Default: 20
offset?: number; // Default: 0
minScore?: number; // Similarity threshold (0-1)
sortBy?: "score" | "createdAt" | "updatedAt" | "accessCount" | "importance";
sortOrder?: "asc" | "desc"; // Default: 'desc'
// Strategy
strategy?: "auto" | "semantic" | "keyword" | "recent";
boostImportance?: boolean; // Boost by importance score
boostRecent?: boolean; // Boost recent memories
boostPopular?: boolean; // Boost frequently accessed
// Enriched fact boosting (v0.15.0+)
queryCategory?: string; // Category to boost (e.g., "addressing_preference")
// Facts with matching factCategory get +30% score boost
}
interface RangeQuery {
$gte?: number;
$lte?: number;
$eq?: number;
$ne?: number;
$gt?: number;
$lt?: number;
}
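To make the filter semantics concrete, here is an illustrative matcher for RangeQuery values and for tagMatch modes. These are hypothetical helpers for intuition, not part of the SDK:

```typescript
// Mirrors the RangeQuery interface documented above.
interface RangeQuery {
  $gte?: number; $lte?: number; $eq?: number;
  $ne?: number; $gt?: number; $lt?: number;
}

function matchesRange(value: number, q: number | RangeQuery): boolean {
  if (typeof q === "number") return value === q; // bare number = exact match
  if (q.$eq !== undefined && value !== q.$eq) return false;
  if (q.$ne !== undefined && value === q.$ne) return false;
  if (q.$gte !== undefined && value < q.$gte) return false;
  if (q.$lte !== undefined && value > q.$lte) return false;
  if (q.$gt !== undefined && value <= q.$gt) return false;
  if (q.$lt !== undefined && value >= q.$lt) return false;
  return true;
}

function matchesTags(
  memoryTags: string[],
  filterTags: string[],
  mode: "any" | "all" = "any", // default mirrors tagMatch: 'any'
): boolean {
  return mode === "all"
    ? filterTags.every((t) => memoryTags.includes(t))
    : filterTags.some((t) => memoryTags.includes(t));
}
```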
Returns:
interface SearchResult extends MemoryEntry {
score: number; // Similarity score (0-1)
strategy: "semantic" | "keyword" | "recent";
highlights?: string[]; // Matched snippets
explanation?: string; // Cloud Mode: why matched
}
Example 1: Default (Vector only - Batteries-Included)
// v0.30.0+: Embeddings auto-generated when embedding config is set
const memories = await cortex.memory.search(
"user-123-personal",
"user preferences",
{
// No manual embedding needed! Auto-generated from query.
userId: "user-123",
tags: ["preferences"],
minImportance: 50,
limit: 10,
},
);
memories.forEach((m) => {
console.log(`${m.content} (score: ${m.score})`);
console.log(` conversationRef: ${m.conversationRef?.conversationId}`); // Reference only
});
Example 2: With ACID enrichment
const enriched = await cortex.memory.search(
"user-123-personal",
"user preferences",
{
// Embedding auto-generated!
userId: "user-123",
enrichConversation: true, // Fetch ACID conversations too
},
);
enriched.forEach((m) => {
// Vector data
console.log("Vector content:", m.memory.content);
console.log("Score:", m.score);
// ACID data (if conversationRef exists)
if (m.conversation) {
console.log(
"Full conversation:",
m.conversation.messages.length,
"messages",
);
console.log("Source message:", m.sourceMessages?.[0]?.text);
}
});
Comparison:
// Layer 2 directly (Vector only)
const vectorResults = await cortex.vector.search(
"user-123-personal",
query,
options,
);
// Layer 4 default (same as Layer 2, but can enrich)
const results = await cortex.memory.search("user-123-personal", query, options);
// Layer 4 enriched (Vector + ACID)
const enriched = await cortex.memory.search("user-123-personal", query, {
...options,
enrichConversation: true,
});
Errors:
- CortexError('INVALID_AGENT_ID') - Agent ID is invalid
- CortexError('INVALID_EMBEDDING_DIMENSION') - Embedding dimension mismatch
- CortexError('CONVEX_ERROR') - Database error
See Also:
get()
Layer 4 Operation - Get Vector memory with optional ACID conversation retrieval.
Signature:
cortex.memory.get(
memorySpaceId: string,
memoryId: string,
options?: GetOptions
): Promise<MemoryEntry | EnrichedMemory | null>
Parameters:
interface GetOptions {
includeConversation?: boolean; // Fetch ACID conversation too (default: false)
}
Returns:
// Default (includeConversation: false)
MemoryEntry | null;
// With includeConversation: true
interface EnrichedMemory {
memory: MemoryEntry; // Vector Layer 2 data
conversation?: Conversation; // ACID Layer 1 data (if conversationRef exists)
sourceMessages?: Message[]; // Specific messages that informed this memory
}
Side Effects:
- Increments accessCount
- Updates lastAccessed timestamp
Example 1: Default (Vector only)
const memory = await cortex.memory.get("user-123-personal", "mem_abc123");
if (memory) {
console.log(memory.content); // Vector content
console.log(`Version: ${memory.version}`);
console.log(`conversationRef:`, memory.conversationRef); // Reference only
}
Example 2: With ACID conversation
const enriched = await cortex.memory.get("user-123-personal", "mem_abc123", {
includeConversation: true,
});
if (enriched) {
// Layer 2 (Vector)
console.log("Vector content:", enriched.memory.content);
console.log("Version:", enriched.memory.version);
// Layer 1 (ACID) - automatically fetched
if (enriched.conversation) {
console.log("Conversation ID:", enriched.conversation.conversationId);
console.log("Total messages:", enriched.conversation.messages.length);
console.log("Source message:", enriched.sourceMessages?.[0]?.text);
}
}
Comparison:
// Layer 2 directly (fast, Vector only)
const vectorMem = await cortex.vector.get("user-123-personal", "mem_abc123");
// Layer 4 default (same as Layer 2)
const mem = await cortex.memory.get("user-123-personal", "mem_abc123");
// Layer 4 enriched (Vector + ACID)
const enriched = await cortex.memory.get("user-123-personal", "mem_abc123", {
includeConversation: true,
});
Errors:
- CortexError('INVALID_AGENT_ID') - Agent ID is invalid
- CortexError('MEMORY_NOT_FOUND') - Memory doesn't exist
- CortexError('PERMISSION_DENIED') - Agent doesn't own this memory
See Also:
store()
Layer 4 Operation - Stores in Vector with optional fact extraction.
Store a new memory for an agent. Use this for non-conversation memories (system, tool). For conversation memories, prefer remember().
Signature:
cortex.memory.store(
memorySpaceId: string,
input: StoreMemoryInput
): Promise<StoreMemoryResult>
Parameters:
interface StoreMemoryInput {
// Content (required)
content: string; // The information to remember
contentType: "raw" | "summarized"; // Type of content
// Embedding (optional but preferred)
embedding?: number[]; // Vector for semantic search
// Context
userId?: string; // User this relates to
// Source (required)
source: {
type: "conversation" | "system" | "tool" | "a2a";
userId?: string;
userName?: string;
timestamp: Date;
};
// Layer 1 References (optional - link to ACID stores)
// ONE of these may be present (mutually exclusive)
conversationRef?: {
// Layer 1a: Private conversations
conversationId: string; // Which conversation
messageIds: string[]; // Specific message(s)
};
immutableRef?: {
// Layer 1b: Shared immutable data
type: string; // Entity type
id: string; // Logical ID
version?: number; // Specific version (optional)
};
mutableRef?: {
// Layer 1c: Shared mutable data (snapshot)
namespace: string;
key: string;
snapshotValue: any; // Value at indexing time
snapshotAt: Date;
};
// Metadata (required)
metadata: {
importance: number; // 0-100
tags: string[]; // Categorization
[key: string]: any; // Custom fields
};
}
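The three Layer 1 references are mutually exclusive, so at most one may be set on a given memory. A validation sketch of that rule (hypothetical helper, not part of the SDK):

```typescript
// Hypothetical: enforce the "at most one Layer 1 ref" rule described above.
function validateRefs(input: {
  conversationRef?: unknown;
  immutableRef?: unknown;
  mutableRef?: unknown;
}): void {
  const present = [input.conversationRef, input.immutableRef, input.mutableRef]
    .filter((ref) => ref !== undefined).length;
  if (present > 1) {
    throw new Error(
      "conversationRef, immutableRef, and mutableRef are mutually exclusive",
    );
  }
}
```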
Returns:
interface StoreMemoryResult {
memory: MemoryEntry; // The stored memory entry
facts: FactRecord[]; // Extracted facts (if fact extraction is configured)
}
interface MemoryEntry {
_id: string; // Convex internal ID
memoryId: string; // Cortex memory ID (use this for API calls)
memorySpaceId: string;
tenantId?: string; // Multi-tenancy: SaaS platform isolation
participantId?: string; // Hive Mode: who stored this memory
userId?: string; // For user-owned memories
agentId?: string; // For agent-owned memories
content: string;
contentType: "raw" | "summarized" | "fact";
embedding?: number[];
sourceType: "conversation" | "system" | "tool" | "a2a" | "fact-extraction";
sourceUserId?: string; // User who triggered the source event
sourceUserName?: string; // Display name of source user
sourceTimestamp: number; // When the source event occurred (Unix ms)
messageRole?: "user" | "agent" | "system"; // For semantic search weighting
conversationRef?: ConversationRef;
immutableRef?: ImmutableRef; // Link to Layer 1b immutable store
mutableRef?: MutableRef; // Link to Layer 1c mutable store
factsRef?: FactsRef; // Link to Layer 3 fact
importance: number; // 0-100 (direct field, not nested)
tags: string[]; // (direct field, not nested)
version: number; // Always 1 for new
previousVersions: MemoryVersion[]; // Empty for new
createdAt: number; // Unix timestamp in milliseconds
updatedAt: number; // Unix timestamp in milliseconds
lastAccessed?: number; // Unix timestamp in milliseconds
accessCount: number; // Always 0 for new
// Enrichment fields (v0.15.0+) - for bullet-proof retrieval
enrichedContent?: string; // Concatenated searchable content for embedding
factCategory?: string; // Category for filtering (e.g., "addressing_preference")
}
All timestamps (createdAt, updatedAt, lastAccessed, sourceTimestamp) are Unix timestamps in milliseconds (not JavaScript Date objects).
Example 1: Conversation Memory (conversationRef required)
// FIRST: Store in ACID (you must do this first for conversations)
const msg = await cortex.conversations.addMessage("conv-456", {
role: "user",
text: "The password is Blue",
userId: "user-123",
});
// THEN: Store in Vector (with conversationRef linking to ACID)
const memory = await cortex.vector.store("user-123-personal", {
content: "The password is Blue",
contentType: "raw",
embedding: await embed("The password is Blue"),
userId: "user-123",
source: {
type: "conversation", // ← Conversation type
userId: "user-123",
userName: "Alex Johnson",
// timestamp is optional - auto-set by backend
},
conversationRef: {
// ← REQUIRED for conversations
conversationId: "conv-456",
messageIds: [msg.id], // From ACID message
},
metadata: {
importance: 100,
tags: ["password", "security"],
},
});
// Return uses memoryId (not id) and flattened fields
console.log(memory.memoryId); // "mem-1735689600000-abc123xyz"
console.log(memory.conversationRef.conversationId); // "conv-456"
console.log(memory.importance); // 100 (flattened from input)
console.log(memory.tags); // ["password", "security"]
When storing, you provide metadata.importance and metadata.tags in the input. The returned MemoryEntry has these as top-level fields: importance and tags.
Example 2: System Memory (no conversationRef)
// No ACID storage needed - this isn't from a conversation
const result = await cortex.memory.store("user-123-personal", {
content: "Agent started successfully at 10:00 AM",
contentType: "raw",
source: {
type: "system", // ← System type
timestamp: new Date(),
},
// No conversationRef - not from a conversation
metadata: {
importance: 20,
tags: ["system", "status"],
},
});
// Access the stored memory and any extracted facts
console.log(result.memory.memoryId); // "mem-1735689600000-xyz789"
console.log(result.memory.content); // "Agent started successfully at 10:00 AM"
console.log(result.facts); // [] (no facts extracted for system memories)
Example 3: Use remember() - recommended for conversations
// Helper does both steps automatically
const result = await cortex.memory.remember({
memorySpaceId: "agent-1",
conversationId: "conv-456",
userMessage: "The password is Blue",
agentResponse: "I'll remember that!",
userId: "user-123",
userName: "Alex",
});
// Automatically:
// 1. Stored 2 messages in ACID
// 2. Created 2 vector memories with conversationRef
Errors:
- CortexError('INVALID_AGENT_ID') - Agent ID is invalid
- CortexError('INVALID_CONTENT') - Content is empty or too large
- CortexError('INVALID_IMPORTANCE') - Importance not in 0-100 range
- CortexError('CONVEX_ERROR') - Database error
See Also:
update()
Update a single memory by ID. Automatically creates new version.
Signature:
cortex.memory.update(
memorySpaceId: string,
memoryId: string,
updates: MemoryUpdate,
options?: UpdateMemoryOptions
): Promise<UpdateMemoryResult>
Parameters:
interface MemoryUpdate {
content?: string;
embedding?: number[];
importance?: number; // Direct field (0-100)
tags?: string[]; // Direct field (replaces existing tags)
}
interface UpdateMemoryOptions {
// Note: syncToGraph option removed in v0.29.0+ - sync is automatic when CORTEX_GRAPH_SYNC=true
}
Unlike the nested metadata structure shown in store() input, updates use flattened fields for importance and tags.
Returns:
interface UpdateMemoryResult {
memory: MemoryEntry; // Updated memory with incremented version
factsReextracted?: FactRecord[]; // Facts re-extracted from updated content (if configured)
}
Side Effects:
- Creates new version (v2, v3, etc.)
- Preserves previous version in previousVersions (subject to retention)
- Updates updatedAt timestamp
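The versioning behavior can be sketched as a pure function. This is an assumed shape for illustration; the real MemoryEntry carries many more fields:

```typescript
// Illustrative sketch of update() versioning semantics.
interface VersionedMemory {
  content: string;
  version: number;
  previousVersions: { content: string; version: number }[];
}

function applyUpdate(mem: VersionedMemory, newContent: string): VersionedMemory {
  return {
    content: newContent,
    version: mem.version + 1, // v1 -> v2 -> v3 ...
    previousVersions: [
      ...mem.previousVersions,
      { content: mem.content, version: mem.version }, // preserve old version
    ],
  };
}
```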
Example:
// Update password memory (creates version 2)
const result = await cortex.memory.update(
"user-123-personal",
"mem-1735689600000-abc123",
{
content: "The password is Green now",
embedding: await embed("The password is Green now"),
importance: 100, // Direct field (not nested in metadata)
tags: ["password", "security", "updated"], // Replaces existing tags
},
);
// Access the updated memory
console.log(result.memory.version); // 2
console.log(result.memory.content); // "The password is Green now"
console.log(result.memory.importance); // 100
console.log(result.memory.previousVersions[0].content); // "The password is Blue"
// Check if facts were re-extracted
if (result.factsReextracted?.length) {
console.log("Re-extracted facts:", result.factsReextracted);
}
The update() method does NOT support updating conversationRef. To link to a new ACID conversation, use store() to create a new memory.
Errors:
- CortexError('MEMORY_NOT_FOUND') - Memory doesn't exist
- CortexError('PERMISSION_DENIED') - Agent doesn't own this memory
- CortexError('INVALID_UPDATE') - Update data is invalid
See Also:
updateMany()
Bulk update memories matching filters.
Signature:
cortex.memory.updateMany(
memorySpaceId: string,
filters: UniversalFilters,
updates: MemoryUpdate
): Promise<UpdateManyResult>
Parameters:
- memorySpaceId (string) - Memory space that contains the memories
- filters (UniversalFilters) - Same filters as search()
- updates (MemoryUpdate) - Fields to update
Returns:
interface UpdateManyResult {
updated: number; // Count of updated memories
memoryIds: string[]; // IDs of updated memories
newVersions: number[]; // New version numbers
}
Example:
// Boost importance of frequently accessed memories
const result = await cortex.memory.updateMany(
"agent-1",
{
accessCount: { $gte: 10 },
},
{
importance: 75, // Bump to high (70-89 range) - flattened field, not nested
},
);
console.log(`Updated ${result.updated} memories`);
// Add tag to all old memories
await cortex.memory.updateMany(
"agent-1",
{
createdBefore: new Date("2025-01-01"),
},
{
tags: ["legacy"], // Replaces existing tags (per MemoryUpdate)
},
);
Errors:
- CortexError('INVALID_FILTERS') - Filters are malformed
- CortexError('NO_MEMORIES_MATCHED') - No memories match filters
See Also:
delete()
Layer 4 Operation - Deletes from Vector only (preserves ACID).
Signature:
cortex.memory.delete(
memorySpaceId: string,
memoryId: string,
options?: DeleteMemoryOptions
): Promise<DeleteMemoryResult>
Parameters:
- memorySpaceId (string) - Memory space that contains the memory
- memoryId (string) - Memory to delete
interface DeleteMemoryOptions {
// Note: syncToGraph option removed in v0.29.0+ - graph cleanup is automatic when CORTEX_GRAPH_SYNC=true
}
Returns:
interface DeleteMemoryResult {
deleted: boolean; // True if successfully deleted
memoryId: string; // ID of deleted memory
factsDeleted: number; // Number of associated facts cascade deleted
factIds: string[]; // IDs of deleted facts
}
Side Effects:
- Deletes memory from Vector layer only
- Cascade deletes associated facts from Layer 3
- Preserves ACID conversation (if conversationRef exists)
- Use forget() if you need to delete from both layers
Example:
const result = await cortex.memory.delete("user-123-personal", "mem_abc123");
console.log(`Deleted: ${result.deleted}`); // true
console.log(`Memory ID: ${result.memoryId}`); // 'mem_abc123'
console.log(`Facts deleted: ${result.factsDeleted}`); // e.g., 3
console.log(`Fact IDs: ${result.factIds}`); // e.g., ['fact-1', 'fact-2', 'fact-3']
// ACID conversation still accessible if needed
// Use cortex.conversations.get(conversationId) to retrieve original
Comparison:
// Layer 2 directly (Vector only, explicit)
await cortex.vector.delete("user-123-personal", "mem_abc123");
// Layer 4 (same as Layer 2, but preserves ACID)
await cortex.memory.delete("user-123-personal", "mem_abc123");
// Vector deleted, ACID preserved
// Layer 4 forget() (delete from both - see below)
await cortex.memory.forget("user-123-personal", "mem_abc123", {
deleteConversation: true,
});
// Vector AND ACID deleted
Errors:
- CortexError('MEMORY_NOT_FOUND') - Memory doesn't exist
- CortexError('PERMISSION_DENIED') - Agent doesn't own this memory
See Also:
forget()
Layer 4 Operation - Delete from both Vector and ACID (complete removal).
Signature:
cortex.memory.forget(
memorySpaceId: string,
memoryId: string,
options?: ForgetOptions
): Promise<ForgetResult>
Parameters:
interface ForgetOptions {
deleteConversation?: boolean; // Delete ACID conversation too (default: false)
deleteEntireConversation?: boolean; // Delete whole conversation vs just message (default: false)
}
Returns:
interface ForgetResult {
memoryDeleted: boolean; // Vector deletion
conversationDeleted: boolean; // ACID deletion
messagesDeleted: number; // ACID messages deleted
restorable: boolean; // False
}
Example:
// Delete memory + its source message from ACID
const result = await cortex.memory.forget("user-123-personal", "mem_abc123", {
deleteConversation: true,
});
console.log(`Memory deleted from Vector: ${result.memoryDeleted}`);
console.log(`ACID messages deleted: ${result.messagesDeleted}`);
console.log(`Restorable: ${result.restorable}`); // false - gone from both layers
// WARNING: Use carefully! This is permanent across both layers.
Warning: forget() is destructive. Use delete() to preserve ACID audit trail.
Use cases for forget():
- User requests complete data deletion (GDPR)
- Removing sensitive information completely
- Test data cleanup
See Also:
deleteMany()
Bulk delete memories matching filters.
Signature:
cortex.memory.deleteMany(
memorySpaceId: string,
filters: UniversalFilters,
options?: DeleteOptions
): Promise<DeletionResult>
Parameters:
interface DeleteManyFilter {
memorySpaceId: string;
userId?: string; // Filter by user
sourceType?: "conversation" | "system" | "tool" | "a2a"; // Filter by source
}
Planned Features (Not Yet Implemented):
- dryRun - Preview what would be deleted without actually deleting
- requireConfirmation - Prompt when deletion count exceeds threshold
- confirmationThreshold - Threshold for auto-confirmation (default: 10)
Returns:
interface DeleteManyResult {
deleted: number; // Count of deleted memories
memoryIds: string[]; // IDs of deleted memories
}
The Layer 4 memory.deleteMany() API includes additional return fields (restorable, affectedUsers, wouldDelete, memories) that are not present in the Layer 2 vector.deleteMany() API.
Example:
// dryRun preview is planned but not yet implemented (see above),
// so double-check your filters before deleting.
const result = await cortex.memory.deleteMany("user-123-personal", {
importance: { $lte: 30 },
accessCount: { $lte: 1 },
createdBefore: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000),
});
console.log(`Deleted ${result.deleted} memories`);
console.log(`Memory IDs: ${result.memoryIds.join(", ")}`);
Errors:
- CortexError('INVALID_FILTERS') - Filters are malformed
- CortexError('DELETION_CANCELLED') - User cancelled confirmation
See Also:
count()
Count memories matching filters without retrieving them.
Signature:
cortex.vector.count(
filter: CountMemoriesFilter
): Promise<number>
Parameters:
interface CountMemoriesFilter {
memorySpaceId: string; // Required
userId?: string; // Filter by user
sourceType?: "conversation" | "system" | "tool" | "a2a"; // Filter by source
}
Planned Features (Not Yet Functional):
- tenantId - Multi-tenancy filter (defined in type but not passed to backend)
- participantId - Hive Mode filter (defined in type but not passed to backend)
Returns:
number - Count of matching memories
Example:
// Total memories in a space
const total = await cortex.vector.count({
memorySpaceId: "user-123-personal",
});
// Count memories for a specific user
const userCount = await cortex.vector.count({
memorySpaceId: "user-123-personal",
userId: "user-123",
});
// Count conversation-sourced memories
const convCount = await cortex.vector.count({
memorySpaceId: "user-123-personal",
sourceType: "conversation",
});
console.log(`Found ${convCount} conversation-sourced memories`);
Errors:
- `CortexError('INVALID_FILTERS')` - Filters are malformed
list()
List memories with filtering.
Signature:
cortex.memory.list(
filter: ListMemoriesFilter
): Promise<MemoryEntry[] | EnrichedMemory[]>
Parameters:
interface ListMemoriesFilter {
memorySpaceId: string; // Required
userId?: string; // Filter by user
sourceType?: "conversation" | "system" | "tool" | "a2a"; // Filter by source
limit?: number; // Default: 100
enrichFacts?: boolean; // Include related facts (returns EnrichedMemory[])
}
Planned Features (Not Yet Functional):
- `tenantId` - Multi-tenancy filter (defined in type but not passed to backend)
- `participantId` - Hive Mode filter (defined in type but not passed to backend)
Returns:
MemoryEntry[] | EnrichedMemory[] // Array of memory entries
Returns EnrichedMemory[] when enrichFacts: true is specified, otherwise returns MemoryEntry[].
Example:
// Basic listing
const memories = await cortex.memory.list({
memorySpaceId: "user-123-personal",
limit: 50,
});
console.log(`Retrieved ${memories.length} memories`);
// Filtered listing by user
const userMemories = await cortex.memory.list({
memorySpaceId: "user-123-personal",
userId: "user-123",
limit: 100,
});
// Filtered listing by source type
const conversationMemories = await cortex.memory.list({
memorySpaceId: "user-123-personal",
sourceType: "conversation",
limit: 50,
});
// With enriched facts
const enrichedMemories = await cortex.memory.list({
memorySpaceId: "user-123-personal",
enrichFacts: true,
limit: 25,
});
enrichedMemories.forEach((m) => {
console.log(`Memory: ${m.memory.content}`);
console.log(`Facts: ${m.facts?.length ?? 0}`);
});
Errors:
- `CortexError('INVALID_FILTERS')` - Filters are malformed
- `CortexError('INVALID_PAGINATION')` - Invalid limit/offset
export()
Export memories to JSON or CSV format.
Signature:
cortex.memory.export(
memorySpaceId: string,
options?: ExportOptions
): Promise<ExportResult>
Parameters:
interface ExportOptions {
memorySpaceId: string;
userId?: string; // Filter by user
format: "json" | "csv";
includeEmbeddings?: boolean; // Include embedding vectors in export
}
Planned Features (Not Yet Implemented):
- `outputPath` - Write directly to a file path
- `includeVersionHistory` - Include previousVersions array
- `includeConversationContext` - Fetch and include ACID conversations
Returns:
interface ExportResult {
format: string; // "json" or "csv"
data: string; // The exported data as string
count: number; // Number of memories exported
exportedAt: number; // Unix timestamp in milliseconds
}
Example:
// Export all memories for a user (GDPR)
const userData = await cortex.memory.export("user-123-personal", {
userId: "user-123",
format: "json",
});
console.log(`Exported ${userData.count} memories`);
// Export everything, embeddings included, for a full backup
const backup = await cortex.memory.export("user-123-personal", {
format: "json",
includeEmbeddings: true,
});
console.log(`Backup created at ${new Date(backup.exportedAt).toISOString()}`);
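Since `outputPath` is not yet implemented, you can persist the returned data yourself. A minimal sketch, assuming the `ExportResult` shape from the Returns section above; `exportFilename` and `saveExport` are hypothetical helpers, not SDK exports:

```typescript
import { writeFileSync } from "node:fs";

// Shape from the Returns section above.
interface ExportResult {
  format: string;     // "json" or "csv"
  data: string;       // The exported data as a string
  count: number;      // Number of memories exported
  exportedAt: number; // Unix timestamp in milliseconds
}

// Hypothetical helper: derive a filename from the export metadata.
function exportFilename(result: ExportResult): string {
  return `memories-${result.exportedAt}.${result.format}`;
}

// Write the exported data to disk (the SDK does not do this for you yet).
function saveExport(result: ExportResult, dir = "."): string {
  const path = `${dir}/${exportFilename(result)}`;
  writeFileSync(path, result.data, "utf8");
  return path;
}
```

Pass the result of `cortex.memory.export()` straight into `saveExport` once the export resolves.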
Errors:
- `CortexError('INVALID_FORMAT')` - Format not supported
- `CortexError('EXPORT_FAILED')` - File write error
archive()
Soft delete a single memory (move to archive storage, recoverable).
Signature:
cortex.memory.archive(
memorySpaceId: string,
memoryId: string
): Promise<ArchiveResult>
Parameters:
- `memorySpaceId` (string) - Memory space that contains the memory
- `memoryId` (string) - Memory ID to archive
Returns:
// Layer 2 (vector.archive) return type:
interface VectorArchiveResult {
archived: boolean; // True if successfully archived
memoryId: string; // ID of archived memory
restorable: boolean; // True (can be restored)
}
// Layer 4 (memory.archive) return type includes additional fields:
interface MemoryArchiveResult extends VectorArchiveResult {
factsArchived: number; // Number of associated facts archived (Layer 4 only)
factIds: string[]; // IDs of archived facts (Layer 4 only)
}
The factsArchived and factIds fields are only returned by the Layer 4 memory.archive() API, not the Layer 2 vector.archive() API.
Example:
// Archive a specific memory
const result = await cortex.memory.archive("user-123-personal", "mem_abc123");
console.log(`Archived: ${result.archived}`);
console.log(`Memory ID: ${result.memoryId}`);
console.log(`Restorable: ${result.restorable}`);
// Restore from archive if needed
const restored = await cortex.memory.restoreFromArchive(
"user-123-personal",
"mem_abc123",
);
Errors:
- `CortexError('INVALID_FILTERS')` - Filters are malformed
- `CortexError('ARCHIVE_FAILED')` - Archive operation failed
restoreFromArchive()
Restore a previously archived memory back to active status.
Signature:
cortex.memory.restoreFromArchive(
memorySpaceId: string,
memoryId: string
): Promise<RestoreResult>
Parameters:
- `memorySpaceId` (string) - Memory space that contains the archived memory
- `memoryId` (string) - ID of the archived memory to restore
Returns:
interface RestoreResult {
restored: boolean; // True if successfully restored
memoryId: string; // ID of restored memory
memory: MemoryEntry; // The restored memory entry
}
Example:
// Restore a specific archived memory
const result = await cortex.memory.restoreFromArchive(
"user-123-personal",
"mem_abc123",
);
if (result.restored) {
console.log(`Restored memory: ${result.memoryId}`);
console.log(`Content: ${result.memory.content}`);
}
Errors:
- `CortexError('MEMORY_NOT_FOUND')` - Memory doesn't exist
- `CortexError('MEMORY_NOT_ARCHIVED')` - Memory is not in archived state
- `CortexError('PERMISSION_DENIED')` - Memory space doesn't own this memory
See Also:
- archive() - Archive memories (soft delete)
- Soft Delete
Version Operations
getVersion()
Retrieve a specific version of a memory.
Signature:
cortex.memory.getVersion(
memorySpaceId: string,
memoryId: string,
version: number
): Promise<MemoryVersion | null>
Parameters:
- `memorySpaceId` (string) - Memory space that contains the memory
- `memoryId` (string) - Memory ID
- `version` (number) - Version number to retrieve
Returns:
- `MemoryVersion` - The specific version
- `null` - If the version doesn't exist or was cleaned up by retention
Example:
// Get version 1
const v1 = await cortex.memory.getVersion("user-123-personal", "mem_abc123", 1);
if (v1) {
console.log(`v1 content: ${v1.content}`);
console.log(`v1 timestamp: ${v1.timestamp}`);
if (v1.conversationRef) {
console.log(`v1 ACID source: ${v1.conversationRef.conversationId}`);
}
} else {
console.log(
"Version 1 cleaned up by retention (but ACID source still available)",
);
}
Errors:
- `CortexError('MEMORY_NOT_FOUND')` - Memory doesn't exist
- `CortexError('VERSION_NOT_FOUND')` - Version doesn't exist
getHistory()
Get all versions of a memory.
Signature:
cortex.memory.getHistory(
memorySpaceId: string,
memoryId: string
): Promise<MemoryVersion[]>
Parameters:
- `memorySpaceId` (string) - Memory space that contains the memory
- `memoryId` (string) - Memory ID
Returns:
`MemoryVersion[]` - Array of all versions (subject to retention)
Example:
const history = await cortex.memory.getHistory(
"user-123-personal",
"mem_abc123",
);
console.log(`Memory has ${history.length} versions:`);
history.forEach((v) => {
console.log(`v${v.version} (${v.timestamp}): ${v.content}`);
if (v.conversationRef) {
console.log(` ACID: ${v.conversationRef.conversationId}`);
}
});
// Note: With default retention=10, only last 10 versions returned
// But ACID conversations still have all source messages!
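Because retention can prune older versions, it helps to know which version numbers are missing before falling back to the ACID conversation. An illustrative helper (not part of the SDK) that compares `getHistory()` output against the current version number:

```typescript
// Minimal shape of a version entry; the real MemoryVersion has more fields.
interface VersionLike {
  version: number;
}

// List version numbers that retention has pruned, given the versions
// actually returned by getHistory() and the memory's current version.
function missingVersions(history: VersionLike[], currentVersion: number): number[] {
  const present = new Set(history.map((v) => v.version));
  const missing: number[] = [];
  for (let v = 1; v <= currentVersion; v++) {
    if (!present.has(v)) missing.push(v);
  }
  return missing;
}
```

Any version numbers it reports can still be recovered from the source conversation via each surviving version's `conversationRef`.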
Errors:
CortexError('MEMORY_NOT_FOUND')- Memory doesn't exist
getAtTimestamp()
Get memory state at a specific point in time (temporal query).
Signature:
cortex.vector.getAtTimestamp(
memorySpaceId: string,
memoryId: string,
timestamp: number | Date
): Promise<MemoryVersion | null>
Parameters:
- `memorySpaceId` (string) - Memory space that contains the memory
- `memoryId` (string) - Memory ID
- `timestamp` (number | Date) - Point in time to query (Unix ms or Date object)
Returns:
- `MemoryVersion` - Version that was current at that time
- `null` - If the memory didn't exist at that time or the version was cleaned up
Example:
// What was the password on August 1st?
const historicalMemory = await cortex.vector.getAtTimestamp(
"agent-1",
"mem_password",
new Date("2025-08-01T00:00:00Z"),
);
if (historicalMemory) {
console.log(`Password on Aug 1: ${historicalMemory.content}`);
// Can still get ACID source even if version cleaned up
if (historicalMemory.conversationRef) {
const conversation = await cortex.conversations.get(
historicalMemory.conversationRef.conversationId,
);
const sourceMsg = conversation.messages.find((m) =>
historicalMemory.conversationRef.messageIds.includes(m.id),
);
console.log(`Original message: ${sourceMsg?.text}`);
}
} else {
console.log("Version not available (cleaned up), check ACID conversations");
}
Errors:
- `CortexError('MEMORY_NOT_FOUND')` - Memory doesn't exist
- `CortexError('INVALID_TIMESTAMP')` - Timestamp is invalid
Universal Filters Reference
All filter options that work across operations:
interface UniversalFilters {
// Identity
userId?: string;
// Tags
tags?: string[];
tagMatch?: "any" | "all";
// Importance (0-100)
importance?: number | RangeQuery;
minImportance?: number; // Shorthand for { $gte }
// Dates
createdBefore?: Date;
createdAfter?: Date;
updatedBefore?: Date;
updatedAfter?: Date;
lastAccessedBefore?: Date;
lastAccessedAfter?: Date;
// Access patterns
accessCount?: number | RangeQuery;
version?: number | RangeQuery;
// Source
"source.type"?: "conversation" | "system" | "tool" | "a2a";
// Content
contentType?: "raw" | "summarized";
// ACID link
"conversationRef.conversationId"?: string;
// Metadata
metadata?: Record<string, any>;
// Results
limit?: number;
offset?: number;
sortBy?: string;
sortOrder?: "asc" | "desc";
}
Operations supporting universal filters:
`search()`, `count()`, `list()`, `updateMany()`, `deleteMany()`, `archive()`, `export()`
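Range filters such as `importance: { $lte: 30 }` use Mongo-style comparison operators, and the server applies them during the query. The matcher below is a hypothetical client-side sketch of the same `$gte`/`$lte`/`$gt`/`$lt` semantics, useful for unit-testing filter values locally; it is not an SDK export:

```typescript
// RangeQuery mirrors the operator shape accepted by UniversalFilters.
type RangeQuery = { $gte?: number; $lte?: number; $gt?: number; $lt?: number };

// Return true when `value` satisfies a bare number (equality) or a
// range query (all supplied bounds must hold).
function matchesRange(value: number, q: number | RangeQuery): boolean {
  if (typeof q === "number") return value === q;
  if (q.$gte !== undefined && value < q.$gte) return false;
  if (q.$lte !== undefined && value > q.$lte) return false;
  if (q.$gt !== undefined && value <= q.$gt) return false;
  if (q.$lt !== undefined && value >= q.$lt) return false;
  return true;
}
```

For example, `matchesRange(25, { $lte: 30 })` holds, so a memory with importance 25 would match the cleanup filter used in the deleteMany() example.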
Configuration
Version Retention
Configure per-agent or globally:
// Global configuration
const cortex = new Cortex({
convexUrl: process.env.CONVEX_URL,
defaultVersionRetention: 10, // Keep last 10 versions (default)
});
// Per-agent configuration
await cortex.agents.configure("audit-agent", {
memoryVersionRetention: -1, // Unlimited (keep all versions)
});
await cortex.agents.configure("temp-agent", {
memoryVersionRetention: 1, // Only current (no history)
});
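The retention settings above can be modeled simply: `-1` keeps all versions, `1` keeps only the current version, and `n` keeps the newest `n`. A hypothetical sketch of that policy (illustrative only; the SDK enforces it server-side):

```typescript
// Given the version numbers that exist and a retention setting,
// return the version numbers that survive pruning.
// retention = -1 -> unlimited; retention = n -> keep the newest n.
function retainedVersions(all: number[], retention: number): number[] {
  if (retention === -1) return all;
  const sorted = [...all].sort((a, b) => a - b);
  return sorted.slice(-retention);
}
```

With the default retention of 10, a memory on version 25 would only have versions 16 through 25 available via getHistory(), while the ACID conversation retains every source message.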
Graph-Lite Capabilities
Memory entries participate in the Cortex graph through references:
Memory as Graph Node:
- Each memory is a node in the implicit graph
- Connected to other entities via reference fields
Edges (Relationships):
- `conversationRef` → Links to Conversation (ACID source)
- `immutableRef` → Links to Fact or KB Article
- `userId` → Links to User
- `memorySpaceId` → Links to Memory Space
- `participantId` → Links to Participant (Hive Mode)
- `contextId` (in metadata) → Links to Context
Graph Queries via Memory API:
// Find all memories in a workflow (via contextId edge)
const workflowMemories = await cortex.memory.search("user-123-personal", "*", {
metadata: { contextId: "ctx-001" },
});
// Trace memory to source conversation (via conversationRef edge)
const enriched = await cortex.memory.get("user-123-personal", memoryId, {
includeConversation: true, // ← Follow conversationRef edge
});
// Get all user's memories across agents (via userId edge)
const agents = await cortex.agents.list();
for (const agent of agents) {
const userMemories = await cortex.memory.search(agent.id, "*", {
userId: "user-123",
});
}
Performance:
- 1-2 hop queries: 10-50ms (direct lookups)
- 3-5 hop queries: 50-200ms (sequential queries)
Learn more: Graph Capabilities Guide
Error Reference
All memory operation errors:
| Error Code | Description | Cause |
|---|---|---|
| `INVALID_MEMORYSPACE_ID` | Memory space ID is invalid | Empty or malformed memorySpaceId |
| `INVALID_MEMORY_SPACE_ID` | Memory space ID is invalid | Empty or malformed memorySpaceId |
| `INVALID_MEMORY_ID` | Memory ID is invalid | Empty or malformed memoryId |
| `INVALID_CONTENT` | Content is invalid | Empty content or > 100KB |
| `INVALID_CONTENT_TYPE` | Content type is invalid | Not "raw", "summarized", or "fact" |
| `INVALID_SOURCE_TYPE` | Source type is invalid | Not a valid source type |
| `INVALID_IMPORTANCE` | Importance out of range | Not in 0-100 |
| `INVALID_IMPORTANCE_RANGE` | Importance out of range | Not in 0-100 |
| `INVALID_MIN_SCORE_RANGE` | Min score out of range | Not in 0-1 |
| `INVALID_LIMIT` | Limit is invalid | Not a positive integer |
| `INVALID_VERSION` | Version is invalid | Not a positive integer >= 1 |
| `INVALID_TIMESTAMP` | Timestamp is invalid | Not a valid number or Date |
| `INVALID_TAGS` | Tags are invalid | Not an array of strings |
| `INVALID_EMBEDDING` | Embedding is invalid | Not an array of numbers |
| `INVALID_EXPORT_FORMAT` | Export format is invalid | Not "json" or "csv" |
| `INVALID_QUERY_CATEGORY` | Query category is invalid | Not a string |
| `MISSING_REQUIRED_FIELD` | Required field missing | Required field not provided |
| `INVALID_EMBEDDING_DIMENSION` | Embedding dimension mismatch | Wrong vector size |
| `MEMORY_NOT_FOUND` | Memory doesn't exist | Invalid memoryId |
| `MEMORY_NOT_ARCHIVED` | Memory is not archived | Cannot restore a non-archived memory |
| `VERSION_NOT_FOUND` | Version doesn't exist | Cleaned up by retention |
| `PERMISSION_DENIED` | Access denied | Memory space doesn't own the memory |
| `INVALID_FILTERS` | Filters malformed | Bad filter syntax |
| `CONVEX_ERROR` | Database error | Convex operation failed |
| `CLOUD_MODE_REQUIRED` | Feature requires Cloud | autoEmbed/autoSummarize in Direct mode |
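Because error codes are machine-readable, handlers can branch on them. A minimal sketch; the `CortexError` class below is a stand-in that mirrors the `code` convention in the table (the real class is exported by the SDK), and the retry policy shown is an assumption, not SDK behavior:

```typescript
// Stand-in mirroring the CortexError code convention from the table above.
class CortexError extends Error {
  constructor(public code: string, message?: string) {
    super(message ?? code);
  }
}

// Transient database failures (CONVEX_ERROR) may be worth retrying;
// validation and not-found errors are deterministic, so retrying won't help.
function isRetryable(err: unknown): boolean {
  return err instanceof CortexError && err.code === "CONVEX_ERROR";
}
```

Wrap SDK calls in a try/catch and consult `isRetryable` (or your own policy) before re-issuing the operation.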
VectorValidationError
The Vector API exports a custom error class for validation failures:
import { VectorValidationError } from "@cortex-ai/sdk/vector";
try {
await cortex.vector.store(memorySpaceId, input);
} catch (error) {
if (error instanceof VectorValidationError) {
console.log(`Validation failed: ${error.message}`);
console.log(`Error code: ${error.code}`); // e.g., "INVALID_IMPORTANCE_RANGE"
console.log(`Field: ${error.field}`); // e.g., "metadata.importance"
}
}
VectorValidationError Properties:
- `message` (string) - Human-readable error message
- `code` (string) - Machine-readable error code
- `field` (string, optional) - Name of the field that caused the error
Next Steps
Questions? Ask in GitHub Discussions.