
API Reference

Info
Last Updated: 2026-01-09 | Version: v0.29.0+

Complete API documentation for @cortexmemory/vercel-ai-provider (SDK v0.21.0+)

createCortexMemory(config)

Creates a memory-augmented model factory with manual memory control methods.

Signature

function createCortexMemory(config: CortexMemoryConfig): CortexMemoryModel;

Parameters

CortexMemoryConfig

| Property | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| convexUrl | string | Yes | - | Convex deployment URL |
| memorySpaceId | string | Yes | - | Memory space for isolation |
| userId | string \| (() => string \| Promise<string>) | Yes | - | User ID (static or function) |
| agentId | string | Yes | - | Agent ID (required v0.17.0+) |
| userName | string | No | 'User' | Display name for user |
| agentName | string | No | agentId | Display name for agent |
| conversationId | string \| (() => string) | No | auto-generated | Conversation ID |
| embeddingProvider | object | No | undefined | Custom embedding provider |
| memorySearchLimit | number | No | 5 | Max memories to retrieve |
| minMemoryRelevance | number | No | 0.7 | Min relevance score (0-1) |
| enableMemorySearch | boolean | No | true | Auto-search before generation |
| enableMemoryStorage | boolean | No | true | Auto-store after generation |
| contextInjectionStrategy | 'system' \| 'user' \| 'custom' | No | 'system' | Where to inject context |
| customContextBuilder | function | No | undefined | Custom context formatter |
| enableFactExtraction | boolean | No | false | Extract facts from conversations |
| factExtractionConfig | object | No | undefined | Fact extraction configuration |
| extractFacts | function | No | undefined | Custom fact extraction |
| enableGraphMemory | boolean | No | false | Sync to graph database |
| graphConfig | object | No | undefined | Graph database configuration |
| hiveMode | object | No | undefined | Cross-app memory config |
| defaultImportance | number | No | 50 | Default importance (0-100) |
| defaultTags | string[] | No | [] | Default tags |
| streamingOptions | object | No | undefined | Streaming enhancement options |
| streamingHooks | object | No | undefined | Real-time streaming callbacks |
| beliefRevision | object \| false | No | undefined | Belief revision configuration |
| enableStreamMetrics | boolean | No | true | Enable streaming metrics |
| layerObserver | object | No | undefined | Layer orchestration observer |
| debug | boolean | No | false | Enable debug logging |
| logger | object | No | console | Custom logger |
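
The object-valued options are detailed in the subsections below. As a starting point, here is a minimal configuration sketch combining several of the scalar options above (the IDs and values are illustrative, not recommendations):

import { createCortexMemory } from "@cortexmemory/vercel-ai-provider";

// Stand-in for your own session lookup (hypothetical helper)
const getCurrentUserId = () => "user-123";

const cortexMemory = createCortexMemory({
  // Required
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: "support-agent",
  userId: getCurrentUserId, // static string or function, per the table
  agentId: "support-bot",

  // Optional tuning
  memorySearchLimit: 8,
  minMemoryRelevance: 0.75,
  contextInjectionStrategy: "system",
  defaultImportance: 60,
  defaultTags: ["support"],
  debug: true,
});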

factExtractionConfig

factExtractionConfig?: {
  model?: string; // Override fact extraction model (default: gpt-4o-mini)
  provider?: 'openai' | 'anthropic'; // Provider to use
}
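
For example, enabling fact extraction with an explicit provider and model override might look like this sketch (the surrounding config is abbreviated to the required fields):

const cortexMemory = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: "my-agent",
  userId: "user-123",
  agentId: "my-assistant",
  enableFactExtraction: true,
  factExtractionConfig: {
    provider: "openai",
    model: "gpt-4o-mini", // the documented default, made explicit here
  },
});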

graphConfig

graphConfig?: {
  uri?: string; // Graph database URI
  username?: string; // Graph database username
  password?: string; // Graph database password
  type?: 'neo4j' | 'memgraph'; // Database type
}
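
Enabling graph memory with explicit credentials might look like this sketch, reading the documented NEO4J_* environment variables by hand (see createCortexMemoryAsync below for the automatic variant):

const cortexMemory = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: "my-agent",
  userId: "user-123",
  agentId: "my-assistant",
  enableGraphMemory: true,
  graphConfig: {
    type: "neo4j",
    uri: process.env.NEO4J_URI,
    username: process.env.NEO4J_USERNAME,
    password: process.env.NEO4J_PASSWORD,
  },
});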

streamingOptions

streamingOptions?: {
  storePartialResponse?: boolean; // Store during streaming
  partialResponseInterval?: number; // Update interval (ms)
  progressiveFactExtraction?: boolean; // Extract facts incrementally
  factExtractionThreshold?: number; // Characters per extraction
  progressiveGraphSync?: boolean; // Sync graph incrementally
  graphSyncInterval?: number; // Graph sync interval (ms)
  partialFailureHandling?: 'store-partial' | 'rollback' | 'retry' | 'best-effort';
  maxRetries?: number; // Max retry attempts
  generateResumeToken?: boolean; // Generate resume tokens
  streamTimeout?: number; // Timeout (ms)
  maxResponseLength?: number; // Max response length
  enableAdaptiveProcessing?: boolean; // Auto-optimize processing
}
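
A sketch of a resilient streaming setup using a few of these knobs (all values illustrative):

const cortexMemory = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: "my-agent",
  userId: "user-123",
  agentId: "my-assistant",
  streamingOptions: {
    storePartialResponse: true,    // persist partial output while streaming
    partialResponseInterval: 2000, // update the stored partial every 2s
    progressiveFactExtraction: true,
    factExtractionThreshold: 500,  // extract facts roughly every 500 characters
    partialFailureHandling: "retry",
    maxRetries: 3,
    streamTimeout: 60_000,         // give up on streams stalled for 60s
  },
});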

streamingHooks

streamingHooks?: {
  onChunk?: (event: {
    chunk: string;
    chunkNumber: number;
    accumulated: string;
    timestamp: number;
    estimatedTokens: number;
  }) => void | Promise<void>;
  onProgress?: (event: {
    bytesProcessed: number;
    chunks: number;
    elapsedMs: number;
    estimatedCompletion?: number;
    currentPhase?: "streaming" | "fact-extraction" | "storage" | "finalization";
  }) => void | Promise<void>;
  onError?: (error: {
    message: string;
    code?: string;
    phase?: string;
    recoverable?: boolean;
    resumeToken?: string;
  }) => void | Promise<void>;
  onComplete?: (event: {
    fullResponse: string;
    totalChunks: number;
    durationMs: number;
    factsExtracted: number;
  }) => void | Promise<void>;
}
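
Wiring the hooks up might look like this sketch (the console side effects are illustrative):

const cortexMemory = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: "my-agent",
  userId: "user-123",
  agentId: "my-assistant",
  streamingHooks: {
    onChunk: ({ chunkNumber, estimatedTokens }) => {
      console.log(`chunk #${chunkNumber} (~${estimatedTokens} tokens so far)`);
    },
    onProgress: ({ currentPhase, elapsedMs }) => {
      console.log(`${currentPhase ?? "streaming"}: ${elapsedMs}ms elapsed`);
    },
    onError: ({ message, recoverable, resumeToken }) => {
      console.error(`stream error: ${message}`, { recoverable, resumeToken });
    },
    onComplete: ({ totalChunks, durationMs, factsExtracted }) => {
      console.log(
        `done: ${totalChunks} chunks in ${durationMs}ms, ${factsExtracted} facts`,
      );
    },
  },
});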

beliefRevision

beliefRevision?: {
  enabled?: boolean; // Enable belief revision
  slotMatching?: boolean; // Enable slot matching
  llmResolution?: boolean; // Enable LLM-based resolution
} | false;
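
Per the type, you can pass false to disable revision outright, or an object to tune it, as in this sketch:

const cortexMemory = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: "my-agent",
  userId: "user-123",
  agentId: "my-assistant",
  beliefRevision: {
    enabled: true,
    slotMatching: true,   // reconcile new facts against matching slots
    llmResolution: false, // skip LLM-based conflict resolution
  },
});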

layerObserver

layerObserver?: {
  onLayerUpdate?: (event: LayerEvent) => void | Promise<void>;
  onOrchestrationStart?: (orchestrationId: string) => void | Promise<void>;
  onOrchestrationComplete?: (summary: OrchestrationSummary) => void | Promise<void>;
}
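
A sketch of an observer that logs each layer as it settles (the LayerEvent and OrchestrationSummary shapes are documented under Types below):

const cortexMemory = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: "my-agent",
  userId: "user-123",
  agentId: "my-assistant",
  layerObserver: {
    onOrchestrationStart: (id) => console.log(`orchestration ${id} started`),
    onLayerUpdate: (event) => {
      console.log(`[${event.layer}] ${event.status} (${event.latencyMs ?? "?"}ms)`);
    },
    onOrchestrationComplete: (summary) => {
      console.log(`finished in ${summary.totalLatencyMs}ms`, summary.createdIds);
    },
  },
});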

Returns

CortexMemoryModel

A callable that wraps language models and also exposes manual memory methods:

interface CortexMemoryModel {
  // Model wrapping
  (model: LanguageModelV1): LanguageModelV1;

  // Manual memory methods
  search(query: string, options?: SearchOptions): Promise<MemoryEntry[]>;
  remember(
    userMsg: string,
    agentResp: string,
    options?: RememberOptions,
  ): Promise<void>;
  getMemories(options?: { limit?: number }): Promise<MemoryEntry[]>;
  clearMemories(options?: ClearOptions): Promise<number>;
  getConfig(): Readonly<CortexMemoryConfig>;
}
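
In other words, the same value is both a wrapper function and a handle for manual memory operations. A sketch, assuming a cortexMemory factory created as above:

import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Call it as a function to wrap a model...
const model = cortexMemory(openai("gpt-4o-mini"));
const { text } = await generateText({ model, prompt: "Hello!" });

// ...and call its methods for manual memory control.
const related = await cortexMemory.search("greeting preferences", { limit: 3 });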

createCortexMemoryAsync(config)

Async variant of createCortexMemory that reads graph database configuration from environment variables automatically.

Signature

async function createCortexMemoryAsync(
  config: CortexMemoryConfig,
): Promise<CortexMemoryModel>;

Example

// Reads graph config from NEO4J_URI, NEO4J_USERNAME, NEO4J_PASSWORD
const cortexMemory = await createCortexMemoryAsync({
convexUrl: process.env.CONVEX_URL!,
memorySpaceId: "smart-agent",
userId: "user-123",
agentId: "my-assistant",
});

Model Wrapping

Syntax

const wrappedModel = cortexMemory(underlyingModel);

Examples

import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";

const cortexMemory = createCortexMemory({
  convexUrl: process.env.CONVEX_URL!,
  memorySpaceId: "my-agent",
  userId: "user-123",
  agentId: "my-assistant", // Required!
});

// Wrap any AI SDK provider
const gpt4 = cortexMemory(openai("gpt-4o-mini"));
const claude = cortexMemory(anthropic("claude-3-opus"));
const gemini = cortexMemory(google("gemini-pro"));

// Use with AI SDK functions
await streamText({ model: gpt4, messages });
await generateText({ model: claude, messages });
await generateObject({ model: gemini, prompt, schema });

Manual Memory Methods

search(query, options?)

Search memories manually.

async search(
  query: string,
  options?: {
    limit?: number;
    minScore?: number;
    tags?: string[];
    userId?: string;
    embedding?: number[];
    sourceType?: 'conversation' | 'system' | 'tool' | 'a2a';
    minImportance?: number;
  }
): Promise<MemoryEntry[]>

Example:

const memories = await cortexMemory.search("user preferences", {
  limit: 10,
  minScore: 0.8,
  tags: ["important"],
});

console.log(memories);
// [{ content: '...', metadata: {...}, ... }]

remember(userMessage, agentResponse, options?)

Store a conversation manually.

async remember(
  userMessage: string,
  agentResponse: string,
  options?: {
    conversationId?: string;
    generateEmbedding?: (text: string) => Promise<number[]>;
    extractFacts?: (userMsg: string, agentResp: string) => Promise<Fact[]>;
    // Note: syncToGraph removed in v0.29.0+ - sync is automatic when CORTEX_GRAPH_SYNC=true
  }
): Promise<void>

Example:

// Graph sync is automatic when CORTEX_GRAPH_SYNC=true (v0.29.0+)
await cortexMemory.remember(
"My favorite color is blue",
"I will remember that!",
{
conversationId: "conv-123",
// No syncToGraph needed - sync is automatic
},
);

getMemories(options?)

Retrieve stored memories, up to an optional limit.

async getMemories(
  options?: {
    limit?: number;
  }
): Promise<MemoryEntry[]>

Example:

const all = await cortexMemory.getMemories({ limit: 100 });
console.log(`Total memories: ${all.length}`);

clearMemories(options)

Clear memories (requires confirmation).

async clearMemories(
  options: {
    confirm: boolean;
    userId?: string;
    sourceType?: 'conversation' | 'system' | 'tool' | 'a2a';
  }
): Promise<number>

Example:

// Clear all memories
const deleted = await cortexMemory.clearMemories({ confirm: true });
console.log(`Deleted ${deleted} memories`);

// Clear specific user's memories
await cortexMemory.clearMemories({
  confirm: true,
  userId: "user-123",
});

getConfig()

Get current configuration (read-only).

getConfig(): Readonly<CortexMemoryConfig>

Example:

const config = cortexMemory.getConfig();
console.log(`Memory Space: ${config.memorySpaceId}`);
console.log(`Agent: ${config.agentId}`);

Types

MemoryEntry

interface MemoryEntry {
  _id: string;                       // Internal ID
  memoryId: string;                  // Memory identifier
  memorySpaceId: string;             // Memory space for isolation
  tenantId?: string;                 // Tenant identifier
  participantId?: string;            // Participant identifier
  userId?: string;                   // User identifier
  agentId?: string;                  // Agent identifier
  content: string;                   // Memory content
  contentType: ContentType;          // Content type classification
  embedding?: number[];              // Vector embedding
  sourceType: "conversation" | "system" | "tool" | "a2a";
  sourceUserId?: string;             // Source user ID
  sourceUserName?: string;           // Source user name
  sourceTimestamp: number;           // Source timestamp
  messageRole?: "user" | "agent" | "system";
  conversationRef?: ConversationRef;
  immutableRef?: ImmutableRef;
  mutableRef?: MutableRef;
  factsRef?: FactsRef;
  importance: number;                // 0-100
  tags: string[];                    // Tags for categorization
  enrichedContent?: string;          // Enriched/processed content
  factCategory?: string;             // Fact category classification
  version: number;                   // Version number
  previousVersions: MemoryVersion[]; // Version history
  createdAt: number;                 // Creation timestamp
  updatedAt: number;                 // Last update timestamp
  lastAccessed?: number;             // Last access timestamp
  accessCount: number;               // Access count
}
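
Search results can be inspected through these fields; a sketch, assuming a cortexMemory factory as above:

const memories = await cortexMemory.search("favorite color");

for (const m of memories) {
  console.log(`${m.memoryId} [v${m.version}] importance=${m.importance}`);
  console.log(`  ${m.content}`);
  console.log(`  tags: ${m.tags.join(", ")}; accessed ${m.accessCount} times`);
}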

LayerEvent

interface LayerEvent {
  layer:
    | "memorySpace"
    | "user"
    | "agent"
    | "conversation"
    | "vector"
    | "facts"
    | "graph";
  status: "pending" | "in_progress" | "complete" | "error" | "skipped";
  timestamp: number;
  latencyMs?: number;
  data?: {
    id?: string;
    preview?: string;
    metadata?: Record<string, unknown>;
  };
  error?: {
    message: string;
    code?: string;
  };
}

OrchestrationSummary

interface OrchestrationSummary {
  orchestrationId: string;
  totalLatencyMs: number;
  layers: Record<MemoryLayer, LayerEvent>;
  createdIds: {
    conversationId?: string;
    memoryIds?: string[];
    factIds?: string[];
  };
}

Exported Types

export type {
  CortexMemoryConfig,
  CortexMemoryModel,
  ManualMemorySearchOptions,
  ManualRememberOptions,
  ManualClearOptions,
  ContextInjectionStrategy,
  SupportedProvider,
  LayerObserver,
  LayerEvent,
  LayerStatus,
  MemoryLayer,
  OrchestrationSummary,
  RevisionAction,
} from "@cortexmemory/vercel-ai-provider";
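
These types can annotate your own helpers; a sketch with a hypothetical wrapper that applies shared defaults:

import type {
  CortexMemoryConfig,
  LayerEvent,
} from "@cortexmemory/vercel-ai-provider";

// Hypothetical helper: merge project-wide defaults into a config.
function withDefaults(config: CortexMemoryConfig): CortexMemoryConfig {
  return { memorySearchLimit: 8, minMemoryRelevance: 0.75, ...config };
}

const logLayer = (event: LayerEvent) =>
  console.log(`[${event.layer}] ${event.status} at ${event.timestamp}`);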
Info

AI SDK v6 compatibility layer APIs (including isV6Available, createMemoryPrepareCall, CortexCallOptions, etc.) are supported. The Vercel AI Provider is compatible with AI SDK versions 3, 4, 5, and 6.

Next Steps