Five-Minute Quickstart
The Cortex CLI guides you through setting up a complete AI agent with persistent memory. This guide walks you through every option in the cortex init wizard.
Prerequisites: Node.js 20+, npm
Step 1: Install the CLI
$ brew install cortex-memory/tap/cli
Or with npm:
$ npm install -g @cortexmemory/cli
Verify installation:
$ cortex --version
Expect version 0.27.4 or higher.
Step 2: Run the Init Wizard
$ cortex init
Or specify a directory:
$ cortex init my-agent
The wizard walks you through six configuration steps. Let's explore each one.
Init Wizard Options
Option 1: Project Name
$ ? Project name: my-cortex-agent
- Requirements: lowercase letters, numbers, hyphens, and underscores only
- Default: my-cortex-agent
- Creates: a directory with this name in the current folder
If the directory exists and has files, the wizard asks: "Add Cortex to existing project?" This lets you add Cortex to an existing codebase without overwriting your files.
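The "existing project" branch hinges on a simple check: does the target directory already exist and contain files? A minimal sketch of that check (an illustration, not the wizard's actual implementation):

```typescript
import { existsSync, readdirSync, mkdtempSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// True when the target directory exists and is non-empty -- the condition
// under which the wizard offers "Add Cortex to existing project?".
function hasExistingFiles(dir: string): boolean {
  return existsSync(dir) && readdirSync(dir).length > 0;
}
```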
Option 2: Convex Database Setup
$ ? How would you like to set up Convex?
❯ Local development (fast, recommended)
  Create new cloud project
  Use existing Convex project
This is the most important choice. Cortex uses Convex as its backend database.
Local development
What it does:
- Starts a local Convex server on http://127.0.0.1:3210
- No account required
- Data stored locally
- Full feature support, including vector search
Best for:
- Learning Cortex
- Local development
- Offline work
- Quick prototyping
Requires: Nothing extra—works immediately
Create new cloud project
What it does:
- Creates a new project on Convex Cloud
- Deploys your backend to https://[project].convex.cloud
- Persistent data across machines
- Automatic scaling
Best for:
- Production deployments
- Team collaboration
- Sharing your agent
Requires: Free Convex account (wizard prompts login if needed)
Use existing Convex project
What it does:
- Connects to a Convex project you already have
- Deploys Cortex functions to your existing backend
Best for:
- Adding Cortex to an existing Convex app
- Migrating from another setup
Requires: Existing Convex project name
You aren't locked into one deployment: use cortex config add-deployment to add more environments (local, staging, production) anytime.
Option 3: Graph Database (Optional)
$ ? Enable graph database integration? (Y/n)
Graph databases enable relationship queries and knowledge graphs, which are powerful for complex AI agents.
Neo4j
What it is: Enterprise-grade graph database with Cypher query language
Best for:
- Production deployments
- Complex relationship queries
- Large-scale knowledge graphs
Local setup: Docker container on bolt://localhost:7687
Cloud option: Use Neo4j Aura or any Neo4j instance
In-memory option
What it is: In-memory graph database optimized for real-time analytics
Best for:
- High-throughput applications
- Real-time relationship queries
- Analytics workloads
Local setup: Docker container on bolt://localhost:7687
You can always add graph integration later. Cortex works great without it—the graph layer adds advanced relationship queries on top of the core memory features.
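To make "relationship queries" concrete, here is the kind of Cypher a graph-enabled agent might run. The node labels and relationship types below are hypothetical, not Cortex's actual schema; with a graph database enabled you would execute such queries through a driver like neo4j-driver:

```typescript
// Build a (hypothetical) Cypher query linking a user to extracted facts.
// Returned as text + params, the shape drivers such as neo4j-driver expect.
function factLinkQuery(userId: string): { text: string; params: Record<string, string> } {
  return {
    text: [
      "MATCH (u:User {id: $userId})-[:HAS_FACT]->(f:Fact)", // assumed labels/edges
      "RETURN f.statement AS statement",
    ].join("\n"),
    params: { userId }, // parameterized to avoid injection
  };
}
```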
Local graph databases require Docker Desktop. The wizard checks for Docker and shows installation instructions if missing.
Option 4: OpenAI API Key (Optional)
$ ? Configure OpenAI API key now? (y/N)
- Used for: AI-powered embeddings and fact extraction
- Required?: No; you can add it to .env.local later
- Get a key: platform.openai.com/api-keys
Cortex is embedding-agnostic. You can use OpenAI, or provide your own embeddings from any provider (Cohere, Voyage, local models, etc.).
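Whichever provider generates your embeddings, recall ultimately reduces to vector similarity. A minimal cosine-similarity sketch, for illustration only (not the SDK's internal implementation):

```typescript
// Cosine similarity between two embedding vectors: 1 = identical direction,
// 0 = orthogonal (unrelated), -1 = opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

This is why Cortex can stay provider-agnostic: as long as all stored vectors come from the same model, similarity search works the same way.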
Option 5: CLI Scripts
$ ? Add Cortex CLI scripts to package.json? (Y/n)
This adds convenient npm scripts:
{
"scripts": {
"cortex": "cortex",
"cortex:setup": "cortex setup",
"cortex:stats": "cortex db stats",
"cortex:spaces": "cortex spaces list",
"cortex:status": "cortex status"
}
}
Option 6: Vercel AI Quickstart (Optional)
$ ? Install Vercel AI quickstart demo? (y/N)
The basic template is always installed and includes a full CLI and HTTP API for testing Cortex. The Vercel AI Quickstart is an optional addition that gives you a web-based chat UI.
| Template | Installed | What You Get |
|---|---|---|
| Basic Template | Always | Interactive CLI + HTTP API server |
| Vercel AI Quickstart | Optional | Next.js ChatGPT-style web app |
Either template lets you immediately start chatting and see Cortex memory in action. Choose based on whether you prefer terminal or browser.
Configuration Summary
Before proceeding, the wizard shows a summary:
$ Configuration Summary
──────────────────────────────────────────────
Project: my-cortex-agent
Location: /Users/you/my-cortex-agent
Type: New project
Convex: Local development
Graph DB: neo4j
OpenAI: Configured
CLI Scripts: Yes
──────────────────────────────────────────────
? Proceed with setup? (Y/n)
What Gets Created
After confirmation, the wizard:
- Creates project files: package.json, TypeScript config, basic agent template
- Installs dependencies: @cortexmemory/sdk, convex, and related packages
- Sets up the Cortex backend: deploys Convex functions for memory operations
- Creates .env.local: environment variables for your configuration
- Configures the graph database: Docker Compose file and env vars (if enabled)
- Saves to ~/.cortexrc: persistent CLI configuration
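The generated .env.local might look roughly like this. CONVEX_URL appears elsewhere in this guide; the other variable names are assumptions and may differ in your generated file:

```shell
# .env.local (illustrative)
CONVEX_URL=http://127.0.0.1:3210
OPENAI_API_KEY=sk-...
```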
Step 3: Start Development
After setup, start your services:
$ cortex start
Or use interactive mode with a live dashboard:
$ cortex dev
cortex dev provides a live dashboard with keyboard shortcuts:
- s - Show status
- r - Restart services
- k - Kill stuck ports
- q - Quit
Step 4: Start Using Cortex
There are two ways in: the Basic Template (CLI + HTTP API, always installed) or the Vercel AI Quickstart (web chat UI, optional).
The basic template gives you an interactive CLI and HTTP API to explore Cortex immediately.
Chat via CLI
$ npm start
You'll see an interactive prompt where you can chat and watch memory orchestration in real time:
You: Hi, I'm Alex. I work as a software engineer.
┌────────────────────────────────────────────────────────────────────┐
│ MEMORY ORCHESTRATION │
├────────────────────────────────────────────────────────────────────┤
│ 📦 Memory Space ✓ complete (2ms) │
│ 👤 User ✓ complete (5ms) │
│ 🤖 Agent ✓ complete (3ms) │
│ 💬 Conversation ✓ complete (8ms) │
│ 🎯 Vector Store ✓ complete (45ms) │
│ 💡 Facts ✓ complete [NEW] (120ms) │
│ → "User's name is Alex" (identity, 95%) │
│ → "User works as software engineer" (employment, 90%) │
└────────────────────────────────────────────────────────────────────┘
CLI Commands:
| Command | Description |
|---|---|
| /recall <query> | Search memories without storing |
| /facts | List all extracted facts |
| /history | Show conversation history |
| /new | Start a new conversation |
| /config | Show current configuration |
| /exit | Exit the demo |
Or Use the HTTP API
$ npm run server
Then send requests from another terminal:
curl -X POST http://localhost:3001/chat \
-H "Content-Type: application/json" \
-d '{"message": "Hi, I am Alex. I work as a software engineer."}'
API Endpoints:
| Endpoint | Method | Description |
|---|---|---|
| /chat | POST | Chat and store memory |
| /recall?query=... | GET | Search memories |
| /facts | GET | List extracted facts |
| /history/:id | GET | Get conversation history |
| /health | GET | Health check |
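The endpoints above are easy to script against. A small client sketch (the request shapes follow the table and the curl example; the response shapes are assumptions):

```typescript
// Base URL matches the curl example's port.
const BASE = "http://localhost:3001";

// POST /chat expects a JSON body with a `message` field.
function chatBody(message: string): string {
  return JSON.stringify({ message });
}

// GET /recall?query=... -- URLSearchParams handles the encoding.
function recallUrl(query: string): string {
  return `${BASE}/recall?${new URLSearchParams({ query })}`;
}

// Send a chat turn and return the parsed JSON response.
async function chat(message: string): Promise<unknown> {
  const res = await fetch(`${BASE}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: chatBody(message),
  });
  return res.json();
}
```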
If you installed the Vercel AI Quickstart, open it in your browser.
You'll see a fully functional ChatGPT-style interface powered by Cortex + Vercel AI SDK v6.
Try it:
- Chat normally - "Hi, I'm Alex. I work as a software engineer."
- Ask about yourself - "What do I do for work?"
- Watch memory work - The agent remembers facts across conversations
Features:
- Real-time chat with streaming responses
- Memory visualization panel
- Fact extraction display
- Multi-conversation support
- User authentication
Open the Convex dashboard (cortex convex dashboard) alongside the chat to watch memories being stored in real-time.
CLI Flags for Non-Interactive Setup
Skip the wizard with flags:
$ cortex init my-agent --local --skip-graph -y
| Flag | Description |
|---|---|
| --local | Use local Convex (no prompts) |
| --cloud | Use cloud Convex (no prompts) |
| --skip-graph | Skip graph database setup |
| -t, --template <name> | Template to use (default: basic) |
| -y, --yes | Skip all confirmation prompts |
| --start | Start services after setup |
Explore Your Data
Convex Dashboard
$ cortex convex dashboard
This opens the correct dashboard for your deployment (local or cloud).
| Table | Description |
|---|---|
| conversations | All conversation threads |
| memories | Searchable memory index |
| immutable | Versioned message history |
| users | User profiles |
| memorySpaces | Memory space registry |
CLI Commands
$ cortex db stats
$ cortex spaces list
$ cortex memory list --space my-agent
Troubleshooting
Connection errors
$ cortex config test
Ensure Convex is running and CONVEX_URL is set in .env.local.
Stuck ports or services
Use cortex dev then press k to kill stuck processes, or:
$ cortex stop
Docker not found
Install Docker Desktop from docker.com, or choose "Cloud/Existing instance" for the graph database.
Convex authentication errors
The wizard handles authentication automatically. If you see auth errors:
$ npx convex logout
$ npx convex login
Then run cortex init again.