Comprehensive tutorial series for OpenClaw AI agent gateway
Understanding how OpenClaw’s memory layers work together.
OpenClaw’s memory is plain Markdown at its core. The files are the source of truth — vector indexes are just acceleration layers on top.
This means the agent's memory can be read, edited, and versioned like any ordinary Markdown file. The layout:

```
~/.openclaw/workspace/
├── MEMORY.md            # Long-term curated memory
└── memory/
    ├── 2026-03-18.md    # Daily log (today)
    ├── 2026-03-17.md    # Daily log (yesterday)
    └── ...              # Older logs
```
Key insight: The agent “remembers” only what gets written to disk.
At session start, OpenClaw automatically loads:
| File | When Loaded |
|---|---|
| `memory/YYYY-MM-DD.md` (today) | Always |
| `memory/YYYY-MM-DD.md` (yesterday) | Always |
| `MEMORY.md` | Main/private sessions only |
This gives the agent immediate context without searching.
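As a sketch, the two auto-loaded daily-log paths could be computed like this (a hypothetical helper, not OpenClaw's actual code; it stamps dates in UTC, while the real implementation may use local time):

```typescript
// Hypothetical helper: compute the two daily-log paths auto-loaded at
// session start. Uses UTC for the YYYY-MM-DD stamp; the real implementation
// may use local time.
function dailyLogPaths(now: Date): string[] {
  const stamp = (d: Date) => d.toISOString().slice(0, 10); // "YYYY-MM-DD"
  const yesterday = new Date(now.getTime() - 24 * 60 * 60 * 1000);
  return [`memory/${stamp(now)}.md`, `memory/${stamp(yesterday)}.md`];
}
```

For 18 March 2026 this yields `memory/2026-03-18.md` and `memory/2026-03-17.md`, the two daily logs shown in the tree above.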
The search index lives at `~/.openclaw/memory/main.sqlite`.

Contains: chunked text from the memory files, their embedding vectors, and index metadata.

Auto-updates: watches memory files, reindexes on change (debounced).

Combines: vector similarity and BM25 keyword relevance at query time.

The indexing flow:
```
Agent writes to memory/2026-03-18.md
        │
        ▼
File watcher detects change (1.5s debounce)
        │
        ▼
Chunker splits into ~400 token chunks
        │
        ▼
Embedding provider generates vectors
        │
        ▼
SQLite stores chunks + vectors
```
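The debounce and chunking steps might look like this sketch (the 4-characters-per-token estimate and the paragraph-boundary splitting are illustrative assumptions, not OpenClaw's exact algorithm):

```typescript
// Sketch of the debounce + chunking steps. The token estimate and the
// paragraph-boundary splitting are illustrative assumptions.
const DEBOUNCE_MS = 1500;
const CHUNK_TOKENS = 400;

// Rough token count: ~4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Greedily pack paragraphs into chunks of at most ~CHUNK_TOKENS tokens
// (a single oversized paragraph still becomes its own chunk).
function chunk(text: string): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const para of text.split(/\n\n+/)) {
    if (current && estimateTokens(current + para) > CHUNK_TOKENS) {
      chunks.push(current.trim());
      current = "";
    }
    current += para + "\n\n";
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

// Debounce wrapper: reindex only after writes have settled for 1.5 s.
function makeDebouncedReindex(reindex: () => void): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(reindex, DEBOUNCE_MS);
  };
}
```

The debounce matters because an agent often writes several lines in quick succession; without it, every save would trigger a full embedding pass.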
```
Agent calls memory_search("project deadline")
        │
        ▼
Query embedded by same provider
        │
        ▼
Parallel retrieval:
  ├── Vector: top K by cosine similarity
  └── BM25: top K by keyword relevance
        │
        ▼
Merge with configurable weights
        │
        ▼
Optional: MMR re-ranking (diversity)
        │
        ▼
Optional: Temporal decay (recency)
        │
        ▼
Return top results with snippets
```
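The merge and temporal-decay steps can be sketched as follows (the 0.7/0.3 weights and the half-life decay form are illustrative defaults, not OpenClaw's actual values; per-list scores are assumed normalized to [0, 1]):

```typescript
// Illustrative hybrid merge: weighted sum of vector and BM25 scores,
// followed by an optional exponential recency decay.
interface Hit {
  id: string;
  score: number;
  ageDays: number;
}

function mergeHybrid(
  vectorHits: Hit[],
  bm25Hits: Hit[],
  vectorWeight = 0.7,   // assumed default, configurable in principle
  bm25Weight = 0.3,
  halfLifeDays?: number,
): Hit[] {
  const merged = new Map<string, Hit>();
  const add = (hits: Hit[], weight: number) => {
    for (const h of hits) {
      const prev = merged.get(h.id);
      merged.set(h.id, { ...h, score: (prev?.score ?? 0) + weight * h.score });
    }
  };
  add(vectorHits, vectorWeight);
  add(bm25Hits, bm25Weight);

  let results = [...merged.values()];
  if (halfLifeDays !== undefined) {
    // Temporal decay: halve a hit's score every halfLifeDays of age.
    results = results.map((h) => ({
      ...h,
      score: h.score * Math.pow(0.5, h.ageDays / halfLifeDays),
    }));
  }
  return results.sort((a, b) => b.score - a.score);
}
```

A chunk that appears in both lists gets credit from both, which is why hybrid retrieval tends to beat either method alone on mixed keyword/semantic queries.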
`MEMORY.md` holds durable facts, preferences, decisions, and lessons learned. Format:
```markdown
# Long-Term Memory

## Preferences
- User prefers concise answers
- Timezone: America/New_York

## Projects
- OpenClaw tutorials (active)
- Cryptoart curation

## People
- Alice: collaborator on Project X
- Bob: technical reviewer

## Decisions
- 2026-03-15: Chose Gemini for embeddings (cost/quality balance)

## Lessons Learned
- Always verify dmScope before opening to users
```
When to write here: whenever a fact should outlive the current session, such as stable preferences, active projects, key people, decisions, and lessons learned.
The daily log (`memory/YYYY-MM-DD.md`) holds running notes, session context, and ephemeral observations. Format:
```markdown
# 2026-03-18

## Morning
- Discussed tutorial structure
- Decided on 5-module approach

## Afternoon
- Implemented memory configuration
- User requested more examples

## Tasks
- [ ] Add hybrid search examples
- [x] Document dmScope settings

## Notes
- User seems interested in Coach integration
```
When to write here: throughout the day, for observations, progress notes, open tasks, and anything worth recalling later.
```
Daily Logs (raw input)
        │
        │ Periodic review / agent suggestion
        ▼
MEMORY.md (curated output)
```
The pattern: daily logs capture everything in raw form; on periodic review (or at the agent's suggestion), durable items are distilled into MEMORY.md while the rest ages out with the logs.
The index database is `~/.openclaw/memory/main.sqlite`.

Tables:

- `chunks` — text content, file path, line numbers
- `embeddings` — vector data (or in a `vec0` virtual table)
- `metadata` — provider, model, chunk params
Reindexing is automatic (the watcher reindexes whenever a memory file changes) or manual:

```
openclaw memory reindex
```
Vector storage:

- With sqlite-vec (default when available): a `vec0` virtual table
- Fallback (no sqlite-vec): vectors kept in a regular table, with similarity computed in process
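The fallback path amounts to a brute-force cosine scan over the stored vectors. A minimal sketch (function names are illustrative, not OpenClaw's API):

```typescript
// Illustrative no-sqlite-vec fallback: brute-force cosine similarity over
// vectors loaded from the database.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topK(
  query: number[],
  rows: { id: string; vec: number[] }[],
  k: number,
): { id: string; score: number }[] {
  return rows
    .map((r) => ({ id: r.id, score: cosine(query, r.vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

A linear scan is fine at personal-memory scale (thousands of chunks); the `vec0` virtual table mainly buys speed as the corpus grows.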
```
┌─────────────────────────────────────────────────────────┐
│                     Memory Manager                      │
└─────────────────────────────────────────────────────────┘
                          │
         ┌────────────────┼────────────────┐
         ▼                ▼                ▼
   ┌──────────┐     ┌──────────┐     ┌──────────┐
   │  Gemini  │     │  OpenAI  │     │  Local   │
   │ Provider │     │ Provider │     │ Provider │
   └──────────┘     └──────────┘     └──────────┘
         │                │                │
         ▼                ▼                ▼
   ┌──────────┐     ┌──────────┐     ┌──────────┐
   │  Gemini  │     │  OpenAI  │     │node-llama│
   │   API    │     │   API    │     │   -cpp   │
   └──────────┘     └──────────┘     └──────────┘
```
Provider selection priority: an explicit `memorySearch.provider` in config comes first.

By default, the index covers:

- `MEMORY.md` (workspace root)
- `memory.md` (fallback, lowercase)
- `memory/**/*.md` (all daily logs)

Additional locations can be indexed with `extraPaths`:
```json5
{
  memorySearch: {
    extraPaths: ["../team-docs", "/srv/notes"],
  },
}
```
```json5
{
  memorySearch: {
    scope: {
      default: "deny",
      rules: [
        { action: "allow", match: { chatType: "direct" } },
        { action: "deny", match: { keyPrefix: "discord:channel:" } },
      ],
    },
  },
}
```
This prevents memory search in group chats while allowing in DMs.
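One plausible reading of these rules is first-match-wins with `default` as the fallback. A sketch under that assumption (types and the `evaluateScope` helper are hypothetical; field names mirror the config):

```typescript
// Hypothetical scope evaluation: the first matching rule wins; if no rule
// matches, `default` applies. All names here mirror the config sketch.
type Action = "allow" | "deny";

interface ScopeRule {
  action: Action;
  match: { chatType?: string; keyPrefix?: string };
}

interface Scope {
  default: Action;
  rules: ScopeRule[];
}

function evaluateScope(
  scope: Scope,
  ctx: { chatType: string; key: string },
): Action {
  for (const rule of scope.rules) {
    const { chatType, keyPrefix } = rule.match;
    // A rule matches only if every condition it specifies holds.
    if (chatType !== undefined && ctx.chatType !== chatType) continue;
    if (keyPrefix !== undefined && !ctx.key.startsWith(keyPrefix)) continue;
    return rule.action;
  }
  return scope.default;
}
```

Under this reading, a direct chat hits the `allow` rule, a Discord channel hits the `deny` rule, and anything else falls through to the `deny` default, which matches the behavior described above.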
```
Agent: I need to find information about project deadlines

Tool call: memory_search
  query: "project deadline"
  limit: 5

Result:
- memory/2026-03-15.md:12-18 (score: 0.89)
  "Project X deadline is March 30th..."
- MEMORY.md:45-50 (score: 0.72)
  "Key deadlines tracked in Notion..."

Agent: Let me read the full context from that file

Tool call: memory_get
  path: "memory/2026-03-15.md"
  startLine: 10
  lines: 20

Result:
[Full content of lines 10-30]
```
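The `memory_get` slice semantics implied by that call can be sketched in a few lines (`memoryGet` is a hypothetical helper, assuming a 1-based `startLine`):

```typescript
// Illustrative memory_get semantics: return `lines` lines of a file's
// content starting at 1-based `startLine`, mirroring the tool call above.
function memoryGet(content: string, startLine: number, lines: number): string {
  return content
    .split("\n")
    .slice(startLine - 1, startLine - 1 + lines)
    .join("\n");
}
```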
| Layer | Purpose | Storage |
|---|---|---|
| Markdown files | Source of truth, human-readable | Workspace filesystem |
| Auto-load | Immediate context | Session memory |
| Vector index | Fast semantic search | SQLite database |
| Hybrid search | Best of both worlds | Runtime merge |
| Post-processing | Diversity + recency | Runtime filters |
Learn about the foundation: Markdown Memory →