OpenClaw Memory
Agents that forget your name, lose project context mid-conversation, or blank on yesterday's decisions are usually misconfigured, not broken. OpenClaw treats memory as a suggestion—the model decides what to save and when to search. Without explicit setup, it forgets by default.
This guide covers why memory fails, the config changes that fix most issues, and the advanced tools that make it production-grade.
Why Memory Breaks
There are three common failure modes. Each needs a different fix.
1. Never Saved
The agent decides in real time whether something is worth storing. Important facts—preferences, decisions, project context—often never make it to disk because the model judged them unworthy.
2. Saved But Not Retrieved
Facts can be on disk while the agent answers from its current context instead of searching. It has a memory_search tool but must choose to use it. Often it doesn't.
3. Destroyed by Compaction
To stay under token limits, older messages get summarized or dropped. Information that only lived in the active conversation—or even in MEMORY.md—can be compacted away mid-session before it's persisted.
Essential Config Changes
Most forgetfulness comes from running defaults. These four changes improve recall significantly.
Memory Flush
Enable memory flush in compaction. It runs a silent turn before compaction that prompts the agent to write durable memories to disk. Customize the prompt to focus on decisions, state changes, lessons, and blockers. Raise softThresholdTokens (e.g. 40k) so flushes happen earlier, before useful context gets compacted away.
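A flush setup like the one described might look like this in your OpenClaw config. The key names below are illustrative assumptions based on this description, not a verified schema; check the docs for your OpenClaw version before copying:

```json5
// ~/.openclaw/openclaw.json — illustrative field names, verify against your version
{
  compaction: {
    memoryFlush: {
      enabled: true,
      // Flush earlier, before useful context is compacted away
      softThresholdTokens: 40000,
      // Focus the silent pre-compaction turn on durable facts
      prompt: "Before compaction, write to memory: decisions made, state changes, lessons learned, and open blockers."
    }
  }
}
```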
Context Pruning
Use TTL mode so old messages are pruned predictably. Keep messages from the last 6 hours and preserve the last few assistant replies. This avoids the jarring "repeat everything" experience after a flush and helps control token costs.
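A TTL pruning config matching the numbers above might look like this (again, key names are assumptions for illustration, not a confirmed OpenClaw schema):

```json5
// Illustrative pruning settings — verify field names in your version's docs
{
  contextPruning: {
    mode: "ttl",
    ttlHours: 6,           // keep messages from the last 6 hours
    keepLastAssistant: 3   // always preserve the last few assistant replies
  }
}
```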
Hybrid Search
Enable hybrid memory search: vector similarity plus BM25 keyword search. Vector search handles conceptual matches; BM25 catches exact tokens like error codes and project names. Without both, you leave accuracy on the table.
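One common way to merge the two ranked result lists is Reciprocal Rank Fusion. The sketch below is a generic illustration of why hybrid search helps, not OpenClaw's actual implementation; the document IDs are made up:

```python
from typing import Dict, List

def rrf_merge(vector_hits: List[str], bm25_hits: List[str], k: int = 60) -> List[str]:
    """Merge two ranked lists with Reciprocal Rank Fusion.

    Each document scores 1 / (k + rank) per list it appears in, so
    memories found by BOTH retrievers rise to the top of the merged list.
    """
    scores: Dict[str, float] = {}
    for hits in (vector_hits, bm25_hits):
        for rank, doc_id in enumerate(hits, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A memory mentioning "ERR_TIMEOUT in the billing service" can rank first
# in BM25 (exact token match) but only mid-list in vector search; because
# it appears in both lists, fusion surfaces it above single-list hits.
merged = rrf_merge(
    vector_hits=["deploy-notes", "billing-timeout", "user-prefs"],
    bm25_hits=["billing-timeout", "error-codes"],
)
print(merged[0])  # "billing-timeout"
```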
Session Indexing
Index past sessions so the agent can recall conversations from weeks ago. Chunk and index transcripts alongside memory files. Questions like "What did we decide about X last Tuesday?" become answerable.
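The chunking step can be as simple as greedily packing consecutive messages into size-bounded pieces before embedding them. This is a minimal sketch of the idea, assuming a character budget per chunk; OpenClaw's indexer may chunk differently:

```python
from typing import List

def chunk_transcript(messages: List[str], max_chars: int = 500) -> List[str]:
    """Greedily pack consecutive messages into chunks under max_chars,
    so each chunk is small enough to embed and index on its own."""
    chunks: List[str] = []
    current = ""
    for msg in messages:
        if current and len(current) + len(msg) + 1 > max_chars:
            chunks.append(current)   # budget exceeded: close current chunk
            current = msg
        else:
            current = f"{current}\n{msg}" if current else msg
    if current:
        chunks.append(current)
    return chunks

chunks = chunk_transcript(["a" * 300, "b" * 300, "c" * 100])
print(len(chunks))  # 2: the second and third messages fit together
```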
Advanced Tools
When config alone isn't enough—multi-day projects, multi-agent teams, or complex knowledge management—these options extend memory beyond the built-in system.
Jumbo
A CLI tool that gives your coding agent memory like an elephant. Tracks project details, architecture, and goals—then delivers optimized context when your agent needs it. Works with Claude Code, Copilot CLI, Gemini CLI, and more. All data stays local. Addresses the "saved but not retrieved" failure mode by proactively serving context before the agent even asks.
jumbocontext.com
QMD
An opt-in replacement for the built-in SQLite indexer. Runs as a local sidecar combining BM25, vectors, and reranking. Retrieval quality is noticeably better. You can index external collections: Obsidian vaults, project docs, Notion exports. Install via the QMD GitHub; have your agent review the docs before implementing.
QMD on GitHub

Mem0
Stores memories outside the context window so compaction cannot destroy them. Auto-capture detects and stores information without relying on the model's judgment. Auto-recall searches and injects relevant memories before each response. Installs as an OpenClaw plugin. Addresses the "never saved" and "destroyed by compaction" failure modes.
Mem0

Cognee
Builds a knowledge graph from your data. When you need to query relationships between people, places, and things, vector search falls short. Cognee ingests memory files to construct entities and relationships. Good for enterprise settings or multi-agent teams. Setup involves Docker and is more involved than basic plugins.
Cognee on GitHub

Obsidian
Popular as an external brain for agent memory. Symlink your memory folder into Obsidian so daily notes appear across devices for review and editing. For deeper integration, index your vault via QMD so everything you capture in Obsidian becomes searchable by agents. Obsidian 1.12 added a CLI for metadata queries, which can reduce token costs versus reading full files.
Obsidian

Multi-Agent Memory
For teams of specialized agents, structure memory like human team documentation:
- Private memory per agent — Each agent has its own workspace with MEMORY.md and daily notes.
- Shared reference files — Symlink a _shared/ directory with user profile, agent roster, and team conventions so every agent sees the same ground truth.
- QMD with shared paths — Point each agent's QMD config at the shared directory so they can search the same reference docs while keeping private memory separate.
- Coordination role — A "Chief of Staff" agent can read core files at session start and maintain consistency; specialists focus on their domains.
Some things are shared (handbook, org chart, project docs). Some are private (personal notes, work in progress). Build the same structure for your agents.
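One way this lands on disk (directory and file names are illustrative, not a required layout):

```
agents/
  _shared/                 # symlinked into every agent workspace
    USER_PROFILE.md
    AGENT_ROSTER.md
    CONVENTIONS.md
  researcher/
    MEMORY.md              # private memory
    notes/                 # private daily notes
    _shared -> ../_shared  # shared ground truth
  coder/
    MEMORY.md
    _shared -> ../_shared
```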
The Takeaway
Memory is not automatic. You have to configure it. Start with the four essential config changes, then layer in QMD, Mem0, Cognee, or Obsidian as your workload demands. Once you understand that OpenClaw treats memory as suggestions, the fixes become straightforward.