# Mem0 Node SDK (v3.0.0)

A ground-up redesign of how memories are extracted, stored, and retrieved — plus a long-overdue cleanup of the SDK surface.

## Highlights
- New extraction algorithm — single-pass, ADD-only, roughly half the latency
- Multi-signal hybrid retrieval — semantic + BM25 keyword + entity matching fused into one score
- Built-in entity linking — replaces graph memory with no external store to manage
- Fully camelCase SDK — both ways — every parameter and every field on every response is camelCase, end to end
## What's new

### Single-pass ADD-only extraction
One LLM call per add(). No separate UPDATE/DELETE pass. The model now spends its capacity on understanding the input instead of diffing against existing memories, and agent-generated facts ("I've booked your flight for March 3rd") are captured as first-class memories for the first time. Hash-based deduplication prevents exact duplicates; ranking at retrieval time handles the rest.
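The deduplication step can be pictured as hashing a normalized form of each extracted memory and skipping exact repeats. A minimal sketch — the SDK's actual normalization and hash choice are assumptions here, not documented behavior:

```typescript
import { createHash } from "node:crypto";

// Hash a normalized form of the memory text (lowercased, whitespace
// collapsed). Normalization details are illustrative assumptions.
function memoryHash(text: string): string {
  const normalized = text.trim().toLowerCase().replace(/\s+/g, " ");
  return createHash("sha256").update(normalized).digest("hex");
}

const seen = new Set<string>();

// Returns true if an exact (normalized) duplicate was already added.
function isDuplicate(text: string): boolean {
  const h = memoryHash(text);
  if (seen.has(h)) return true;
  seen.add(h);
  return false;
}
```

Anything that survives this exact-match filter is kept; near-duplicates are left for retrieval-time ranking to sort out, as described above.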
### Hybrid retrieval
Semantic vector similarity, BM25 keyword matching, and entity-graph boosting are normalized and fused into a single score on every result. Entities extracted from both new memories and queries boost ranking on matching results.
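Score fusion of this kind typically normalizes each signal to a common range and takes a weighted sum. A sketch under assumed weights — the SDK's exact normalization and weighting are not specified here:

```typescript
// Min-max normalize a list of raw scores into [0, 1].
function minMaxNormalize(scores: number[]): number[] {
  const min = Math.min(...scores);
  const max = Math.max(...scores);
  if (max === min) return scores.map(() => 1);
  return scores.map((s) => (s - min) / (max - min));
}

// Fuse per-result semantic, BM25, and entity-boost signals into one
// score. Weights are illustrative assumptions, not the SDK's values.
function fuseScores(
  semantic: number[],
  bm25: number[],
  entityBoost: number[], // assumed already in [0, 1]
  weights = { semantic: 0.6, bm25: 0.3, entity: 0.1 }
): number[] {
  const s = minMaxNormalize(semantic);
  const k = minMaxNormalize(bm25);
  return semantic.map(
    (_, i) =>
      weights.semantic * s[i] + weights.bm25 * k[i] + weights.entity * entityBoost[i]
  );
}
```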
### Entity linking (replaces graph memory)
Entities (proper nouns, quoted text, compound noun phrases) are extracted automatically during add() and stored in a parallel {collection}_entities collection inside your existing vector store. At query time, entities from the query boost ranking on matching memories. No Neo4j / Memgraph / Kuzu / Apache AGE deployment needed — graph driver support has been removed from the SDK.
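As a toy approximation of the extraction step, quoted spans and capitalized noun phrases can be pulled out with regular expressions. This is only a surface-level sketch; the SDK's real extractor is not shown here:

```typescript
// Extract candidate entities: quoted text plus runs of capitalized
// words (a rough stand-in for proper nouns and compound phrases).
function extractEntities(text: string): string[] {
  const entities = new Set<string>();
  for (const m of text.matchAll(/"([^"]+)"/g)) entities.add(m[1]);
  for (const m of text.matchAll(/\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b/g)) {
    entities.add(m[0]);
  }
  return [...entities];
}
```

Entities extracted this way from a memory would be written to the parallel `{collection}_entities` collection; the same extraction on a query produces the terms used for ranking boosts.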
### Vector store enhancements
All supported vector stores now implement keyword search and batch search primitives, used transparently by the new retrieval pipeline.
### Stricter, clearer validation

- Empty or whitespace-only entity IDs throw with a specific message
- `threshold` must be in `[0, 1]` — out-of-range values throw
- `messages` must be a string or array; `null`/`undefined` throws
- `customPrompt` → renamed to `customInstructions`
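The rules above behave roughly like the following checks; the error messages here are paraphrases for illustration, not the SDK's exact strings:

```typescript
// Reject thresholds outside [0, 1] (and NaN).
function validateThreshold(threshold: number): void {
  if (Number.isNaN(threshold) || threshold < 0 || threshold > 1) {
    throw new Error("threshold must be in [0, 1]");
  }
}

// Reject anything that is not a string or an array, including
// null and undefined.
function validateMessages(messages: unknown): void {
  if (typeof messages !== "string" && !Array.isArray(messages)) {
    throw new Error("messages must be a string or an array");
  }
}
```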
### Platform client cleanup

- `new MemoryClient({ apiKey })` — `organizationId`, `projectId`, `organizationName`, `projectName` removed
- All request parameters are now camelCase — `userId`, `agentId`, `topK`, `filters` — the SDK handles snake_case conversion at the wire boundary
- All response fields are now camelCase too — `createdAt`, `updatedAt`, `userId`, `agentId`, `runId`, `memoryId`, `eventId`, `scoreBreakdown`, and every other field come back in camelCase. No more mixing `user_id` in the response with `userId` in the request — one casing, everywhere.
- `getAll` returns the paginated envelope `{count, next, previous, results}` when the filter-based API is used
- `add` is async by default and returns `{status: "PENDING", eventId}` for polling
- Removed: `OutputFormat` enum, `API_VERSION` enum, `enableGraph`, `asyncMode`, `outputFormat`, `filterMemories`, `batchSize`, `forceAddOnly`, `immutable`, `expirationDate`, `includes`, `excludes`, `keywordSearch`
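Since `add` now resolves to `{status: "PENDING", eventId}`, callers that need to know when processing finishes can poll a status endpoint. A generic polling helper, sketched here with a caller-supplied `fetchStatus` function (the status endpoint itself is not part of this changelog and is assumed):

```typescript
// Poll until the status leaves "PENDING" or attempts run out.
// `fetchStatus` is a stand-in for however your deployment checks an
// eventId; it is a hypothetical hook, not an SDK method.
async function pollUntilDone(
  fetchStatus: () => Promise<{ status: string }>,
  intervalMs = 500,
  maxAttempts = 20
): Promise<{ status: string }> {
  for (let i = 0; i < maxAttempts; i++) {
    const res = await fetchStatus();
    if (res.status !== "PENDING") return res;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error("timed out waiting for memory event");
}
```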
## Breaking changes at a glance

| Change | Migration |
|---|---|
| `search(query, { limit: 10 })` | Rename to `topK: 10` |
| `topK` default 100 → 20 | Pass `topK: 100` explicitly to restore |
| Entity IDs on `search()` / `getAll()` | Must be inside `filters: { userId: "..." }`; top-level throws |
| Client SDK param casing | All params now camelCase — rename `user_id` → `userId`, `top_k` → `topK`, etc. |
| `customPrompt` | Renamed to `customInstructions` |
| `enableGraph` / `graphStore` | Removed — graph memory is no longer supported |
| Internal payload key | `text_lemmatized` → `textLemmatized` (keep collections language-scoped if you share Python/TS) |
`add()` and `deleteAll()` continue to accept entity IDs at the top level.
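The casing changes amount to a mechanical key rewrite. A sketch of the kind of snake_case → camelCase conversion the SDK now performs at the wire boundary (illustrative, not the SDK's internal code):

```typescript
// Convert a single snake_case key to camelCase.
function toCamelCase(key: string): string {
  return key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
}

// Shallow-convert all keys of an object, e.g. for a response payload.
function camelizeKeys(obj: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [toCamelCase(k), v])
  );
}
```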
## Install

```bash
npm install mem0ai@latest
```

## Quick example

```typescript
import MemoryClient from "mem0ai";

const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });

const messages = [{ role: "user", content: "I love tea" }];
await client.add(messages, { userId: "alice" });

const results = await client.search("what does alice like?", {
  filters: { AND: [{ userId: "alice" }] },
  topK: 10,
});
```

Full migration guide: docs.mem0.ai/migration/oss-v2-to-v3