mem0
mem0ai/mem0 · updated Apr 8, 2026
Mem0 Platform Integration
Skill Graph: This skill is part of the Mem0 skill graph:
- mem0 (this skill) -- Platform Client SDK + OSS (Python + TypeScript)
- mem0-cli (GitHub) -- Command-line interface
- mem0-vercel-ai-sdk (GitHub) -- Vercel AI SDK provider
Mem0 is a managed memory layer for AI applications. It stores, retrieves, and manages user memories via API — no infrastructure to deploy. For self-hosted usage, see the OSS section in the client references below.
Step 1: Install and authenticate
Python:

```shell
pip install mem0ai
export MEM0_API_KEY="m0-your-api-key"
```
TypeScript/JavaScript:

```shell
npm install mem0ai
export MEM0_API_KEY="m0-your-api-key"
```
Get an API key at: https://app.mem0.ai/dashboard/api-keys
Step 2: Initialize the client
Python:

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="m0-xxx")
```
TypeScript:

```typescript
import MemoryClient from 'mem0ai';

const client = new MemoryClient({ apiKey: 'm0-xxx' });
```
For async Python, use AsyncMemoryClient.
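A minimal sketch of the async flow, assuming `AsyncMemoryClient` mirrors the sync client's methods and response shape as awaitables. The `recall` helper below is hypothetical, not part of the SDK:

```python
import asyncio

# Hypothetical helper: works with any client whose search() is a coroutine
# returning the same {"results": [...]} shape as the sync client.
async def recall(client, query: str, user_id: str) -> list[str]:
    results = await client.search(query, user_id=user_id)
    return [m["memory"] for m in results.get("results", [])]
```

With the real client this would be used as `client = AsyncMemoryClient()` followed by `await recall(client, "dietary preferences", user_id="alice")` inside an event loop.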
Step 3: Core operations
Every Mem0 integration follows the same pattern: retrieve → generate → store.
Add memories
```python
messages = [
    {"role": "user", "content": "I'm a vegetarian and allergic to nuts."},
    {"role": "assistant", "content": "Got it! I'll remember that."},
]
client.add(messages, user_id="alice")
```
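The Platform `add` call also accepts an `infer` flag (`infer=True`, the default, extracts facts; `infer=False` stores raw messages). Since mixing modes for the same data produces duplicates, one option is a thin wrapper that pins the mode per call site. `store_turn` below is a hypothetical sketch, not an SDK method:

```python
def store_turn(client, user_msg: str, assistant_msg: str, user_id: str, infer: bool = True):
    """Store one conversation turn, pinning the infer mode so the same
    data is never written with both infer=True and infer=False."""
    messages = [
        {"role": "user", "content": user_msg},
        {"role": "assistant", "content": assistant_msg},
    ]
    return client.add(messages, user_id=user_id, infer=infer)
```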
Search memories
```python
results = client.search("dietary preferences", user_id="alice")
for mem in results.get("results", []):
    print(mem["memory"])
```
Get all memories
```python
all_memories = client.get_all(user_id="alice")
```
Update a memory
```python
client.update("memory-uuid", text="Updated: vegetarian, nut allergy, prefers organic")
```
Delete a memory
```python
client.delete("memory-uuid")
client.delete_all(user_id="alice")  # delete all memories for a user
```
Common integration pattern
```python
from mem0 import MemoryClient
from openai import OpenAI

mem0 = MemoryClient()
openai = OpenAI()

def chat(user_input: str, user_id: str) -> str:
    # 1. Retrieve relevant memories
    memories = mem0.search(user_input, user_id=user_id)
    context = "\n".join([m["memory"] for m in memories.get("results", [])])

    # 2. Generate response with memory context
    response = openai.chat.completions.create(
        model="gpt-4.1-nano-2025-04-14",
        messages=[
            {"role": "system", "content": f"User context:\n{context}"},
            {"role": "user", "content": user_input},
        ],
    )
    reply = response.choices[0].message.content

    # 3. Store the interaction for future context
    mem0.add(
        [{"role": "user", "content": user_input}, {"role": "assistant", "content": reply}],
        user_id=user_id,
    )
    return reply
```
Common edge cases
- Search returns empty: Memories process asynchronously. Wait 2-3 seconds after `add()` before searching. Also verify `user_id` matches exactly (case-sensitive).
- AND filter with `user_id` + `agent_id` returns empty: Entities are stored separately. Use `OR` instead, or query each separately.
- Duplicate memories: Don't mix `infer=True` (the default) and `infer=False` for the same data. Stick to one mode.
- Wrong import: Always use `from mem0 import MemoryClient` (or `AsyncMemoryClient` for async). Do not use `from mem0 import Memory`.
- Immutable memories: Cannot be updated or deleted once created. Use `client.history(memory_id)` to track changes over time.
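Because memories become searchable only after asynchronous processing, a small polling helper is more robust than a hard-coded sleep after `add()`. `search_with_retry` below is a hypothetical sketch, not an SDK method:

```python
import time

def search_with_retry(search_fn, query: str, attempts: int = 5, delay: float = 1.0):
    """Poll a search callable until it returns results, since memories
    added via add() can take a few seconds to become searchable."""
    for _ in range(attempts):
        results = search_fn(query)
        if results.get("results"):
            return results
        time.sleep(delay)
    return {"results": []}
```

With the real client this would be called as `search_with_retry(lambda q: client.search(q, user_id="alice"), "dietary preferences")`.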
Live documentation search
For the latest docs beyond what's in the references, use the doc search tool:
```shell
python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --query "topic"
python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --page "/platform/features/graph-memory"
python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --index
```
No API key needed — searches docs.mem0.ai directly.
Client SDK References
Language-specific deep references (Platform + OSS):
| Language | File |
|---|---|
| Python (MemoryClient + AsyncMemoryClient + Memory OSS) | client/python.md |
| TypeScript/Node.js (MemoryClient + Memory OSS) | client/node.md |
| Python vs TypeScript differences | client/differences.md |
Platform References
Load these on demand for deeper detail:
| Topic | File |
|---|---|
| Quickstart (Python, TS, cURL) | references/quickstart.md |
| SDK guide (all methods, both languages) | references/sdk-guide.md |
| API reference (endpoints, filters, object schema) | references/api-reference.md |
| Architecture (pipeline, lifecycle, scoping, performance) | references/architecture.md |
| Platform features (retrieval, graph, categories, MCP, etc.) | references/features.md |
| Framework integrations (LangChain, CrewAI, OpenAI Agents, etc.) | references/integration-patterns.md |
| Use cases & examples (real-world patterns with code) | references/use-cases.md |
Related Mem0 Skills
| Skill | When to use | Link |
|---|---|---|
| mem0-cli | Terminal commands, scripting, CI/CD, agent tool loops | local / GitHub |
| mem0-vercel-ai-sdk | Vercel AI SDK provider with automatic memory | local / GitHub |