Mnemex

by mnemexai
Human-like temporal memory for AI assistants that naturally fades over time unless reinforced through use, mimicking the Ebbinghaus forgetting curve.
best for
- AI assistants needing realistic memory behavior
- Chatbots that should remember recent conversations better
- Research on human-like AI cognition
- Applications requiring natural forgetting patterns
capabilities
- Store memories with automatic decay over time
- Reinforce memories through repeated access
- Retrieve memories based on strength and recency
- Search stored memories by content
- Track memory usage patterns
- Manage temporal memory dynamics
what it does
Gives AI assistants human-like memory that naturally fades over time unless reinforced through use, following the Ebbinghaus forgetting curve. Store and retrieve memories that become weaker or stronger based on usage patterns.
about
Mnemex is a community-built MCP server published by mnemexai that provides AI assistants with tools and capabilities via the Model Context Protocol. Mnemex gives AI assistants human-like memory that fades over time, mirroring the Ebbinghaus forgetting curve for natural memory dynamics. It is categorized under AI/ML.
how to install
You can install Mnemex in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
license
AGPL-3.0
Mnemex is released under the AGPL-3.0 license.
readme
CortexGraph: Temporal Memory for AI
<!-- mcp-name: io.github.prefrontal-systems/cortexgraph -->
A Model Context Protocol (MCP) server providing human-like memory dynamics for AI assistants. Memories naturally fade over time unless reinforced through use, mimicking the Ebbinghaus forgetting curve.
[!NOTE] About the Name & Version
This project was originally developed as mnemex (published to PyPI up to v0.6.0). In November 2025, it was transferred to Prefrontal Systems and renamed to CortexGraph to better reflect its role within a broader cognitive architecture for AI systems.
Version numbering starts at 0.1.0 for the cortexgraph package to signal a fresh start under the new name, while acknowledging the mature, well-tested codebase (791 tests, 98%+ coverage) inherited from mnemex. The mnemex package remains frozen at v0.6.0 on PyPI.
This versioning approach:
- Signals "new package" to PyPI users discovering cortexgraph
- Gives room to evolve the brand, API, and organizational integration before 1.0
- Maintains continuity: users can migrate from `pip install mnemex` to `pip install cortexgraph`
- Reflects that while the code is mature, the cortexgraph identity is just beginning
[!IMPORTANT] 🔬 RESEARCH ARTIFACT - NOT FOR PRODUCTION
This software is a Proof of Concept (PoC) and reference implementation for research purposes. It exists to validate theoretical frameworks in cognitive architecture and AI safety (specifically the STOPPER Protocol and CortexGraph).
It is NOT a commercial product. It is not maintained for general production use, may contain breaking changes, and offers no guarantees of stability or support. Use it to study the concepts, but build your own production implementations.
📖 New to this project? Start with the ELI5 Guide for a simple explanation of what this does and how to use it.
What is CortexGraph?
CortexGraph gives AI assistants like Claude a human-like memory system.
The Problem
When you chat with Claude, it forgets everything between conversations. You tell it "I prefer TypeScript" or "I'm allergic to peanuts," and three days later, you have to repeat yourself. This is frustrating and wastes time.
What CortexGraph Does
CortexGraph makes AI assistants remember things naturally, just like human memory:
- 🧠 Remembers what matters - Your preferences, decisions, and important facts
- ⏰ Forgets naturally - Old, unused information fades away over time (like the Ebbinghaus forgetting curve)
- 💪 Gets stronger with use - The more you reference something, the longer it's remembered
- 📦 Saves important things permanently - Frequently used memories get promoted to long-term storage
How It Works (Simple Version)
- You talk naturally - "I prefer dark mode in all my apps"
- Memory is saved automatically - No special commands needed
- Time passes - Memory gradually fades if not used
- You reference it again - "Make this app dark mode"
- Memory gets stronger - Now it lasts even longer
- Important memories promoted - Used 5+ times? Saved permanently to your Obsidian vault
No flashcards. No explicit review. Just natural conversation.
Why It's Different
Most memory systems are dumb:
- ❌ "Delete after 7 days" (doesn't care if you used it 100 times)
- ❌ "Keep last 100 items" (throws away important stuff just because it's old)
CortexGraph is smart:
- ✅ Combines recency (when?), frequency (how often?), and importance (how critical?)
- ✅ Memories fade naturally like human memory
- ✅ Frequently used memories stick around longer
- ✅ You can mark critical things to "never forget"
Technical Overview
This repository contains research, design, and a complete implementation of a short-term memory system that combines:
- Novel temporal decay algorithm based on cognitive science
- Reinforcement learning through usage patterns
- Two-layer architecture (STM + LTM) for working and permanent memory
- Smart prompting patterns for natural LLM integration
- Git-friendly storage with human-readable JSONL
- Knowledge graph with entities and relations
Module Organization
CortexGraph follows a modular architecture:
- cortexgraph.core: Foundational algorithms (decay, similarity, clustering, consolidation, search validation)
- cortexgraph.agents: Multi-agent consolidation pipeline and storage utilities
- cortexgraph.storage: JSONL and SQLite storage backends with batch operations
- cortexgraph.tools: MCP tool implementations
Why CortexGraph?
🔒 Privacy & Transparency
All data stored locally on your machine - no cloud services, no tracking, no data sharing.
- Short-term memory:
  - JSONL (default): Human-readable, git-friendly files (~/.config/cortexgraph/jsonl/)
  - SQLite: Robust database storage for larger datasets (~/.config/cortexgraph/cortexgraph.db)
- Long-term memory: Markdown files optimized for Obsidian
  - YAML frontmatter with metadata
  - Wikilinks for connections
  - Permanent storage you control
- Export: Built-in utility to export memories to Markdown for portability.

You own your data. You can read it, edit it, delete it, or version control it - all without any special tools.
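Because the short-term store is plain JSONL (one JSON object per line), it is easy to inspect or script against with no special tooling. A minimal sketch of reading and writing such a file; the field names here are hypothetical illustrations, not the actual on-disk schema:

```python
import json
from pathlib import Path

# Hypothetical record shape -- the real schema may differ.
memory = {
    "id": "mem-001",
    "content": "User prefers dark mode in all apps",
    "use_count": 3,
    "strength": 1.0,
    "last_used": 1730000000,
}

path = Path("memories.jsonl")
with path.open("a", encoding="utf-8") as f:
    f.write(json.dumps(memory) + "\n")   # one JSON object per line

# Reading back is just as simple, and the file stays git-diffable.
with path.open(encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
```

Each append produces a one-line diff, which is what makes the format git-friendly.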
Core Algorithm
The temporal decay scoring function:
$$ \Large \text{score}(t) = (n_{\text{use}})^\beta \cdot e^{-\lambda \cdot \Delta t} \cdot s $$
Where:
- $\large n_{\text{use}}$ - Use count (number of accesses)
- $\large \beta$ (beta) - Sub-linear use count weighting (default: 0.6)
- $\large \lambda = \frac{\ln(2)}{t_{1/2}}$ (lambda) - Decay constant; set via half-life (default: 3 days)
- $\large \Delta t$ - Time since last access (seconds)
- $\large s$ - Strength parameter $\in [0, 2]$ (importance multiplier)
Thresholds:
- $\large \tau_{\text{forget}}$ (default 0.05) — if score < this, forget
- $\large \tau_{\text{promote}}$ (default 0.65) — if score ≥ this, promote (or if $\large n_{\text{use}} \ge 5$ in 14 days)
Decay Models:
- Power‑Law (default): heavier tail; most human‑like retention
- Exponential: lighter tail; forgets sooner
- Two‑Component: fast early forgetting + heavier tail
See detailed parameter reference, model selection, and worked examples in docs/scoring_algorithm.md.
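The exponential-decay variant of the scoring function above can be sketched in a few lines of Python. This is a minimal illustration of the formula, not the package's actual API; the function and parameter names are assumptions:

```python
import math

def score(n_use: int, dt_seconds: float, strength: float = 1.0,
          beta: float = 0.6, half_life_days: float = 3.0) -> float:
    """Temporal decay score: (n_use ** beta) * exp(-lambda * dt) * strength."""
    lam = math.log(2) / (half_life_days * 86400)  # decay constant from half-life
    return (n_use ** beta) * math.exp(-lam * dt_seconds) * strength

# A memory accessed 4 times, last touched 2 days ago, default strength:
s = score(n_use=4, dt_seconds=2 * 86400)
```

Note how reinforcement works: raising `n_use` multiplies the score sub-linearly (via `beta`), so a frequently used memory survives the same elapsed time at a higher score than a rarely used one.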
Tuning Cheat Sheet
- Balanced (default)
- Half-life: 3 days (λ ≈ 2.67e-6)
- β = 0.6, τ_forget = 0.05, τ_promote = 0.65, use_count≥5 in 14d
- Strength: 1.0 (bump to 1.3–2.0 for critical)
- High‑velocity context (ephemeral notes, rapid switching)
- Half-life: 12–24 hours (λ ≈ 1.60e-5 to 8.02e-6)
- β = 0.8–0.9, τ_forget = 0.10–0.15, τ_promote = 0.70–0.75
- Long retention (research/archival)
- Half-life: 7–14 days (λ ≈ 1.15e-6 to 5.73e-7)
- β = 0.3–0.5, τ_forget = 0.02–0.05, τ_promote = 0.50–0.60
- Preference/decision heavy assistants
- Half-life: 3–7 days; β = 0.6–0.8
- Strength defaults: 1.3–1.5 for preferences; 1.8–2.0 for decisions
- Aggressive space control
- Raise τ_forget to 0.08–0.12 and/or shorten half-life; schedule weekly GC
- Environment template
- CORTEXGRAPH_DECAY_LAMBDA=2.673e-6, CORTEXGRAPH_DECAY_BETA=0.6
- CORTEXGRAPH_FORGET_THRESHOLD=0.05, CORTEXGRAPH_PROMOTE_THRESHOLD=0.65
- CORTEXGRAPH_PROMOTE_USE_COUNT=5, CORTEXGRAPH_PROMOTE_TIME_WINDOW=14
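The λ values in the cheat sheet follow directly from the relation λ = ln(2) / t₁/₂. A quick sketch to derive them for any half-life you choose (helper name is illustrative):

```python
import math

DAY = 86400  # seconds

def lam_from_half_life(seconds: float) -> float:
    """Decay constant: lambda = ln(2) / half-life, in 1/seconds."""
    return math.log(2) / seconds

print(lam_from_half_life(3 * DAY))    # balanced default, ~2.67e-6
print(lam_from_half_life(12 * 3600))  # high-velocity 12h, ~1.60e-5
print(lam_from_half_life(14 * DAY))   # long retention 14d, ~5.73e-7
```

The first value is what the environment template above sets as CORTEXGRAPH_DECAY_LAMBDA=2.673e-6.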
Decision thresholds:
- Forget: $\text{score} < 0.05$ → delete memory
- Promote: $\text{score} \geq 0.65$ OR $n_{\text{use}} \geq 5$ within 14 days → move to LTM
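The two decisions above can be expressed as a small classifier. The threshold constants match the documented defaults; the function itself is an illustrative sketch, not the server's API:

```python
FORGET_THRESHOLD = 0.05
PROMOTE_THRESHOLD = 0.65
PROMOTE_USE_COUNT = 5
PROMOTE_WINDOW_DAYS = 14

def decide(score: float, n_use: int, first_seen: float, now: float) -> str:
    """Classify a memory as 'promote', 'forget', or 'keep'."""
    within_window = (now - first_seen) <= PROMOTE_WINDOW_DAYS * 86400
    # Promotion is checked first: heavy recent use rescues a low-scoring memory.
    if score >= PROMOTE_THRESHOLD or (n_use >= PROMOTE_USE_COUNT and within_window):
        return "promote"   # move to long-term memory (LTM)
    if score < FORGET_THRESHOLD:
        return "forget"    # delete from the short-term store
    return "keep"          # leave in short-term memory
```

Checking promotion before forgetting means the use-count path can save a memory whose decayed score has already dropped below the forget threshold.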
Key Innovations
1. Temporal Decay with Reinforcement
Unlike traditional caching (TTL, LRU), CortexGraph scores memories continuously by combining recency (exponential decay), frequency (sub-linear use count), and importance (adjustable strength). See Core Algorithm for the mathematical formula. This creates memory dynamics that closely mimic human cognition.
2. Smart Prompting System + Natural Language Activation (v0.6.0+)
Patterns for making AI assistants use memory naturally, now enhanced with automatic entity extraction and importance scoring:
Auto-Enrichment (NEW in v0.6.0)
When you save memories, CortexGraph automatically:
- Extracts entities (people, technologies, organizations) using spaCy NER
- Calculates importance/strength based on content markers
- Detects save/recall intent from natural language phrases
# Before v0.6.0 - manual entity specification
save_memory(content="Use JWT for auth", entities=["JWT", "auth"])
# v0.6.0+ - automatic extraction
save_memory(content="Use JWT for auth")
# Entities auto-extracted: ["jwt", "auth"]
# Strength auto-calculated based on content
Auto-Save
User: "Remember: I prefer TypeScript over JavaScript"
→ Detected save phrase: "Remember"
→ Automatically saved with:
- Entities
---
FAQ
- What is the Mnemex MCP server?
- Mnemex is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
- How do MCP servers relate to agent skills?
- Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
- How are reviews shown for Mnemex?
- This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
Ratings
4.5 ★ · 10 reviews
- ★★★★★ Shikha Mishra · Oct 10, 2024
Mnemex is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.
- ★★★★★ Piyush G · Sep 9, 2024
We evaluated Mnemex against two servers with overlapping tools; this profile had the clearer scope statement.
- ★★★★★ Chaitanya Patil · Aug 8, 2024
Useful MCP listing: Mnemex is the kind of server we cite when onboarding engineers to host + tool permissions.
- ★★★★★ Sakshi Patil · Jul 7, 2024
Mnemex reduced integration guesswork — categories and install configs on the listing matched the upstream repo.
- ★★★★★ Ganesh Mohane · Jun 6, 2024
I recommend Mnemex for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.
- ★★★★★ Oshnikdeep · May 5, 2024
Strong directory entry: Mnemex surfaces stars and publisher context so we could sanity-check maintenance before adopting.
- ★★★★★ Dhruvi Jain · Apr 4, 2024
Mnemex has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.
- ★★★★★ Rahul Santra · Mar 3, 2024
According to our notes, Mnemex benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.
- ★★★★★ Pratham Ware · Feb 2, 2024
We wired Mnemex into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.
- ★★★★★ Yash Thakker · Jan 1, 2024
Mnemex is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.