nefesh-mcp-server

by nefesh-ai
Real-time human state awareness for AI agents via Model Context Protocol (MCP)
Provides AI agents with real-time awareness of human physiological and emotional state through biometric data analysis, including closed-loop feedback on adaptation effectiveness.
github stars
★ 0
best for
- General-purpose MCP workflows
capabilities
- request_api_key
- check_api_key_status
- get_human_state
- ingest
- get_trigger_memory
- get_session_history
about
nefesh-mcp-server is a community-built MCP server published by nefesh-ai that gives AI assistants tools and capabilities via the Model Context Protocol, providing real-time human state awareness for AI agents. It is categorized under AI/ML. This server exposes 6 tools that AI clients can invoke during conversations and coding sessions.
how to install
You can install nefesh-mcp-server in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server is hosted remotely and connects over Streamable HTTP; no local installation is required.
license
MIT
nefesh-mcp-server is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
readme
Nefesh MCP + A2A Server
A Model Context Protocol and Agent-to-Agent (A2A) server that gives AI agents real-time awareness of human physiological state.
What it does
Send sensor data (heart rate, voice, facial expression, text sentiment), get back a unified state with a machine-readable action your agent can follow directly. Zero prompt engineering required.
On the second and later calls, the response includes adaptation_effectiveness — telling your agent whether its previous approach actually worked. A closed-loop feedback system for self-improving agents.
Adaptation Effectiveness (Closed-Loop)
Most APIs give you a state. Nefesh tells you whether your reaction to that state actually worked.
On the second and later calls within a session, every response includes:

```json
{
  "state": "focused",
  "stress_score": 45,
  "suggested_action": "simplify_and_focus",
  "adaptation_effectiveness": {
    "previous_action": "de-escalate_and_shorten",
    "previous_score": 68,
    "current_score": 45,
    "stress_delta": -23,
    "effective": true
  }
}
```
Your agent can read `"effective": true` and know its previous de-escalation worked. If `"effective": false`, the agent adjusts its strategy. No other human-state system provides this feedback loop.
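As a rough sketch of how an agent might act on that feedback — assuming the response shape from the JSON example above, and noting that `next_strategy` and its fallback logic are illustrative, not part of the Nefesh API:

```python
# Minimal sketch: pick the agent's next approach from a get_human_state
# response. The response keys match the documented example; the fallback
# choices reuse the documented suggested_action values
# (maintain / simplify / de-escalate / pause).

def next_strategy(response: dict) -> str:
    feedback = response.get("adaptation_effectiveness")
    if feedback is None or feedback["effective"]:
        # First call in the session, or the previous action worked:
        # follow the server's suggestion.
        return response["suggested_action"]
    # Previous action failed to reduce stress: step up the intervention.
    return "pause" if response["stress_score"] >= 70 else "de-escalate"
```

The threshold of 70 is an arbitrary illustration; a real agent would tune this to its own interaction style.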
Setup
Option A: Connect first, get a key through your agent (fastest)
Add the config without an API key — your agent will get one automatically.
```json
{
  "mcpServers": {
    "nefesh": {
      "url": "https://mcp.nefesh.ai/mcp"
    }
  }
}
```
Then ask your agent:
"Connect to Nefesh and get me a free API key for name@example.com"
The agent calls request_api_key → you click one email link → the agent picks up the key. No signup form, no manual copy-paste. After that, add the key to your config for future sessions:
```json
{
  "mcpServers": {
    "nefesh": {
      "url": "https://mcp.nefesh.ai/mcp",
      "headers": {
        "X-Nefesh-Key": "nfsh_free_..."
      }
    }
  }
}
```
Option B: Get a key first, then connect
Sign up at nefesh.ai/signup (1,000 calls/month, no credit card), then add the config with your key:
```json
{
  "mcpServers": {
    "nefesh": {
      "url": "https://mcp.nefesh.ai/mcp",
      "headers": {
        "X-Nefesh-Key": "YOUR_API_KEY"
      }
    }
  }
}
```
Agent-specific config files
| Agent | Config file |
|---|---|
| Cursor | ~/.cursor/mcp.json |
| Windsurf | ~/.codeium/windsurf/mcp_config.json |
| Claude Desktop | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Claude Code | .mcp.json (project root) |
| VS Code (Copilot) | .vscode/mcp.json or ~/Library/Application Support/Code/User/mcp.json |
| Cline | cline_mcp_settings.json (via UI: "Configure MCP Servers") |
| Continue.dev | .continue/config.yaml |
| Roo Code | .roo/mcp.json |
| Kiro (Amazon) | ~/.kiro/mcp.json |
| OpenClaw | ~/.config/openclaw/mcp.json |
| JetBrains IDEs | Settings > Tools > MCP Server |
| Zed | ~/.config/zed/settings.json (uses context_servers) |
| OpenAI Codex CLI | ~/.codex/config.toml |
| Goose CLI | ~/.config/goose/config.yaml |
| ChatGPT Desktop | Settings > Apps > Add MCP Server (UI) |
| Gemini CLI | Settings (UI) |
| Augment | Settings Panel (UI) |
| Replit | Integrations Page (web UI) |
| LibreChat | librechat.yaml (self-hosted) |
<details>
<summary><strong>VS Code (Copilot)</strong> — uses <code>servers</code> in <code>.vscode/mcp.json</code></summary>

```json
{
  "servers": {
    "nefesh": {
      "type": "http",
      "url": "https://mcp.nefesh.ai/mcp",
      "headers": {
        "X-Nefesh-Key": "<YOUR_API_KEY>"
      }
    }
  }
}
```

</details>
<details>
<summary><strong>Zed</strong> — uses <code>context_servers</code> in settings.json</summary>
```json
{
  "context_servers": {
    "nefesh": {
      "settings": {
        "url": "https://mcp.nefesh.ai/mcp",
        "headers": {
          "X-Nefesh-Key": "<YOUR_API_KEY>"
        }
      }
    }
  }
}
```
</details>
<details>
<summary><strong>OpenAI Codex CLI</strong> — uses TOML in <code>~/.codex/config.toml</code></summary>
```toml
[mcp_servers.nefesh]
url = "https://mcp.nefesh.ai/mcp"
```
</details>
<details>
<summary><strong>Continue.dev</strong> — uses YAML in <code>.continue/config.yaml</code></summary>
```yaml
mcpServers:
  - name: nefesh
    type: streamable-http
    url: https://mcp.nefesh.ai/mcp
```
</details>
All agents connect via Streamable HTTP — no local installation required.
A2A Integration (Agent-to-Agent Protocol v1.0)
Nefesh is also available as an A2A-compatible agent. While MCP handles tool calling (your agent calls Nefesh), A2A enables agent collaboration — other AI agents can communicate with Nefesh as a peer.
- Agent Card: `/.well-known/agent-card.json`
- A2A Endpoint: `POST https://mcp.nefesh.ai/a2a` (JSON-RPC 2.0)
| A2A Skill | Description |
|---|---|
| `get-human-state` | Stress state (0–100), `suggested_action`, `adaptation_effectiveness` |
| `ingest-signals` | Send biometric signals, receive unified state |
| `get-trigger-memory` | Psychological trigger profile (active vs. resolved) |
| `get-session-history` | Timestamped history with trend |
Same authentication as MCP — X-Nefesh-Key header or Authorization: Bearer token. Free tier works on both protocols.
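Under JSON-RPC 2.0, a call to the A2A endpoint is a single POST body. The envelope below is a sketch: mapping the skill name directly to the JSON-RPC `method` field is an assumption — the agent card at `/.well-known/agent-card.json` is authoritative for the actual method and parameter names.

```python
import json

def a2a_envelope(skill: str, params: dict, request_id: int = 1) -> str:
    # Build a JSON-RPC 2.0 request body for POST https://mcp.nefesh.ai/a2a.
    # Using the skill name as the method is an assumption; check the
    # agent card for the real method names before relying on this.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": skill,
        "params": params,
    })

body = a2a_envelope("get-human-state", {"session_id": "demo"})
```

Send it with the same `X-Nefesh-Key` header (or `Authorization: Bearer` token) you use for MCP.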
MCP Tools
| Tool | Auth | Description |
|---|---|---|
| `request_api_key` | No | Request a free API key by email. Poll with `check_api_key_status` until ready. |
| `check_api_key_status` | No | Poll for API key activation. Returns `pending`, or `ready` with the API key. |
| `get_human_state` | Yes | Get stress state (0–100), `suggested_action` (maintain/simplify/de-escalate/pause), and `adaptation_effectiveness` — a closed loop showing whether your previous action reduced stress. |
| `ingest` | Yes | Send biometric signals (heart rate, HRV, voice tone, expression, sentiment, 30+ fields) and get the unified state back. Include `subject_id` for trigger memory. |
| `get_trigger_memory` | Yes | Get the psychological trigger profile — which topics cause stress (active) and which have been resolved over time. |
| `get_session_history` | Yes | Get timestamped state history with trend (rising/falling/stable). |
How self-provisioning works
Your AI agent can get a free API key autonomously. You only click one email link.
1. Agent calls `request_api_key(email)` — no API key needed for this call
2. You receive a verification email and click the link
3. Agent polls `check_api_key_status(request_id)` every 10 seconds
4. Once verified, the agent receives the API key and can use all other tools
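The polling flow can be sketched as follows. This is a minimal illustration: `mcp_call` stands in for however your agent framework invokes an MCP tool, and the `request_id` / `status` / `api_key` field names are assumptions based on the tool descriptions, not a documented schema.

```python
import time

def provision_key(mcp_call, email: str,
                  poll_seconds: float = 10, max_polls: int = 30) -> str:
    # Step 1: request a key; this tool needs no authentication.
    request = mcp_call("request_api_key", {"email": email})
    # Steps 2-4: the user clicks the email link while we poll.
    for _ in range(max_polls):
        status = mcp_call("check_api_key_status",
                          {"request_id": request["request_id"]})
        if status.get("status") == "ready":
            return status["api_key"]
        time.sleep(poll_seconds)  # wait for the email link to be clicked
    raise TimeoutError("API key request was not verified in time")
```

With the default settings this gives the user up to five minutes to click the verification link.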
Free tier: 1,000 calls/month, all signal types, 10 req/min. No credit card.
Quick test
After adding the config, ask your AI agent:
"What tools do you have from Nefesh?"
It should list the 6 tools above.
Pricing
| Plan | Price | API Calls |
|---|---|---|
| Free | $0 | 1,000/month, no credit card |
| Solo | $25/month | 50,000/month |
| Enterprise | Custom | Custom SLA |
CLI Alternative
Prefer the terminal over MCP? Use the Nefesh CLI (10-32x lower token cost than MCP for AI agents):
```bash
npm install -g @nefesh/cli
nefesh ingest --session test --heart-rate 72 --tone calm
nefesh state test --json
```
GitHub: nefesh-ai/nefesh-cli
Privacy
- No video or audio uploads — edge processing runs client-side
- No PII stored
- GDPR/BIPA compliant — cascading deletion via `delete_subject`
- Not a medical device — for contextual AI adaptation only
License
MIT — see LICENSE.
FAQ
- What is the nefesh-mcp-server MCP server?
- nefesh-mcp-server is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
- How do MCP servers relate to agent skills?
- Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
- How are reviews shown for nefesh-mcp-server?
- This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.