Ask Human

by masony817
Ask Human adds human-in-the-loop responses to AI, preventing errors on sensitive tasks like passwords and API endpoints.
It enables escalation of questions to humans through a markdown file-based workflow, preventing hallucinations by providing direct human answers for critical decisions like database passwords, API endpoints, or architectural choices.
best for
- AI agents needing clarification on API endpoints or credentials
- Preventing costly mistakes from AI false confidence
- Code generation requiring human domain knowledge
- Critical decisions that need human oversight
capabilities
- Ask humans questions when AI is uncertain
- Log questions with context in markdown files
- Watch for human responses and continue processing
- Track question history with timestamps
- Handle multiple concurrent questions
- Prevent hallucinations through human verification
what it does
Creates a markdown file-based workflow where AI can escalate questions to humans instead of making incorrect assumptions or hallucinating answers.
about
Ask Human is a community-built MCP server published by masony817 that provides AI assistants with tools and capabilities via the Model Context Protocol. Ask Human adds human-in-the-loop responses to AI, preventing errors on sensitive tasks like passwords and API endpoints. It is categorized under productivity and developer tools.
how to install
You can install Ask Human in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
license
MIT
Ask Human is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
readme
ask-human mcp 🧑💻🤝🤖
stop your ai from hallucinating. gives it an escape route when confused instead of false confidence.
the pain
- ai blurts out an endpoint that never existed
- the agent makes assumptions that simply aren't true, with full confidence
- repeat x100 and your day is spent debugging false confidence when you could simply have asked a question
the fix
an mcp server that lets the agent raise its hand instead of hallucinating. feels like mentoring a sharp intern who actually asks before guessing.
agent → ask_human()
⬇
question lands in ask_human.md
⬇
you swap "PENDING" for the answer
⬇
agent keeps coding
sample file:
### Q8c4f1e2a
ts: 2025-01-15 14:30
q: which auth endpoint do we use?
ctx: building login form in auth.js
answer: PENDING
you drop:
answer: POST /api/v2/auth/login
boom. flow continues and hopefully the issues are solved.
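the entry format in the sample above can be sketched in a few lines of python (`make_entry` is a hypothetical helper mirroring the sample file's fields, not the server's actual API):

```python
import uuid
from datetime import datetime

def make_entry(question: str, context: str) -> str:
    """Format a pending Q&A block like the ones in ask_human.md."""
    qid = uuid.uuid4().hex[:8]  # short unique id, e.g. "8c4f1e2a"
    ts = datetime.now().strftime("%Y-%m-%d %H:%M")
    return (
        f"### Q{qid}\n"
        f"ts: {ts}\n"
        f"q: {question}\n"
        f"ctx: {context}\n"
        f"answer: PENDING\n"
    )

entry = make_entry("which auth endpoint do we use?", "building login form in auth.js")
```

the agent appends a block like this and then watches for "PENDING" to change.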
why it's good
- pip install ask-human-mcp → done
- zero config, cross-platform
- watches the file, instant feedback
- multiple agents, no sweat
- locks + limits so nothing catches fire
- full q&a history in markdown (nice paper-trail for debugging)
30-sec setup
pip install ask-human-mcp
ask-human-mcp
.cursor/mcp.json:
{
  "mcpServers": {
    "ask-human": { "command": "ask-human-mcp" }
  }
}
restart cursor and vibe.
how it works
- ai gets stuck → calls ask_human(question, context)
- question logged → appears in ask_human.md with a unique ID
- human answers → replace "PENDING" with your response
- ai continues → uses your answer to proceed
the ai receives your answer and keeps coding!
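the wait step can be sketched as a simple polling loop (a sketch only — the real server uses file watching for instant feedback, and `wait_for_answer` is a hypothetical helper, not its actual API):

```python
import re
import time
from pathlib import Path

def wait_for_answer(qa_file: Path, qid: str, timeout: float = 30.0,
                    interval: float = 0.5) -> str:
    """Poll the Q&A file until the block for `qid` has an answer other than PENDING."""
    # find the block for this question id, then grab the text after "answer:"
    pattern = re.compile(rf"### Q{qid}\b.*?^answer: ([^\n]+)", re.S | re.M)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        m = pattern.search(qa_file.read_text())
        if m and m.group(1).strip() != "PENDING":
            return m.group(1).strip()
        time.sleep(interval)
    raise TimeoutError(f"no answer for Q{qid} within {timeout}s")
```

swap the polling for a file-watcher callback and you have the shape of the real thing.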
config options (if you want them)
command line
ask-human-mcp --help
ask-human-mcp --port 3000 --host 0.0.0.0 # http mode
ask-human-mcp --timeout 1800 # 30min timeout
ask-human-mcp --file custom_qa.md # custom q&a file
ask-human-mcp --max-pending 50 # max concurrent questions
ask-human-mcp --max-question-length 5000 # max question size
ask-human-mcp --rotation-size 10485760 # rotate file at 10mb
different clients
cursor (local):
{
  "mcpServers": {
    "ask-human": {
      "command": "ask-human-mcp",
      "args": ["--timeout", "900"]
    }
  }
}
cursor (http):
{
  "mcpServers": {
    "ask-human": {
      "url": "http://localhost:3000/sse"
    }
  }
}
claude desktop:
{
  "mcpServers": {
    "ask-human": {
      "command": "ask-human-mcp"
    }
  }
}
what's in the box
- zero configuration → works out of the box
- file watching → instant response when you save answers
- timeout handling → questions don't hang forever
- concurrent questions → handle multiple ai agents
- persistent logging → full q&a history in markdown
- cross-platform → windows, macos, linux
- mcp standard → works with any mcp client
- input validation → size limits and sanitization
- file rotation → automatic archiving of large files
- resource limits → prevent dos and memory leaks
- robust parsing → handles malformed markdown gracefully
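the "robust parsing" point can be illustrated with a tolerant block parser (a sketch under assumed behavior, not the server's actual implementation): it extracts whatever fields are present and skips malformed lines instead of raising.

```python
import re

def parse_block(block: str) -> dict:
    """Tolerantly parse one '### Q...' block into a dict; malformed lines are skipped."""
    fields = {}
    m = re.match(r"### Q(\w+)", block)
    if m:
        fields["id"] = m.group(1)
    for line in block.splitlines()[1:]:
        if ":" in line:
            # partition splits at the first colon only, so timestamps survive intact
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields
```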
security stuff
- input sanitization → removes control characters and validates sizes
- file locking → prevents corruption from concurrent access
- secure permissions → files created with restricted access
- resource limits → prevents memory exhaustion and dos attacks
- path validation → ensures files are written to safe locations
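a minimal sketch of the sanitization step (assumed behavior based on the list above — drop control characters, enforce a size cap; `sanitize` is a hypothetical helper):

```python
def sanitize(text: str, max_len: int = 10_000) -> str:
    """Drop control characters (keeping newlines and tabs) and enforce a length cap."""
    cleaned = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    if len(cleaned) > max_len:
        raise ValueError(f"input exceeds {max_len} characters")
    return cleaned
```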
limits (so nothing breaks)
| thing | default | what it does |
|---|---|---|
| question length | 10kb | max characters per question |
| context length | 50kb | max characters per context |
| pending questions | 100 | max concurrent questions |
| file size | 100mb | max ask file size |
| rotation size | 50mb | size at which files are archived |
platform support
- windows → full support with native file locking
- macos → full support with fsevents file watching
- linux → full support with inotify file watching
api stuff
ask_human(question, context="")
ask the human a question and wait for response.
answer = await ask_human(
"what database should i use for this project?",
"building a chat app with 1000+ concurrent users"
)
other tools
- list_pending_questions() → get questions waiting for answers
- get_qa_stats() → get stats about the q&a session
development
from source
git clone https://github.com/masonyarbrough/ask-human-mcp.git
cd ask-human-mcp
pip install -e ".[dev]"
ask-human-mcp
tests
pytest tests/ -v
code quality
black ask_human_mcp tests
ruff check ask_human_mcp tests
mypy ask_human_mcp
contributing
would love any contributors
issues
use the github issue tracker to report bugs or request features.
you can also just email me: mason@kallro.com
include:
- python version
- operating system
- mcp client (cursor, claude desktop, etc.)
- error messages or logs
- steps to reproduce
changelog
see CHANGELOG.md for version history.
license
mit license - see LICENSE file for details.
thanks
- model context protocol for the excellent standard
- anthropic for claude and mcp support
- cursor for mcp integration
- all contributors and users providing feedback
FAQ
- What is the Ask Human MCP server?
- Ask Human is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
- How do MCP servers relate to agent skills?
- Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.