Read Website Fast

by just-every
Extract web content and convert it to clean Markdown. Fast data extraction from web pages with caching and robots.txt support.
Extracts web content and converts it to clean Markdown format using Mozilla Readability for intelligent article detection, with disk-based caching, robots.txt compliance, and concurrent crawling capabilities for fast content processing workflows.
best for
- AI agents analyzing web content
- Documentation research workflows
- Content extraction for LLM processing
- Building knowledge bases from web sources
capabilities
- Extract article content from websites
- Convert HTML to clean Markdown
- Cache content locally for faster repeat access
- Crawl multiple pages concurrently
- Respect robots.txt and rate limits
- Preserve links for knowledge graphs
what it does
Extracts web content and converts it to clean Markdown using Mozilla Readability, with caching and rate limiting for efficient AI workflows.
about
Read Website Fast is a community-built MCP server published by just-every that provides AI assistants with tools and capabilities via the Model Context Protocol. It extracts web content and converts it to clean Markdown, with fast data extraction, caching, and robots.txt support. It is categorized under search, web, and productivity. This server exposes 1 tool that AI clients can invoke during conversations and coding sessions.
how to install
You can install Read Website Fast in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
license
MIT
Read Website Fast is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
readme
@just-every/mcp-read-website-fast
Fast, token-efficient web content extraction for AI agents - converts websites to clean Markdown.
<a href="https://glama.ai/mcp/servers/@just-every/mcp-read-website-fast"> <img width="380" height="200" src="https://glama.ai/mcp/servers/@just-every/mcp-read-website-fast/badge" alt="read-website-fast MCP server" /> </a>
Overview
Existing MCP web crawlers are slow and consume large quantities of tokens. This slows the development process and yields incomplete results, since LLMs are forced to parse entire web pages.
This MCP package fetches web pages locally, strips noise, and converts content to clean Markdown while preserving links. Designed for Claude Code, IDEs and LLM pipelines with minimal token footprint. Crawl sites locally with minimal dependencies.
Note: This package now uses @just-every/crawl for its core crawling and markdown conversion functionality.
Features
- Fast startup using official MCP SDK with lazy loading for optimal performance
- Content extraction using Mozilla Readability (same as Firefox Reader View)
- HTML to Markdown conversion with Turndown + GFM support
- Smart caching with SHA-256 hashed URLs
- Polite crawling with robots.txt support and rate limiting
- Concurrent fetching with configurable depth crawling
- Stream-first design for low memory usage
- Link preservation for knowledge graphs
- Optional chunking for downstream processing
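The "smart caching" feature keys cache entries by a SHA-256 hash of the URL. A minimal sketch of that idea in TypeScript (the package's actual key format and cache layout are implementation details, so `cacheKey` here is a hypothetical helper):

```typescript
import { createHash } from "node:crypto";

// Derive a stable, filesystem-safe cache key from a URL by hashing it.
// Identical URLs always map to the same 64-character hex string.
function cacheKey(url: string): string {
  return createHash("sha256").update(url).digest("hex");
}
```

Hashing the URL avoids encoding problems with long paths and query strings while keeping repeat lookups O(1) against the cache directory.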
Installation
Claude Code
claude mcp add read-website-fast -s user -- npx -y @just-every/mcp-read-website-fast
VS Code
code --add-mcp '{"name":"read-website-fast","command":"npx","args":["-y","@just-every/mcp-read-website-fast"]}'
Cursor
cursor://anysphere.cursor-deeplink/mcp/install?name=read-website-fast&config=eyJyZWFkLXdlYnNpdGUtZmFzdCI6eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIkBqdXN0LWV2ZXJ5L21jcC1yZWFkLXdlYnNpdGUtZmFzdCJdfX0=
JetBrains IDEs
Settings → Tools → AI Assistant → Model Context Protocol (MCP) → Add
Choose “As JSON” and paste:
{"command":"npx","args":["-y","@just-every/mcp-read-website-fast"]}
Or, in the chat window, type /add and fill in the same JSON—both paths land the server in a single step. 
Raw JSON (works in any MCP client)
{
  "mcpServers": {
    "read-website-fast": {
      "command": "npx",
      "args": ["-y", "@just-every/mcp-read-website-fast"]
    }
  }
}
Drop this into your client’s mcp.json (e.g. .vscode/mcp.json, ~/.cursor/mcp.json, or .mcp.json for Claude).
Available Tools
- `read_website` - Fetches a webpage and converts it to clean Markdown
  - Parameters:
    - `url` (required): The HTTP/HTTPS URL to fetch
    - `pages` (optional): Maximum number of pages to crawl (default: 1, max: 100)
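For reference, an MCP client invokes this tool with a standard JSON-RPC `tools/call` request over the stdio transport; a minimal sketch (the URL is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_website",
    "arguments": {
      "url": "https://example.com/article",
      "pages": 1
    }
  }
}
```

The response carries the extracted Markdown in the tool result's content, ready to drop into an LLM context.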
Available Resources
- `read-website-fast://status` - Get cache statistics
- `read-website-fast://clear-cache` - Clear the cache directory
Development Usage
Install
npm install
npm run build
Single page fetch
npm run dev fetch https://example.com/article
Crawl with depth
npm run dev fetch https://example.com --depth 2 --concurrency 5
Output formats
# Markdown only (default)
npm run dev fetch https://example.com
# JSON output with metadata
npm run dev fetch https://example.com --output json
# Both URL and markdown
npm run dev fetch https://example.com --output both
CLI Options
- `-p, --pages <number>` - Maximum number of pages to crawl (default: 1)
- `-c, --concurrency <number>` - Max concurrent requests (default: 3)
- `--no-robots` - Ignore robots.txt
- `--all-origins` - Allow cross-origin crawling
- `-u, --user-agent <string>` - Custom user agent
- `--cache-dir <path>` - Cache directory (default: .cache)
- `-t, --timeout <ms>` - Request timeout in milliseconds (default: 30000)
- `-o, --output <format>` - Output format: json, markdown, or both (default: markdown)
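The flags compose. For example, an illustrative invocation that crawls up to 10 pages with 5 concurrent requests, a 60-second timeout, a custom user agent (placeholder string), and JSON output:

```
npm run dev fetch https://example.com --pages 10 --concurrency 5 --timeout 60000 --user-agent "my-crawler/1.0" --output json
```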
Clear cache
npm run dev clear-cache
Auto-Restart Feature
The MCP server includes automatic restart capability by default for improved reliability:
- Automatically restarts the server if it crashes
- Handles unhandled exceptions and promise rejections
- Implements exponential backoff (max 10 attempts in 1 minute)
- Logs all restart attempts for monitoring
- Gracefully handles shutdown signals (SIGINT, SIGTERM)
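The restart policy above can be sketched in a few lines of TypeScript. This is a hypothetical illustration, not the package's actual `serve-restart.ts`; the 1 s base and 30 s cap are assumptions, while the 10-attempts-per-minute limit comes from the list above:

```typescript
// Exponential backoff: 1s, 2s, 4s, ... capped (cap value is an assumption).
function backoffDelayMs(attempt: number, baseMs = 1_000, capMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Give up once 10 restarts have already occurred within the last minute.
function shouldRestart(restartTimesMs: number[], nowMs: number): boolean {
  const windowStart = nowMs - 60_000;
  const recent = restartTimesMs.filter((t) => t >= windowStart);
  return recent.length < 10;
}
```

The rolling one-minute window prevents a crash loop from respawning the server indefinitely, while the exponential delay gives transient failures (e.g. a briefly unreachable network) time to clear.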
For development/debugging without auto-restart:
# Run directly without restart wrapper
npm run serve:dev
Architecture
mcp/
├── src/
│   ├── crawler/          # URL fetching, queue management, robots.txt
│   ├── parser/           # DOM parsing, Readability, Turndown conversion
│   ├── cache/            # Disk-based caching with SHA-256 keys
│   ├── utils/            # Logger, chunker utilities
│   ├── index.ts          # CLI entry point
│   ├── serve.ts          # MCP server entry point
│   └── serve-restart.ts  # Auto-restart wrapper
Development
# Run in development mode
npm run dev fetch https://example.com
# Build for production
npm run build
# Run tests
npm test
# Type checking
npm run typecheck
# Linting
npm run lint
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
Troubleshooting
Cache Issues
npm run dev clear-cache
Timeout Errors
- Increase the timeout with the `-t` flag
- Check network connectivity
- Verify the URL is accessible
Content Not Extracted
- Some sites block automated access
- Try a custom user agent with the `-u` flag
- Check if the site requires JavaScript (not supported)
License
MIT
FAQ
- What is the Read Website Fast MCP server?
- Read Website Fast is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
- How do MCP servers relate to agent skills?
- Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
- How are reviews shown for Read Website Fast?
- This profile displays 52 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.7 out of 5—verify behavior in your own environment before production use.
Discussion
Product Hunt–style comments (not star reviews)
- No comments yet — start the thread.
Ratings
4.7 ★★★★★ · 52 reviews
- ★★★★★ Aanya Chawla · Dec 28, 2024
Read Website Fast is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.
- ★★★★★Shikha Mishra· Dec 24, 2024
According to our notes, Read Website Fast benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.
- ★★★★★Aditi Rahman· Dec 24, 2024
According to our notes, Read Website Fast benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.
- ★★★★★Yusuf Patel· Dec 24, 2024
Read Website Fast is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.
- ★★★★★Kabir Rahman· Dec 4, 2024
Read Website Fast has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.
- ★★★★★Aarav Perez· Dec 4, 2024
Useful MCP listing: Read Website Fast is the kind of server we cite when onboarding engineers to host + tool permissions.
- ★★★★★Amina White· Nov 27, 2024
We evaluated Read Website Fast against two servers with overlapping tools; this profile had the clearer scope statement.
- ★★★★★Naina Agarwal· Nov 23, 2024
Read Website Fast is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.
- ★★★★★Meera Diallo· Nov 19, 2024
Read Website Fast is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.
- ★★★★★Yash Thakker· Nov 15, 2024
We wired Read Website Fast into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.
showing 1-10 of 52