search-web

GetWeb

by ivan-mezentsev

GetWeb offers reliable web scraping and content extraction. Scrape any website with advanced internet scraping and filtering.

Integrates DuckDuckGo, Google Search, Felo AI, and Jina Reader APIs to provide web search, content extraction, and HTML-to-Markdown conversion with caching, user agent rotation, and configurable text filtering for reliable web research and information retrieval.

github stars

13

  • Multiple search engines in one server
  • Built-in caching and anti-blocking features
  • HTML-to-Markdown conversion included

best for

  • Researchers gathering information from multiple sources
  • Content creators needing web data in markdown format
  • Developers building search-powered applications
  • Anyone automating web research workflows

capabilities

  • Search DuckDuckGo and Google with customizable result counts
  • Extract and convert web pages to markdown format
  • Filter and clean extracted text content
  • Cache search results to reduce API calls
  • Rotate user agents to avoid blocking

what it does

Searches the web using multiple search engines (DuckDuckGo, Google, Felo AI) and extracts/converts web content to markdown format. Includes caching and user agent rotation for reliable web scraping.

about

GetWeb is a community-built MCP server published by ivan-mezentsev that provides AI assistants with tools and capabilities via the Model Context Protocol. GetWeb offers reliable web scraping and content extraction, letting you scrape any website with advanced internet scraping and filtering. It is categorized under search web.

how to install

You can install GetWeb in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

license

MIT

GetWeb is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

readme

MCP-GetWeb


A Model Context Protocol (MCP) server that provides web search and content extraction capabilities.

Quick Start

{
  "mcpServers": {
    "getweb": {
      "command": "npx",
      "args": [
        "mcp-getweb"
      ],
      "type": "stdio",
      "env": {
        "GOOGLE_API_KEY": "XXXXXXXXX",
        "GOOGLE_SEARCH_ENGINE_ID": "XXXXXXXXX",
        "JINA_API_KEY": "jina_XXXXXXXXX",
        "LINKUP_API_KEY": "XXXXXXXXX",
        "EXA_API_KEY": "XXXXXXXXX"
      }
    }
  }
}

Features

1) DuckDuckGo Search (duckduckgo-search)

Search the web using DuckDuckGo with HTML scraping.

Parameters:

  • query (string, required): The search query
  • page (integer, optional): Page number (default: 1, min: 1)
  • numResults (integer, optional): Number of results to return (default: 10, min: 1, max: 20)
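To illustrate, a tool call to duckduckgo-search might pass an arguments payload like the following (the query value is a made-up example):

```json
{
  "query": "model context protocol servers",
  "page": 1,
  "numResults": 10
}
```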

2) Google Search (google-search)

Search Google and return relevant results using the Programmable Search Engine.

Parameters:

  • query (string, required): Search query; quotes enable exact matches
  • num_results (integer, optional): Total results to return (default: 5, max: 10)
  • site (string, optional): Restrict to a specific site/domain (e.g., wikipedia.org)
  • language (string, optional): ISO 639-1 language code (e.g., en, es)
  • dateRestrict (string, optional): Date filter, e.g., d7, w4, m6, y1
  • exactTerms (string, optional): Exact phrase that must appear
  • resultType (string, optional): Result type: image|images|news|video|videos
  • page (integer, optional): Page number for pagination (default: 1, min: 1)
  • resultsPerPage (integer, optional): Results per page (default: 5, max: 10)
  • sort (string, optional): Sort order, relevance (default) or date

Note: Requires GOOGLE_API_KEY and GOOGLE_SEARCH_ENGINE_ID to be set.
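As a sketch of how the optional filters combine, a google-search call restricted to one domain and the last six months could look like this (all values are hypothetical):

```json
{
  "query": "\"model context protocol\" tutorial",
  "num_results": 5,
  "site": "github.com",
  "language": "en",
  "dateRestrict": "m6",
  "sort": "date"
}
```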

3) Linkup Search (linkup_search)

Search the web via Linkup API and return relevant results in Markdown.

Parameters:

  • query (string, required): Natural-language search query
  • onlySearchTheseDomains (array of strings, optional): Restrict results to specific domains
  • dateFilter (object, optional): Date range filter
    • fromDate (string, optional): Start date in YYYY-MM-DD
    • toDate (string, optional): End date in YYYY-MM-DD
  • maxResults (integer, optional): Maximum number of results to return (default: 5, min: 1)

Note: Requires LINKUP_API_KEY to be set.
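A hypothetical linkup_search payload showing the nested dateFilter object and a domain restriction (the dates and domains are illustrative only):

```json
{
  "query": "recent developments in web scraping tooling",
  "onlySearchTheseDomains": ["arxiv.org", "github.com"],
  "dateFilter": {
    "fromDate": "2024-01-01",
    "toDate": "2024-06-30"
  },
  "maxResults": 5
}
```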

4) Exa Search (exa_search)

Search the web via Exa API and return relevant results in Markdown.

Parameters:

  • query (string, required): Natural-language search query
  • maxResults (integer, optional): Number of results to return (default: 10, max: 25)
  • publishedDateRange (object, optional): Published date range filter
    • fromDate (string, optional): Start date, RFC3339 (e.g., 2024-02-09T00:00:00.000Z) or YYYY-MM-DD
    • toDate (string, optional): End date, RFC3339 (e.g., 2024-02-09T00:00:00.000Z) or YYYY-MM-DD
  • crawlDateRange (object, optional): Crawl date range filter
    • fromDate (string, optional): Start date, RFC3339 or YYYY-MM-DD
    • toDate (string, optional): End date, RFC3339 or YYYY-MM-DD
  • userLocation (string, optional): Two-letter ISO country code (e.g., US)
  • includeText (string, optional): Exact phrase that must appear in the webpage text (max 5 words)
  • excludeText (string, optional): Exact phrase that must not appear in the webpage text (max 5 words)
  • domain (string, optional): Restrict results to a single domain (e.g., arxiv.org)

Note: Requires EXA_API_KEY to be set.
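For example, an exa_search call combining a published-date range (RFC3339 form) with a location and domain filter might look like this (values are assumptions for illustration):

```json
{
  "query": "neural web search techniques",
  "maxResults": 10,
  "publishedDateRange": {
    "fromDate": "2024-02-09T00:00:00.000Z",
    "toDate": "2024-12-31T00:00:00.000Z"
  },
  "userLocation": "US",
  "domain": "arxiv.org"
}
```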

5) Felo AI Search (felo-search)

AI-powered search with contextual responses for up-to-date technical information (releases, advisories, migrations, benchmarks, community insights).

Parameters:

  • query (string, required): The search query or prompt
  • stream (boolean, optional): Whether to stream the response (default: false)

6) URL Content Fetcher (fetch-url)

Fetch the clean content of a URL and return it as text.

Parameters:

  • url (string, required): The URL to fetch
  • maxLength (integer, optional): Maximum content length (default: 30000, min: 1000, max: 500000)
  • extractMainContent (boolean, optional): Attempt to extract main content when HTML (default: true)
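A minimal fetch-url arguments payload, using the documented defaults (the URL is a placeholder):

```json
{
  "url": "https://example.com/article",
  "maxLength": 30000,
  "extractMainContent": true
}
```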

7) URL Metadata Extractor (url-metadata)

Extract metadata (title, description, image, favicon) from a URL.

Parameters:

  • url (string, required): The URL to extract metadata from

8) URL Fetch to Markdown (url-fetch)

Fetch web pages and convert them to Markdown. Handles HTML, plaintext, and JSON (pretty-printed in a fenced block).

Parameters:

  • url (string, required): The URL to fetch and convert to Markdown

9) Jina Reader (jina-reader)

Retrieve LLM-friendly content from a URL using Jina r.reader with optional summaries and formats.

Parameters:

  • url (string, required): The URL to fetch and parse
  • maxLength (integer, optional): Maximum output length (default: 10000, min: 1000, max: 50000)
  • withLinksummary (boolean, optional): Include links summary (default: false)
  • withImagesSummary (boolean, optional): Include images summary (default: false)
  • withGeneratedAlt (boolean, optional): Generate alt text for images (default: false)
  • returnFormat (string, optional): markdown (default) | html | text | screenshot | pageshot
  • noCache (boolean, optional): Bypass cache (default: false)
  • timeout (integer, optional): Max seconds to wait (default: 10, min: 5, max: 30)

Note: Requires JINA_API_KEY to be set.
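Putting the jina-reader options together, a call requesting Markdown output with an images summary and a longer timeout could look like this (the URL and values are hypothetical):

```json
{
  "url": "https://example.com/docs",
  "maxLength": 10000,
  "withImagesSummary": true,
  "returnFormat": "markdown",
  "noCache": false,
  "timeout": 15
}
```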

Acknowledgments

  • Model Context Protocol specification by Anthropic
  • DuckDuckGo for providing a privacy-focused web search experience
  • Google Programmable Search Engine and Custom Search JSON API
  • Linkup API for high-quality web search results
  • Exa API for fast neural web search
  • Jina AI r.reader API for high-quality content extraction
  • Felo AI for up-to-date, developer-focused search insights
  • Rust ecosystem and crates that power this server:
    • tokio, reqwest, serde, serde_json, tracing, tracing-subscriber, clap
    • html2text, chardetng, encoding_rs, scraper, html5ever, markup5ever_rcdom, regex, once_cell, futures, async-stream
    • url, uuid, thiserror, tokio-util, rand, urlencoding
  • The broader MCP community for guidance, examples, and discussions

Support

If you encounter any issues or have questions, please open an issue on GitHub.

FAQ

What is the GetWeb MCP server?
GetWeb is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for GetWeb?
This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
MCP server reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra· Oct 10, 2024

    GetWeb is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.

  • Piyush G· Sep 9, 2024

    We evaluated GetWeb against two servers with overlapping tools; this profile had the clearer scope statement.

  • Chaitanya Patil· Aug 8, 2024

    Useful MCP listing: GetWeb is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Sakshi Patil· Jul 7, 2024

    GetWeb reduced integration guesswork — categories and install configs on the listing matched the upstream repo.

  • Ganesh Mohane· Jun 6, 2024

    I recommend GetWeb for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Oshnikdeep· May 5, 2024

    Strong directory entry: GetWeb surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Dhruvi Jain· Apr 4, 2024

    GetWeb has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.

  • Rahul Santra· Mar 3, 2024

    According to our notes, GetWeb benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.

  • Pratham Ware· Feb 2, 2024

    We wired GetWeb into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.

  • Yash Thakker· Jan 1, 2024

    GetWeb is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.