Tag: firecrawl

17 indexed skills · max 10 per page

skills (17)

firecrawl-map

firecrawl/cli · Productivity


Discover and filter URLs on a website, with optional search to locate specific pages.
- Supports filtering by search query to find pages matching keywords within large sites
- Includes sitemap handling strategies (include, skip, or use only) and optional subdomain inclusion
- Outputs results as plain text or JSON with configurable URL limits
- Commonly paired with firecrawl-scrape: use map with search to find the target URL, then scrape it
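The map → scrape pairing described above can be sketched as two commands. This is a hedged sketch: the subcommand names come from the listing, but the flag names (`--search`, `--limit`, `--json`) are assumptions, not confirmed options.

```shell
# Hypothetical sketch -- --search, --limit, and --json are assumed flag names.
# Step 1: discover URLs on the site matching a keyword, as JSON, capped at 50.
firecrawl map https://example.com --search "pricing" --limit 50 --json

# Step 2: scrape the URL that the map step surfaced.
firecrawl scrape https://example.com/pricing
```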

firecrawl

vm0-ai/vm0-skills · Productivity


Use the Firecrawl API via direct curl calls to scrape websites and extract data for AI.
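A direct curl call of the kind this skill describes might look like the following. This is a minimal sketch against Firecrawl's v1 scrape endpoint; it assumes an API key in the `FIRECRAWL_API_KEY` environment variable.

```shell
# Scrape a single URL via the Firecrawl API and return LLM-ready markdown.
# Assumes FIRECRAWL_API_KEY holds a valid key.
curl -s https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "formats": ["markdown"]}'
```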

firecrawl-agent

firecrawl/cli · Productivity


AI-powered autonomous extraction of structured data from complex multi-page websites.
- Navigates sites intelligently to locate and extract data, returning results as JSON with optional schema validation
- Supports custom JSON schemas for predictable structured output, or freeform extraction when no schema is provided
- Offers two model tiers (spark-1-mini and spark-1-pro) with credit limits and optional waiting for inline results
- Best suited for multi-page extraction tasks; use simple…
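An agent invocation along these lines is sketched below. The model tier names (spark-1-mini, spark-1-pro) come from the listing, but every flag here (`--schema`, `--model`, `--wait`) is a hypothetical name for illustration only.

```shell
# Hypothetical sketch -- --schema, --model, and --wait are assumed flag names.
# Extract structured data across pages, validated against a custom JSON schema,
# using the cheaper model tier and waiting for inline results.
firecrawl agent "List every product name and price in this store" \
  --schema products-schema.json \
  --model spark-1-mini \
  --wait
```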

firecrawl

firecrawl/cli · Productivity


Web scraping, search, crawling, and browser automation with LLM-optimized markdown output.
- Supports six command modes: search for discovery, scrape for single URLs, map to locate subpages, crawl for bulk site sections, browser for interactive content, and download for offline archives
- Returns clean markdown formatted for LLM context windows; write results to .firecrawl/ directory to avoid redundant fetches and manage large outputs
- Includes escalation workflow: start with search or sc…
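The six command modes named above can be sketched in escalation order. The subcommand names come from the listing itself; all options are omitted because the listing does not confirm flag names, and the URLs are placeholders.

```shell
# The six modes of the firecrawl CLI, lightest to heaviest (per the listing).
firecrawl search "example docs keyword"      # 1. discovery via web search
firecrawl scrape https://example.com/page    # 2. one URL to clean markdown
firecrawl map https://example.com            # 3. locate subpages on a site
firecrawl crawl https://example.com/docs     # 4. bulk-extract a site section
firecrawl browser https://example.com/app    # 5. interactive / JS-heavy content
firecrawl download https://example.com       # 6. offline archive
```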

firecrawl-scrape

firecrawl/cli · Productivity


Extract clean markdown from any URL, including JavaScript-rendered single-page applications.
- Handles both static pages and JS-rendered SPAs with configurable wait times for rendering
- Supports multiple concurrent URL scraping with output format options including markdown, HTML, links, and screenshots
- Includes content filtering options like main-content-only mode to strip navigation and footers, plus tag inclusion/exclusion
- Optional inline question answering via --query flag for tar…
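A scrape invocation might look like this. The `--query` flag is named in the listing; passing multiple URLs reflects the concurrent-scraping feature it describes, but the exact invocation shape is an assumption.

```shell
# Hedged sketch: --query is named in the listing; the multi-URL form is assumed.
# Scrape two pages concurrently and answer a question inline from the content.
firecrawl scrape https://example.com/docs https://example.com/changelog \
  --query "What changed in the latest release?"
```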

firecrawl-search

firecrawl/cli · Productivity


Web search with optional full-page content extraction from results.
- Returns real search results as JSON with optional --scrape flag to fetch complete page markdown for each result, avoiding redundant fetches
- Supports filtering by source type (web, images, news), category (GitHub, research, PDF), time range (past hour/day/week/month/year), location, and country
- Use --limit to control result count and --scrape-formats to customize output formats when extracting full content
- Part of…
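A search-plus-extraction call might be sketched as below. The `--scrape`, `--limit`, and `--scrape-formats` flags are all named in the listing; the query string and value syntax are illustrative assumptions.

```shell
# Search the web, cap at 5 results, and fetch full markdown for each hit
# (flag names per the listing; value syntax is assumed).
firecrawl search "firecrawl rate limits" \
  --limit 5 \
  --scrape \
  --scrape-formats markdown
```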

firecrawl-crawl

firecrawl/cli · Productivity


Bulk extract content from entire websites or site sections with depth and path filtering.
- Crawls pages following links up to configurable depth limits and page counts, with path inclusion/exclusion filters to scope extraction
- Supports async job polling or synchronous waiting with progress display via --wait and --progress flags
- Offers concurrency control, request delays, and JSON output formatting for integration into agent workflows
- Part of a four-step escalation pattern: search…
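A synchronous crawl of a scoped site section might be invoked as below. The `--wait` and `--progress` flags are named in the listing; any depth or path-filter flags are left out because their names are not confirmed.

```shell
# Crawl a docs section synchronously, showing progress until the job finishes
# (--wait and --progress per the listing; other options omitted as unconfirmed).
firecrawl crawl https://example.com/docs --wait --progress
```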

page 2 / 2