firecrawl-crawl
firecrawl/cli · updated Apr 8, 2026
Bulk extract content from entire websites or site sections with depth and path filtering.
- Crawls pages following links up to configurable depth limits and page counts, with path inclusion/exclusion filters to scope extraction
- Supports async job polling or synchronous waiting with progress display via --wait and --progress flags
- Offers concurrency control, request delays, and JSON output formatting for integration into agent workflows
- Part of the workflow escalation pattern: search → scrape → map → crawl → interact
firecrawl crawl
Bulk extract content from a website. Crawls pages by following links, up to a configurable depth and page limit.
When to use
- You need content from many pages on a site (e.g., all of /docs/)
- You want to extract an entire site section
- Step 4 in the workflow escalation pattern: search → scrape → map → crawl → interact
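The escalation pattern above can be sketched as a sequence of CLI calls. The search, scrape, and map subcommand names are assumed from the pattern and the See also section rather than confirmed here, and the guard makes this a safe no-op on machines without the CLI installed:

```shell
# Illustrative sketch of the escalation pattern; subcommand names for the
# earlier steps are assumptions, not verified against the installed CLI.
if command -v firecrawl >/dev/null 2>&1; then
  firecrawl search "example docs"               # 1. find candidate pages
  firecrawl scrape "https://example.com/docs"   # 2. extract a single page
  firecrawl map "https://example.com"           # 3. enumerate site URLs
  firecrawl crawl "https://example.com" --include-paths /docs --wait   # 4. bulk extract
else
  echo "firecrawl not installed; skipping"
fi
```

Each step costs more credits and time than the one before it, so escalate only when the cheaper step does not yield what you need.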
Quick start
# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json
# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json
# Check status of a running crawl
firecrawl crawl <job-id>
Options
| Option | Description |
|---|---|
| `--wait` | Wait for the crawl to complete before returning |
| `--progress` | Show progress while waiting |
| `--limit <n>` | Max pages to crawl |
| `--max-depth <n>` | Max link depth to follow |
| `--include-paths <paths>` | Only crawl URLs matching these paths |
| `--exclude-paths <paths>` | Skip URLs matching these paths |
| `--delay <ms>` | Delay between requests, in milliseconds |
| `--max-concurrency <n>` | Max parallel crawl workers |
| `--pretty` | Pretty-print JSON output |
| `-o, --output <path>` | Output file path |
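The throttling and scoping options from the table combine naturally into a single polite, scoped crawl. A sketch (the URL, paths, and values are illustrative, and the guard keeps it a no-op where the CLI is absent):

```shell
# Sketch: a scoped, throttled crawl of a docs section.
# All values here are illustrative; tune limits to your target site.
if command -v firecrawl >/dev/null 2>&1; then
  firecrawl crawl "https://example.com" \
    --include-paths /docs \
    --exclude-paths /docs/changelog \
    --limit 100 \
    --max-depth 2 \
    --max-concurrency 2 \
    --delay 500 \
    --wait --progress \
    -o .firecrawl/docs-crawl.json
else
  echo "firecrawl not installed; skipping"
fi
```

Low concurrency plus a per-request delay keeps load on the target site modest, while the limit and depth caps bound credit spend.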
Tips
- Always use `--wait` when you need the results immediately. Without it, crawl returns a job ID for async polling.
- Use `--include-paths` to scope the crawl; don't crawl an entire site when you only need one section.
- Crawl consumes credits per page. Check `firecrawl credit-usage` before large crawls.
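Once a crawl has written its output file, the JSON can be post-processed directly. A minimal sketch, assuming the result holds a top-level `data` array of pages, each with a `metadata.url` field; this schema is an assumption, so inspect your own `.firecrawl/crawl.json` before relying on it (the sample file below stands in for a real crawl result):

```shell
# ASSUMPTION: crawl output looks like {"data": [{"markdown": ..., "metadata": {"url": ...}}, ...]}
# Write a small sample in that assumed shape, then list the crawled URLs.
cat > /tmp/crawl-sample.json <<'EOF'
{"data": [
  {"markdown": "# One", "metadata": {"url": "https://example.com/docs/one"}},
  {"markdown": "# Two", "metadata": {"url": "https://example.com/docs/two"}}
]}
EOF
python3 - <<'EOF'
import json
with open("/tmp/crawl-sample.json") as f:
    result = json.load(f)
for page in result.get("data", []):
    print(page["metadata"]["url"])
EOF
```

Swap the sample path for your real output file to feed crawled URLs into the next stage of an agent workflow.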
See also
- firecrawl-scrape — scrape individual pages
- firecrawl-map — discover URLs before deciding to crawl
- firecrawl-download — download site to local files (uses map + scrape)
Ratings
4.6 ★ average across 37 reviews
- ★★★★★ Shikha Mishra · Dec 16, 2024
  firecrawl-crawl has been reliable in day-to-day use. Documentation quality is above average for community skills.
- ★★★★★ Mia Liu · Dec 16, 2024
  firecrawl-crawl reduced setup friction for our internal harness; good balance of opinion and flexibility.
- ★★★★★ Henry Srinivasan · Nov 7, 2024
  Keeps context tight: firecrawl-crawl is the kind of skill you can hand to a new teammate without a long onboarding doc.
- ★★★★★ Henry Iyer · Oct 26, 2024
  firecrawl-crawl has been reliable in day-to-day use. Documentation quality is above average for community skills.
- ★★★★★ Kaira Wang · Oct 26, 2024
  Useful defaults in firecrawl-crawl — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.
- ★★★★★ Dhruvi Jain · Oct 14, 2024
  Registry listing for firecrawl-crawl matched our evaluation — installs cleanly and behaves as described in the markdown.
- ★★★★★ Oshnikdeep · Sep 21, 2024
  firecrawl-crawl reduced setup friction for our internal harness; good balance of opinion and flexibility.
- ★★★★★ Layla Yang · Sep 9, 2024
  Registry listing for firecrawl-crawl matched our evaluation — installs cleanly and behaves as described in the markdown.
- ★★★★★ Layla Park · Sep 5, 2024
  Useful defaults in firecrawl-crawl — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.
- ★★★★★ Layla Haddad · Aug 28, 2024
  firecrawl-crawl fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.