deep-research

affaan-m/everything-claude-code · updated Apr 8, 2026

$npx skills add https://github.com/affaan-m/everything-claude-code --skill deep-research
summary

Produce thorough, cited research reports from multiple web sources using firecrawl and exa MCP tools.

skill.md

Deep Research

Produce thorough, cited research reports from multiple web sources using firecrawl and exa MCP tools.

When to Activate

  • User asks to research any topic in depth
  • Competitive analysis, technology evaluation, or market sizing
  • Due diligence on companies, investors, or technologies
  • Any question requiring synthesis from multiple sources
  • User says "research", "deep dive", "investigate", or "what's the current state of"

MCP Requirements

At least one of:

  • firecrawl: firecrawl_search, firecrawl_scrape, firecrawl_crawl
  • exa: web_search_exa, web_search_advanced_exa, crawling_exa

Both together give the best coverage. Configure in ~/.claude.json or ~/.codex/config.toml.
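A minimal `~/.claude.json` fragment might look like the sketch below. The package names (`firecrawl-mcp`, `exa-mcp-server`) and environment variable keys are assumptions to verify against each provider's docs, and the API key values are placeholders:

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": { "FIRECRAWL_API_KEY": "fc-..." }
    },
    "exa": {
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": { "EXA_API_KEY": "..." }
    }
  }
}
```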

Workflow

Step 1: Understand the Goal

Ask 1-2 quick clarifying questions:

  • "What's your goal — learning, making a decision, or writing something?"
  • "Any specific angle or depth you want?"

If the user says "just research it" — skip ahead with reasonable defaults.

Step 2: Plan the Research

Break the topic into 3-5 research sub-questions. Example:

  • Topic: "Impact of AI on healthcare"
    • What are the main AI applications in healthcare today?
    • What clinical outcomes have been measured?
    • What are the regulatory challenges?
    • What companies are leading this space?
    • What's the market size and growth trajectory?

Step 3: Execute Multi-Source Search

For EACH sub-question, search using available MCP tools:

With firecrawl:

firecrawl_search(query: "<sub-question keywords>", limit: 8)

With exa:

web_search_exa(query: "<sub-question keywords>", numResults: 8)
web_search_advanced_exa(query: "<keywords>", numResults: 5, startPublishedDate: "2025-01-01")

Search strategy:

  • Use 2-3 different keyword variations per sub-question
  • Mix general and news-focused queries
  • Aim for 15-30 unique sources total
  • Prioritize: academic, official, reputable news > blogs > forums
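The aggregation behind this strategy can be sketched in Python. The result shape (`{"url": ..., "title": ...}` dicts) and the domain lists are illustrative assumptions, not the actual MCP return format; a real run would classify sources with better signals than hostname matching:

```python
# Merge search results from multiple sub-question queries:
# dedupe by URL, then order by source-quality tier.
from urllib.parse import urlparse

SOURCE_RANK = {"academic": 0, "official": 1, "news": 2, "blog": 3, "forum": 4}

def classify(url: str) -> str:
    """Crude domain-based tier guess (hypothetical domain lists)."""
    host = urlparse(url).netloc.lower()
    if host.endswith((".edu", ".ac.uk")) or "arxiv.org" in host:
        return "academic"
    if host.endswith(".gov"):
        return "official"
    if any(n in host for n in ("reuters.com", "nytimes.com", "ft.com")):
        return "news"
    if any(f in host for f in ("reddit.com", "news.ycombinator.com")):
        return "forum"
    return "blog"

def merge_results(batches):
    """Dedupe by URL across all sub-question searches, sort best-tier first."""
    seen, merged = set(), []
    for batch in batches:
        for r in batch:
            if r["url"] not in seen:
                seen.add(r["url"])
                merged.append(r)
    merged.sort(key=lambda r: SOURCE_RANK[classify(r["url"])])
    return merged
```

The stable sort keeps the original search order within each tier, so the top of the merged list is what Step 4 deep-reads first.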

Step 4: Deep-Read Key Sources

For the most promising URLs, fetch full content:

With firecrawl:

firecrawl_scrape(url: "<url>")

With exa:

crawling_exa(url: "<url>", tokensNum: 5000)

Read 3-5 key sources in full for depth. Do not rely only on search snippets.

Step 5: Synthesize and Write Report

Structure the report:

# [Topic]: Research Report
*Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]*

## Executive Summary
[3-5 sentence overview of key findings]

## 1. [First Major Theme]
[Findings with inline citations]
- Key point ([Source Name](url))
- Supporting data ([Source Name](url))

## 2. [Second Major Theme]
...

## 3. [Third Major Theme]
...

## Key Takeaways
- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]

## Sources
1. [Title](url) — [one-line summary]
2. ...

## Methodology
Searched [N] queries across web and news. Analyzed [M] sources.
Sub-questions investigated: [list]

Step 6: Deliver

  • Short topics: Post the full report in chat
  • Long reports: Post the executive summary + key takeaways, save full report to a file

Parallel Research with Subagents

For broad topics, use Claude Code's Task tool to parallelize:

Launch 3 research agents in parallel:
1. Agent 1: Research sub-questions 1-2
2. Agent 2: Research sub-questions 3-4
3. Agent 3: Research sub-question 5 + cross-cutting themes

Each agent searches, reads sources, and returns findings. The main session synthesizes into the final report.
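The split above is just contiguous chunking of the sub-question list, which a short sketch makes concrete (the function name and three-agent default are illustrative):

```python
import math

def partition(sub_questions, n_agents=3):
    """Split sub-questions into contiguous chunks, one chunk per agent."""
    size = math.ceil(len(sub_questions) / n_agents)
    return [sub_questions[i:i + size] for i in range(0, len(sub_questions), size)]
```

With five sub-questions and three agents this yields chunks of 2, 2, and 1, matching the example assignment above.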

Quality Rules

  1. Every claim needs a source. No unsourced assertions.
  2. Cross-reference. If only one source says it, flag it as unverified.
  3. Recency matters. Prefer sources from the last 12 months.
  4. Acknowledge gaps. If you couldn't find good info on a sub-question, say so.
  5. No hallucination. If you don't know, say "insufficient data found."
  6. Separate fact from inference. Label estimates, projections, and opinions clearly.
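Rule 2 in particular is mechanical enough to sketch: given a mapping from claims to the sources that support them, anything backed by a single source gets flagged. The data shape here is a hypothetical illustration of the bookkeeping, not part of the skill's actual interface:

```python
def flag_unverified(claims):
    """Per quality rule 2: claims backed by fewer than two distinct sources
    are marked unverified."""
    report = []
    for claim, sources in claims.items():
        status = "verified" if len(set(sources)) >= 2 else "unverified (single source)"
        report.append((claim, status))
    return report
```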

Examples

"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"
"Investigate the competitive landscape for AI code editors"
