
Claude Code /ultrareview: a cloud “bug-hunting fleet” before you merge (research preview)

Anthropic’s /ultrareview runs a multi-agent code review in a remote sandbox—verified findings, not just nits. Official docs: v2.1.86+, Pro/Max get three free runs through May 5, 2026, then extra usage (~$5–$20). How it differs from /review, when to use it, and how ExplainX thinks about the merge gate.

5 min read · ExplainX Team
Tags: Claude Code, Anthropic, Code review, AI agents, Developer tools



In April 2026, Anthropic promoted /ultrareview through developer channels (including the ClaudeDevs account on X). The one-line pitch matches the docs: run a deep, multi-agent code review in the cloud so verified bug candidates land back in the CLI or Desktop while you keep working. This post is a builder-friendly summary of the official behavior, cost, and when the tradeoffs make sense—plus ExplainX’s take on the merge gate.

Source of truth: Find bugs with ultrareview — Claude Code Docs. Features, pricing, and availability are explicitly subject to change; always re-check that page before budgeting or promising behavior to a team.


What /ultrareview does (per Anthropic)

Per the ultrareview documentation:

  1. Cloud + fleet — The command uses Claude Code on the web infrastructure. Multiple reviewer agents explore the branch or PR in parallel in a remote sandbox.
  2. Verification — Reported findings are independently reproduced and verified, so the output aims at real bugs rather than style suggestions.
  3. No local crunch — The review does not tie up your machine; your session stays free for other work.
  4. User-initiated only — Nothing starts automatically; you explicitly run /ultrareview.

Invocation:

  • /ultrareview — reviews the diff between the current branch and the default branch, including uncommitted and staged work; the CLI bundles repo state for the sandbox.
  • /ultrareview 1234 — PR mode: the sandbox clones GitHub PR #1234 (requires a github.com remote on the repo). If the repo is too large to bundle locally, the tool prompts you to push a branch and use PR mode instead.
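A quick sketch of the mode choice above, assuming you pick based on your remote URL (the function name and the example URLs are ours, for illustration; in a real checkout you would feed it `git remote get-url origin`):

```shell
# Hypothetical helper: decide which /ultrareview invocation applies.
# PR mode requires a github.com remote per the docs; anything else
# falls back to plain branch mode.
ultrareview_mode() {
  case "$1" in
    *github.com*) echo "pr" ;;      # run: /ultrareview <pr-number>
    *)            echo "branch" ;;  # run: /ultrareview
  esac
}

ultrareview_mode "git@github.com:acme/app.git"   # -> pr
ultrareview_mode "https://gitlab.com/acme/app"   # -> branch
```

Remember that the slash commands themselves run inside the Claude Code session, not in your shell.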

While it runs: the docs cite roughly 5–10 minutes as typical, and the review runs as a background task. Use /tasks to list running and completed reviews, open details, or stop a run (stopping archives the cloud session; partial results are not returned). Findings arrive as notifications with file, line, and explanation, so you can ask Claude to fix them in-thread.

Setup: run claude update (per the docs and social posts), then /login with a Claude.ai account if you have been on API-key-only auth.


/review (local) vs /ultrareview (cloud)

The docs include a direct comparison; we repeat it here because the decision is the whole product story:

|          | /review                  | /ultrareview                              |
|----------|--------------------------|-------------------------------------------|
| Runs     | locally in your session  | remote cloud sandbox                      |
| Depth    | single-pass review       | multi-agent + verification                |
| Time     | seconds to a few minutes | ~5–10 minutes                             |
| Cost     | normal plan usage        | free trials (see below), then extra usage |
| Best for | fast iteration           | substantial pre-merge changes             |

Plain language: inner loop → /review. Before you merge something high-risk (auth, migrations, security-sensitive paths) → consider /ultrareview if the scope justifies the time and money.
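That routing rule can be sketched as a small heuristic. This is our own illustration, not anything Anthropic ships: the 500-changed-line threshold and the risky-path flag are assumptions you would tune per team (changed-line counts could come from something like `git diff --stat origin/main`):

```shell
# Hedged heuristic (ours, not Anthropic's): route big or risky diffs to
# /ultrareview, everything else to local /review.
pick_review() {
  changed_lines=$1   # total changed lines in the diff
  touches_risky=$2   # 1 if the diff hits auth/migrations/security paths
  if [ "$touches_risky" -eq 1 ] || [ "$changed_lines" -gt 500 ]; then
    echo "/ultrareview"
  else
    echo "/review"
  fi
}

pick_review 40 0     # small, low-risk diff -> /review
pick_review 1200 0   # large diff -> /ultrareview
pick_review 40 1     # small but security-sensitive -> /ultrareview
```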


Pricing, free runs, and extra usage

From the same official table (April 2026 text):

  • Pro and Max: 3 free runs through May 5, 2026; these are a one-time allotment per account, not monthly, and they expire on that date.
  • Team and Enterprise: no free runs in the published table; reviews bill as extra usage.
  • After free runs (or after the period ends), each review is extra usage; docs state a typical range of about $5–$20 depending on the size of the change—treat that as an order-of-magnitude, not a quote.
  • Extra usage must be enabled to launch a paid review; the CLI can block and point to billing or /extra-usage.
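For budgeting, the docs' ~$5–$20 range supports only a back-of-envelope estimate. A minimal sketch, where runs_per_month is entirely your own guess:

```shell
# Back-of-envelope monthly budget from the docs' ~$5-$20 per-review range.
# Order-of-magnitude only, not a quote; runs_per_month is your estimate.
runs_per_month=8
low=5
high=20
echo "estimated monthly extra usage: \$$((runs_per_month * low))-\$$((runs_per_month * high))"
```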

GitHub-integrated Code Review (research preview for Team and Enterprise, PR comments from agent teams) is a related but separate product surface from the CLI /ultrareview command; do not conflate the two when planning org rollout.


Not available in these setups

The ultrareview page states the feature is not available when using Claude Code through Amazon Bedrock, Google Cloud Vertex AI, or Microsoft Foundry, and not for organizations that have Zero Data Retention enabled. If your compliance posture requires one of those, assume /ultrareview is off the table until vendor docs say otherwise.
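A preflight check along those lines might look like the sketch below. CLAUDE_CODE_USE_BEDROCK and CLAUDE_CODE_USE_VERTEX are Claude Code's documented routing environment variables; ORG_ZERO_DATA_RETENTION is a hypothetical flag you would wire to your own policy source, and a Foundry check is omitted because we have not verified its variable name:

```shell
# Sketch: flag setups where the docs say /ultrareview is off the table.
# ORG_ZERO_DATA_RETENTION is a placeholder for your org's own ZDR flag.
ultrareview_available() {
  if [ -n "$CLAUDE_CODE_USE_BEDROCK" ] || [ -n "$CLAUDE_CODE_USE_VERTEX" ] \
     || [ "$ORG_ZERO_DATA_RETENTION" = "1" ]; then
    echo "unavailable"
  else
    echo "check docs"   # not guaranteed; re-verify against the official page
  fi
}
```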


ExplainX: how we’d use (and not mythologize) it

ExplainX teaches agent skills, MCP, and courses—so we care about where automated review fits in a real SDLC:

  1. It’s a gate, not a culture — /ultrareview is strong for broad coverage and verified candidates on a big diff; it does not replace domain experts, threat modeling, or regulatory sign-off when those apply.
  2. Match spend to risk — Save free and paid runs for changes where a missed bug is expensive (payments, auth, data migrations), not for typos in copy.
  3. Stack with repo habits — Teams already encode “how we review” in CI, CLAUDE.md, and skills (see gstack / slash stacks for an extreme open-source example). Use /review liberally; use /ultrareview when the diff deserves cloud depth.
  4. Security remains layered — For untrusted instruction surfaces in skills and MCP, read agent skills and security; ultrareview reviews your code—it is not a substitute for supply-chain controls on third-party packages and tools.

Read next: Claude Code, Pro, and the April 2026 pricing story · MCP explainer · Claude for Work hub

Research preview features change. Verify version (claude update), official ultrareview docs, and your plan’s extra usage settings before you rely on this in production release processes.
