Pre-mortem review is a decades-old planning technique: assume failure, work backward, and surface risks early before you sink cost into the wrong design. Gary Klein described it for teams; in product engineering circles the tiger / paper tiger / elephant vocabulary—often associated with Shreyas Doshi’s writing on product risk—is a compact way to sort real threats from noise and taboo topics.
For coding agents, the hard part is not the metaphor—it is discipline. The premortem skill in parcadei/continuous-claude-v3 bakes in verification rules so the model does not treat every suspicious line as a crisis.
Canonical registry listing: premortem — ExplainX.
## TL;DR
| Question | Short answer |
|---|---|
| What is it? | An agent skill that runs a structured pre-mortem with quick and deep depths and YAML-shaped outputs. |
| Why it matters | Forces two-pass reasoning: candidates → verified risks with mitigation_checked evidence. |
| Install | npx skills add https://github.com/parcadei/continuous-claude-v3 --skill premortem |
| Browse | explainx.ai/skills/.../premortem |
| Best for | PRs, RFCs, before large refactors, and any workflow where false-positive “security theater” wastes time. |
## Why “verify before you flag” belongs in a skill
Large language models are good at sounding alarmed. Without guardrails, an agent can:
- Flag a hardcoded path without checking for an `exists()` guard three lines later.
- Call something “missing error handling” without tracing the call path.
- Confuse out-of-scope work with an implementation bug.
The premortem skill encodes an explicit anti-pattern list and a verification checklist (context ±20 lines, fallback branches, scope, dev-only code). If a check is unknown, the instruction set tells the model not to promote the finding to a tiger. That is harness behavior—policy at the tooling layer, not vibes in a one-off chat.
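To make that policy concrete, a harness could gate findings mechanically. A minimal sketch, assuming a simple three-valued check result (the check names and data shape here are hypothetical, not taken from the skill's actual schema):

```python
# Hypothetical gate: a finding becomes a "tiger" only when every
# verification check conclusively found no mitigation. An unknown
# answer keeps it a candidate, mirroring the skill's rule that
# unverified suspicions must not be promoted.

CHECKS = ("context_20_lines", "fallback_branches", "in_scope", "not_dev_only")

def classify(candidate: dict) -> str:
    results = [candidate.get("checks", {}).get(c) for c in CHECKS]
    if any(r is True for r in results):
        return "paper_tiger"   # a mitigation was found somewhere
    if any(r is None for r in results):
        return "candidate"     # at least one check is unknown -> do not promote
    return "tiger"             # every check confirmed: no mitigation exists

risk = {"summary": "hardcoded path", "checks": {"context_20_lines": None}}
print(classify(risk))  # unknown check keeps it a candidate
```

The point of the sketch is the ordering: evidence of a mitigation wins outright, and “unknown” is treated as “not yet a tiger,” never as silent confirmation.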
## The two-pass workflow (how to read the SKILL.md)
- Pass 1 (candidates): collect `potential_risks` using normal scanning (pattern match, intuition, diff review).
- Pass 2 (verification): for each candidate, decide `tiger` · `paper_tiger` · `false_alarm`.
True tigers require a filled `mitigation_checked` field: what mitigations you looked for and did not find. If you cannot write that line with concrete evidence, the finding stays a candidate or becomes a false alarm.
Paper tigers get the opposite treatment: cite where the mitigation lives (file:lines).
Elephants capture the awkward, under-discussed risks—often process or political, not a missing try/catch.
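The verdict rules above can be pictured as a small validator over the YAML-shaped records. A hedged sketch, assuming illustrative field names (`verdict`, `mitigation_site`); only `mitigation_checked` and the three verdict labels come from the skill's description:

```python
# Hypothetical post-processor for Pass 2 output: a tiger with no
# concrete mitigation_checked evidence is demoted back to a candidate,
# and a paper tiger must cite where the mitigation lives (file:lines).
import re

def finalize(finding: dict) -> dict:
    if finding.get("verdict") == "tiger" and not finding.get("mitigation_checked"):
        finding["verdict"] = "candidate"   # no evidence -> stays a candidate
    if finding.get("verdict") == "paper_tiger":
        cite = finding.get("mitigation_site", "")
        # require a file:lines citation, e.g. "api/client.py:40-44"
        assert re.match(r".+:\d+(-\d+)?$", cite), "paper tigers must cite file:lines"
    return finding

f = finalize({"verdict": "tiger", "summary": "no retry on 5xx"})
print(f["verdict"])  # demoted to "candidate"
```

This is the whole value of the two-pass shape: the record format itself refuses to carry an unevidenced tiger forward.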
## Slash-style usage (from the upstream skill)
The packaged workflow expects intentful depth:
- `/premortem`: auto-detect context; offer quick vs deep.
- `/premortem quick`: plans, PRs, localized edits.
- `/premortem deep`: before a big implementation push.
- `/premortem <file>`: focus a plan or module.
Exact slash wiring depends on your agent host (Claude Code, Cursor, etc.); the value is the checklist + output schema, not the literal command prefix.
## Install and pin
From the ExplainX listing:
```shell
npx skills add https://github.com/parcadei/continuous-claude-v3 --skill premortem
```
For team repos, pair installs with a committed skills-lock.json so everyone gets the same instruction pack revision—see our skills-lock.json primer.
## Related on ExplainX
- What are agent skills? — mental model and packaging
- skills-lock.json: reproducible agent skills — locking installs
- Agent skills security — review before you trust new skills
- Context engineering and clean prompts — structured context patterns
## Sources
- Registry: premortem — explainx.ai
- Upstream repo: github.com/parcadei/continuous-claude-v3
- Skills CLI (ecosystem): github.com/vercel-labs/skills
- Pre-mortem technique: Gary Klein — Performing a Project Premortem (HBR)
Skill contents and CLI flags change over time. Confirm behavior against parcadei/continuous-claude-v3 and your installed npx skills version before relying on this in production workflows.