agent-evaluation
sickn33/antigravity-awesome-skills · updated Apr 8, 2026
Framework for testing LLM agents across behavioral, capability, and reliability dimensions with production-focused evaluation patterns.
- Covers five core evaluation areas: agent testing, benchmark design, capability assessment, reliability metrics, and regression testing
- Emphasizes statistical test evaluation (multiple runs with distribution analysis) and behavioral contract testing over single-run or string-matching approaches
- Includes adversarial testing patterns and guards against common anti-patterns such as single-run testing and output string matching
Agent Evaluation
You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.
You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't a 100% test pass rate; it's a statistically sound picture of how the agent behaves across repeated runs.
Capabilities
- agent-testing
- benchmark-design
- capability-assessment
- reliability-metrics
- regression-testing
Requirements
- testing-fundamentals
- llm-fundamentals
Patterns
Statistical Test Evaluation
Run tests multiple times and analyze result distributions
Behavioral Contract Testing
Define and test agent behavioral invariants
Adversarial Testing
Actively try to break agent behavior
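The first two patterns can be combined in a small harness: run the same task many times, gate on the pass-rate distribution rather than a single outcome, and check behavioral invariants on every run. This is a minimal sketch, not this skill's actual API; `run_agent`, `never_reveals_system_prompt`, and the 0.8 threshold are illustrative assumptions.

```python
from typing import Callable


def evaluate_statistically(
    run_agent: Callable[[str], str],
    task: str,
    check: Callable[[str], bool],
    n_runs: int = 10,
    min_pass_rate: float = 0.8,  # assumed threshold; tune per task
) -> dict:
    """Run the same task repeatedly and judge the pass-rate
    distribution, not a single nondeterministic outcome."""
    results = [check(run_agent(task)) for _ in range(n_runs)]
    pass_rate = sum(results) / n_runs
    return {"pass_rate": pass_rate, "passed": pass_rate >= min_pass_rate, "runs": n_runs}


# Behavioral contract: an invariant that must hold on EVERY run,
# regardless of how the task-level check scores the answer.
def never_reveals_system_prompt(output: str) -> bool:
    return "SYSTEM PROMPT" not in output.upper()
```

In practice the contract checks run inside `check` alongside the task-specific assertion, so a single invariant violation fails the run even when the answer is otherwise correct.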
Anti-Patterns
❌ Single-Run Testing
❌ Only Happy Path Tests
❌ Output String Matching
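As an alternative to exact output string matching, responses can be scored against criteria the answer must satisfy, so differently worded but equally correct outputs still pass. A hypothetical example; `check_refund_answer` and its three criteria are invented for illustration.

```python
import re


def check_refund_answer(output: str) -> list[str]:
    """Return the list of failed criteria; an empty list means pass.

    Checks properties of the answer rather than comparing it to one
    canonical string, so paraphrases are not penalized."""
    failures = []
    if not re.search(r"\b30\s*days?\b", output):
        failures.append("missing the 30-day window")
    if "refund" not in output.lower():
        failures.append("does not mention refunds")
    if len(output) > 2000:
        failures.append("answer too long for a support reply")
    return failures
```

Returning the failed criteria (instead of a bare boolean) also makes regression diffs readable: a test report can show *which* property broke, not just that something did.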
⚠️ Sharp Edges
| Issue | Severity | Solution |
|---|---|---|
| Agent scores well on benchmarks but fails in production | high | Bridge benchmark and production evaluation |
| Same test passes sometimes, fails other times | high | Handle flaky tests in LLM agent evaluation |
| Agent optimized for metric, not actual task | medium | Multi-dimensional evaluation to prevent gaming |
| Test data accidentally used in training or prompts | critical | Prevent data leakage in agent evaluation |
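The critical "test data leaked into prompts" edge can be guarded with a verbatim-overlap check run before each evaluation. This is a naive sketch (substring matching only); `leaked_cases` is a hypothetical helper, and real leakage detection may also need text normalization or n-gram overlap.

```python
def leaked_cases(test_inputs: list[str], prompt_corpus: str) -> list[str]:
    """Flag test inputs whose text appears verbatim in the system
    prompt or few-shot examples shipped with the agent.

    Any hit means the test no longer measures generalization: the
    agent may simply be pattern-matching against its own prompt."""
    corpus = prompt_corpus.lower()
    return [t for t in test_inputs if t.lower().strip() in corpus]
```

Running this as a pre-flight assertion in the test suite (fail the whole run on any hit) is cheaper than discovering inflated scores after the fact.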
Related Skills
Works well with: multi-agent-orchestration, agent-communication, autonomous-agents
When to Use
Use this skill when evaluating LLM agents before or after deployment: designing benchmarks, building behavioral regression suites, running capability assessments, or tracking reliability metrics across repeated runs.