agent-evaluation

davila7/claude-code-templates · updated Apr 8, 2026

$npx skills add https://github.com/davila7/claude-code-templates --skill agent-evaluation
0 commentsdiscussion
summary

Behavioral testing and reliability metrics for LLM agents, catching production failures benchmarks miss.

  • Covers five core evaluation areas: agent testing, benchmark design, capability assessment, reliability metrics, and regression testing
  • Emphasizes statistical test evaluation (multiple runs, result distribution analysis) and behavioral contract testing over single-run or string-matching approaches
  • Includes adversarial testing patterns to actively probe agent failure modes and ident
skill.md

Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't 100% test pass rate—it

Capabilities

  • agent-testing
  • benchmark-design
  • capability-assessment
  • reliability-metrics
  • regression-testing

Requirements

  • testing-fundamentals
  • llm-fundamentals

Patterns

Statistical Test Evaluation

Run tests multiple times and analyze result distributions

Behavioral Contract Testing

Define and test agent behavioral invariants

Adversarial Testing

Actively try to break agent behavior

Anti-Patterns

❌ Single-Run Testing

❌ Only Happy Path Tests

❌ Output String Matching

⚠️ Sharp Edges

Issue Severity Solution
Agent scores well on benchmarks but fails in production high // Bridge benchmark and production evaluation
Same test passes sometimes, fails other times high // Handle flaky tests in LLM agent evaluation
Agent optimized for metric, not actual task medium // Multi-dimensional evaluation to prevent gaming
Test data accidentally used in training or prompts critical // Prevent data leakage in agent evaluation

Related Skills

Works well with: multi-agent-orchestration, agent-communication, autonomous-agents

Discussion

Product Hunt–style comments (not star reviews)
  • No comments yet — start the thread.
general reviews

Ratings

4.568 reviews
  • Aditi Rahman· Dec 28, 2024

    We added agent-evaluation from the explainx registry; install was straightforward and the SKILL.md answered most questions upfront.

  • Kabir Flores· Dec 24, 2024

    Registry listing for agent-evaluation matched our evaluation — installs cleanly and behaves as described in the markdown.

  • Pratham Ware· Dec 16, 2024

    agent-evaluation fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.

  • Li Ramirez· Dec 12, 2024

    agent-evaluation fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.

  • Emma Ndlovu· Dec 12, 2024

    Keeps context tight: agent-evaluation is the kind of skill you can hand to a new teammate without a long onboarding doc.

  • Zara Johnson· Dec 4, 2024

    agent-evaluation has been reliable in day-to-day use. Documentation quality is above average for community skills.

  • Chinedu Ndlovu· Dec 4, 2024

    agent-evaluation reduced setup friction for our internal harness; good balance of opinion and flexibility.

  • Fatima Iyer· Nov 23, 2024

    I recommend agent-evaluation for anyone iterating fast on agent tooling; clear intent and a small, reviewable surface area.

  • Chen Khanna· Nov 19, 2024

    Keeps context tight: agent-evaluation is the kind of skill you can hand to a new teammate without a long onboarding doc.

  • Emma Ramirez· Nov 15, 2024

    Useful defaults in agent-evaluation — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.

showing 1-10 of 68

1 / 7