ai-evals

refoundai/lenny-skills · updated Apr 8, 2026

$ npx skills add https://github.com/refoundai/lenny-skills --skill ai-evals
summary

Systematic evaluation framework for AI products using practitioner-driven methodologies.

  • Guides users through understanding what "good" looks like, designing rubrics and test cases, and implementing scoring criteria aligned with actual user needs
  • Emphasizes manual review and error analysis as prerequisites to building meaningful evals, with structured workflows for clustering failure patterns
  • Flags common pitfalls including vague criteria, LLM-as-judge without validation, and Likert scales in place of binary Pass/Fail decisions
skill.md

AI Evals

Help the user create systematic evaluations for AI products using insights from AI practitioners.

How to Help

When the user asks for help with AI evals:

  1. Understand what they're evaluating - Ask what AI feature or model they're testing and what "good" looks like
  2. Help design the eval approach - Suggest rubrics, test cases, and measurement methods (see the rubric sketch after this list)
  3. Guide implementation - Help them think through edge cases, scoring criteria, and iteration cycles
  4. Connect to product requirements - Ensure evals align with actual user needs, not just technical metrics
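
To make the rubric and test-case step concrete, here is a minimal sketch in Python of one way the pieces could fit together. The names (`Criterion`, `TestCase`, `score`, `toy_judge`) and the example criteria are illustrative assumptions, not part of any specific eval library:

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One specific, binary pass/fail check."""
    name: str
    description: str

@dataclass
class TestCase:
    """A single input paired with the criteria its output must satisfy."""
    prompt: str
    criteria: list[Criterion] = field(default_factory=list)

def score(output: str, case: TestCase, judge) -> dict[str, bool]:
    """Apply each criterion to a model output.

    `judge` is any callable (human label lookup, rule, or validated LLM
    judge) that returns True/False for an output and a criterion.
    """
    return {c.name: judge(output, c) for c in case.criteria}

# Example rubric for a hypothetical summarization feature
cases = [
    TestCase(
        prompt="Summarize this support ticket for an on-call engineer.",
        criteria=[
            Criterion("no_hallucinated_facts", "Every claim appears in the ticket."),
            Criterion("under_100_words", "Summary is at most 100 words."),
        ],
    ),
]

def toy_judge(output: str, c: Criterion) -> bool:
    # Stand-in judge: only the length criterion is mechanically checkable here.
    if c.name == "under_100_words":
        return len(output.split()) <= 100
    return True  # placeholder; a real judge would actually check the criterion

print(score("Short summary of the on-call issue.", cases[0], toy_judge))
```

The point of the structure is that each criterion is specific and binary, which keeps later aggregation (pass rates per criterion) meaningful.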

Core Principles

Evals are the new PRD

Brendan Foody: "If the model is the product, then the eval is the product requirement document." Evals define what success looks like in AI products—they're not optional quality checks, they're core specifications.

Evals are a core product skill

Hamel Husain & Shreya Shankar: "Both the chief product officers of Anthropic and OpenAI shared that evals are becoming the most important new skill for product builders." This isn't just for ML engineers—product people need to master this.

The workflow matters

Building good evals involves error analysis, open coding (writing down what's wrong), clustering failure patterns, and creating rubrics. It's a systematic process, not a one-time test.
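
As a minimal sketch of the clustering step, assume open-coded notes have already been written during manual trace review; the notes, the note-to-mode mapping, and the failure-mode names below are invented for illustration:

```python
from collections import Counter

# Open-coded notes: one free-text observation per failing trace.
failure_notes = [
    "cited a policy that doesn't exist",
    "ignored the user's stated deadline",
    "cited a source not in the retrieved docs",
    "answer was correct but buried in boilerplate",
    "ignored the user's stated deadline",
]

# Clustering step: collapse raw notes into named failure modes. In practice
# this mapping comes from a human reading the notes; the dict is a stand-in.
note_to_mode = {
    "cited a policy that doesn't exist": "hallucinated_reference",
    "cited a source not in the retrieved docs": "hallucinated_reference",
    "ignored the user's stated deadline": "ignored_constraint",
    "answer was correct but buried in boilerplate": "verbose_answer",
}

cluster_counts = Counter(note_to_mode[note] for note in failure_notes)
for mode, count in cluster_counts.most_common():
    print(f"{mode}: {count}")  # the most frequent modes become rubric items first
```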

Questions to Help Users

  • "What does 'good' look like for this AI output?"
  • "What are the most common failure modes you've seen?"
  • "How will you know if the model got better or worse?"
  • "Are you measuring what users actually care about?"
  • "Have you manually reviewed enough outputs to understand failure patterns?"

Common Mistakes to Flag

  • Skipping manual review - You can't write good evals without first understanding failure patterns through manual trace analysis
  • Using vague criteria - "The output should be good" isn't an eval; you need specific, measurable criteria
  • LLM-as-judge without validation - If using an LLM to judge, you must validate that judge against human experts (see the sketch after this list)
  • Likert scales over binary - Force Pass/Fail decisions; 1-5 scales produce meaningless averages
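
The last two pitfalls pair naturally: below is a minimal sketch of validating an LLM judge against human experts, using binary Pass/Fail labels rather than a 1-5 scale. The function, the label data, and the choice of metrics are assumptions for illustration:

```python
def judge_agreement(human_labels: list[bool], judge_labels: list[bool]) -> dict[str, float]:
    """Compare an LLM judge's Pass/Fail calls against human expert labels
    on the same traces (lists aligned by index; True = Pass)."""
    assert len(human_labels) == len(judge_labels) and human_labels
    n = len(human_labels)
    agree = sum(h == j for h, j in zip(human_labels, judge_labels))
    # False passes are usually the costly error: the judge says Pass where
    # the human said Fail.
    false_pass = sum((not h) and j for h, j in zip(human_labels, judge_labels))
    return {"agreement_rate": agree / n, "false_pass_rate": false_pass / n}

# Illustrative labels for 8 traces
human = [True, False, True, True, False, False, True, True]
llm_judge = [True, True, True, True, False, False, True, False]
print(judge_agreement(human, llm_judge))
```

If agreement with the human labels is low, tune the judge's prompt or criteria before trusting its scores at scale.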

Deep Dive

For both insights from both guests, see references/guest-insights.md

Related Skills

  • Building with LLMs
  • AI Product Strategy
  • Evaluating New Technology

Discussion

Product Hunt–style comments (not star reviews)
  • No comments yet — start the thread.

Ratings

4.5 · 63 reviews
  • Ira Nasser · Dec 20, 2024

    Keeps context tight: ai-evals is the kind of skill you can hand to a new teammate without a long onboarding doc.

  • Jin Bhatia · Dec 16, 2024

    We added ai-evals from the explainx registry; install was straightforward and the SKILL.md answered most questions upfront.

  • Ira Tandon · Nov 11, 2024

    ai-evals is among the better-maintained entries we tried; worth keeping pinned for repeat workflows.

  • Min Martinez · Nov 7, 2024

    ai-evals fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.

  • Min Rahman · Oct 26, 2024

    ai-evals has been reliable in day-to-day use. Documentation quality is above average for community skills.

  • Noah Tandon · Oct 2, 2024

    Solid pick for teams standardizing on skills: ai-evals is focused, and the summary matches what you get after install.

  • Mateo Reddy · Sep 25, 2024

    Registry listing for ai-evals matched our evaluation — installs cleanly and behaves as described in the markdown.

  • Zaid Thomas · Sep 25, 2024

    Keeps context tight: ai-evals is the kind of skill you can hand to a new teammate without a long onboarding doc.

  • Omar Agarwal · Sep 9, 2024

    Useful defaults in ai-evals — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.

  • Noah Thompson · Sep 9, 2024

    ai-evals has been reliable in day-to-day use. Documentation quality is above average for community skills.

showing 1-10 of 63
