Productivity

skill-comply

affaan-m/everything-claude-code · updated Apr 8, 2026

$ npx skills add https://github.com/affaan-m/everything-claude-code --skill skill-comply
summary

Measures whether coding agents actually follow skills, rules, or agent definitions.

SKILL.md

skill-comply: Automated Compliance Measurement

Measures whether coding agents actually follow skills, rules, or agent definitions by:

  1. Auto-generating expected behavioral sequences (specs) from any .md file
  2. Auto-generating scenarios with decreasing prompt strictness (supportive → neutral → competing)
  3. Running claude -p and capturing tool call traces via stream-json (see the first sketch after this list)
  4. Classifying tool calls against spec steps using LLM (not regex)
  5. Checking temporal ordering deterministically (see the second sketch after this list)
  6. Generating self-contained reports with spec, prompts, and timelines
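
As a minimal sketch of the capture side of steps 3-4 (assumed event shapes, not the skill's actual implementation), a headless claude run can be traced by pulling tool_use blocks out of the assistant events in the stream-json output:

# Capture sketch: assumes one JSON event per output line, and that
# assistant events embed tool_use blocks in message.content.
import json
import subprocess

def run_and_trace(prompt: str) -> list[dict]:
    """Run a headless Claude session and return its tool calls in order."""
    proc = subprocess.run(
        ["claude", "-p", prompt, "--output-format", "stream-json", "--verbose"],
        capture_output=True, text=True, check=True,
    )
    calls = []
    for line in proc.stdout.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") != "assistant":
            continue
        for block in event.get("message", {}).get("content", []):
            if block.get("type") == "tool_use":
                calls.append({"name": block["name"], "input": block.get("input", {})})
    return calls

Step 5 then needs no LLM at all. A sketch, assuming each captured call has already been labeled in step 4 with the index of the spec step it satisfies (or None):

# Deterministic ordering check over step-4 classification labels.
def ordering_violations(step_labels: list[int | None]) -> list[tuple[int, int]]:
    """Return (position, step) pairs where a spec step fires out of order."""
    violations = []
    highest_seen = -1
    for pos, step in enumerate(step_labels):
        if step is None:  # call matched no spec step; irrelevant to ordering
            continue
        if step < highest_seen:  # an earlier spec step after a later one
            violations.append((pos, step))
        highest_seen = max(highest_seen, step)
    return violations

For example, ordering_violations([0, 1, 1, 0, 2]) flags the step-0 call at position 3, since step 1 has already fired.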

Supported Targets

  • Skills (skills/*/SKILL.md): Workflow skills like search-first, TDD guides
  • Rules (rules/common/*.md): Mandatory rules like testing.md, security.md, git-workflow.md
  • Agent definitions (agents/*.md): Whether an agent gets invoked when expected (internal workflow verification not yet supported)

When to Activate

  • User runs /skill-comply <path>
  • User asks "is this rule actually being followed?"
  • After adding new rules/skills, to verify agent compliance
  • Periodically as part of quality maintenance

Usage

# Full run
uv run python -m scripts.run ~/.claude/rules/common/testing.md

# Dry run (no cost, spec + scenarios only)
uv run python -m scripts.run --dry-run ~/.claude/skills/search-first/SKILL.md

# Custom models
uv run python -m scripts.run --gen-model haiku --model sonnet <path>

Key Concept: Prompt Independence

Measures whether a skill/rule is still followed when the prompt doesn't explicitly reinforce it, and even when the prompt actively competes with it.
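
For illustration only, a testing-first rule might be probed at the three strictness levels with prompts like these (the wording is invented; the skill auto-generates its own):

# Hypothetical strictness ladder for a testing.md rule (invented wording).
SCENARIOS = {
    "supportive": "Add a slugify() helper. Per our testing rule, write the tests first.",
    "neutral": "Add a slugify() helper to utils.py.",
    "competing": "Quickly add a slugify() helper; skip the ceremony, I just need it working.",
}

A rule with high prompt independence is followed even on the competing prompt.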

Report Contents

Reports are self-contained and include:

  1. Expected behavioral sequence (auto-generated spec)
  2. Scenario prompts (what was asked at each strictness level)
  3. Compliance scores per scenario (a hypothetical scoring sketch follows this list)
  4. Tool call timelines with LLM classification labels
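
As a purely hypothetical sketch of the per-scenario score (the skill's actual metric may differ), one plausible shape is coverage of required spec steps minus an ordering penalty:

# Hypothetical scoring: step coverage with an ordering penalty.
# Not necessarily the skill's actual formula.
def compliance_score(matched_steps: set[int], total_steps: int,
                     order_violations: int) -> float:
    if total_steps == 0:
        return 1.0
    coverage = len(matched_steps) / total_steps
    return max(0.0, coverage - 0.1 * order_violations)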

Advanced (optional)

For users familiar with hooks, reports also include hook promotion recommendations for steps with low compliance. This is informational — the main value is the compliance visibility itself.
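
For the curious, a promoted hook might look like the following sketch, assuming Claude Code's documented PreToolUse contract: the hook command reads a JSON payload (tool_name, tool_input) from stdin, and exit code 2 blocks the call with stderr fed back to the agent. The gate itself is invented for illustration:

#!/usr/bin/env python3
# Invented example: deterministically enforce "tests pass before git push",
# a step that might score low on compliance. Register as a PreToolUse hook.
import json
import subprocess
import sys

payload = json.load(sys.stdin)
command = payload.get("tool_input", {}).get("command", "")

if payload.get("tool_name") == "Bash" and "git push" in command:
    if subprocess.run(["pytest", "-q"]).returncode != 0:
        print("Blocked: run the test suite to green before pushing.", file=sys.stderr)
        sys.exit(2)  # exit code 2 blocks the tool call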

general reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra · Oct 10, 2024

    skill-comply is among the better-maintained entries we tried; worth keeping pinned for repeat workflows.

  • Piyush G · Sep 9, 2024

    Keeps context tight: skill-comply is the kind of skill you can hand to a new teammate without a long onboarding doc.

  • Chaitanya Patil · Aug 8, 2024

    Registry listing for skill-comply matched our evaluation — installs cleanly and behaves as described in the markdown.

  • Sakshi Patil · Jul 7, 2024

    skill-comply reduced setup friction for our internal harness; good balance of opinion and flexibility.

  • Ganesh Mohane · Jun 6, 2024

    I recommend skill-comply for anyone iterating fast on agent tooling; clear intent and a small, reviewable surface area.

  • Oshnikdeep · May 5, 2024

    Useful defaults in skill-comply — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.

  • Dhruvi Jain · Apr 4, 2024

    skill-comply has been reliable in day-to-day use. Documentation quality is above average for community skills.

  • Rahul Santra · Mar 3, 2024

    Solid pick for teams standardizing on skills: skill-comply is focused, and the summary matches what you get after install.

  • Pratham Ware · Feb 2, 2024

    We added skill-comply from the explainx registry; install was straightforward and the SKILL.md answered most questions upfront.

  • Yash Thakker · Jan 1, 2024

    skill-comply fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.