Game QA & Testing

perf-profile

Donchitos/Claude-Code-Game-Studios · updated Apr 16, 2026

$ npx skills add https://github.com/Donchitos/Claude-Code-Game-Studios --skill perf-profile
summary

### Perf Profile

  • description: "Structured performance profiling workflow. Identifies bottlenecks, measures against budgets, and generates optimization recommendations with priority rankings."
  • argument-hint: "[system-name or 'full']"
  • agent: performance-analyst
skill.md

Phase 1: Determine Scope

Read the argument:

  • System name → focus profiling on that specific system
  • full → run a comprehensive profile across all systems

Phase 2: Load Performance Budgets

Check for existing performance targets in design docs or CLAUDE.md:

  • Target FPS (e.g., 60fps = 16.67ms frame budget)
  • Memory budget (total and per-system)
  • Load time targets
  • Draw call budgets
  • Network bandwidth limits (if multiplayer)
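The frame-time figure in the first bullet falls straight out of the FPS target; a minimal sketch of the arithmetic (the per-system split is a hypothetical example, not something this skill prescribes):

```python
def frame_budget_ms(target_fps: float) -> float:
    """Total per-frame time budget in milliseconds for a given FPS target."""
    return 1000.0 / target_fps

# 60 fps leaves ~16.67 ms per frame; 30 fps leaves ~33.33 ms.
assert round(frame_budget_ms(60), 2) == 16.67

# A hypothetical per-system split of that budget (fractions are illustrative):
split = {"gameplay": 0.30, "physics": 0.25, "rendering": 0.35, "headroom": 0.10}
budgets = {name: frac * frame_budget_ms(60) for name, frac in split.items()}
```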

Phase 3: Analyze Codebase

CPU Profiling Targets:

  • _process() / Update() / Tick() functions — list all and estimate cost
  • Nested loops over large collections
  • String operations in hot paths
  • Allocation patterns in per-frame code
  • Unoptimized search/sort over game entities
  • Expensive physics queries (raycasts, overlaps) every frame
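The last bullet (expensive physics queries every frame) is often fixed by caching the result and re-querying on an interval. An engine-agnostic sketch, assuming a `raycast` stub in place of a real physics API and an illustrative 0.1 s refresh interval:

```python
import time

def raycast(origin, direction):
    """Stand-in for an expensive engine physics query (illustrative stub)."""
    return {"hit": True, "distance": 4.2}

class CachedRaycaster:
    """Re-run the query at a fixed interval instead of every frame."""
    def __init__(self, interval_s=0.1):
        self.interval_s = interval_s
        self._last_time = float("-inf")
        self._last_result = None

    def query(self, origin, direction, now=None):
        now = time.monotonic() if now is None else now
        if now - self._last_time >= self.interval_s:
            self._last_result = raycast(origin, direction)
            self._last_time = now
        return self._last_result

caster = CachedRaycaster(interval_s=0.1)
r1 = caster.query((0, 0), (1, 0), now=0.0)   # runs the real query
r2 = caster.query((0, 0), (1, 0), now=0.05)  # served from cache
```

Whether a stale result is acceptable depends on the system; that trade-off is exactly what the profiling report should surface.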

Memory Profiling Targets:

  • Large data structures and their growth patterns
  • Texture/asset memory footprint estimates
  • Object pool vs instantiate/destroy patterns
  • Leaked references (objects that should be freed but aren't)
  • Cache sizes and eviction policies
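The pool-vs-instantiate bullet can be illustrated with a minimal free-list pool. This is a generic sketch, not tied to any engine API:

```python
class ObjectPool:
    """Reuse objects instead of allocating/destroying them every frame."""
    def __init__(self, factory, size):
        self._factory = factory
        self._free = [factory() for _ in range(size)]

    def acquire(self):
        # Fall back to a fresh allocation only when the pool is exhausted.
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        self._free.append(obj)

pool = ObjectPool(factory=dict, size=8)
bullet = pool.acquire()   # no allocation: comes from the free list
pool.release(bullet)      # returned for reuse instead of being destroyed
```

In a real engine the `release` path would also reset the object's state before reuse.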

Rendering Targets (if applicable):

  • Draw call estimates
  • Overdraw from overlapping transparent objects
  • Shader complexity
  • Unoptimized particle systems
  • Missing LODs or occlusion culling
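On the missing-LOD bullet: level-of-detail selection usually reduces to distance thresholds, so adding it is cheap relative to the draw-cost it saves. A hypothetical sketch (the threshold values are illustrative, not from the skill):

```python
# Hypothetical distance thresholds (metres) mapping to LOD levels 0..3;
# lower levels mean more detail, higher levels mean cheaper meshes.
LOD_THRESHOLDS = [10.0, 30.0, 80.0]

def select_lod(distance: float) -> int:
    """Pick the cheapest acceptable LOD level for a given camera distance."""
    for level, threshold in enumerate(LOD_THRESHOLDS):
        if distance < threshold:
            return level
    return len(LOD_THRESHOLDS)  # beyond the last threshold: lowest detail

assert select_lod(5.0) == 0
assert select_lod(50.0) == 2
assert select_lod(200.0) == 3
```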

I/O Targets:

  • Save/load performance
  • Asset loading patterns (sync vs async)
  • Network message frequency and size
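On the sync-vs-async bullet: blocking loads stall the frame, while overlapping them hides latency behind the slowest asset. A toy `asyncio` sketch where the sleeps stand in for disk I/O (asset names and costs are illustrative):

```python
import asyncio

async def load_asset(name: str, cost_s: float) -> str:
    """Stand-in for an asynchronous asset read; the sleep models disk I/O."""
    await asyncio.sleep(cost_s)
    return name

async def load_all():
    # Loading three assets concurrently takes ~max(cost), not sum(cost).
    return await asyncio.gather(
        load_asset("level.bin", 0.05),
        load_asset("atlas.png", 0.05),
        load_asset("music.ogg", 0.05),
    )

assets = asyncio.run(load_all())
```

Real engines expose their own async loading APIs; the point the profile should capture is which loads currently block the main thread.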

Phase 4: Generate Profiling Report

## Performance Profile: [System or Full]
Generated: [Date]

### Performance Budgets
| Metric | Budget | Estimated Current | Status |
|--------|--------|-------------------|--------|
| Frame time | [16.67ms] | [estimate] | [OK/WARNING/OVER] |
| Memory | [target] | [estimate] | [OK/WARNING/OVER] |
| Load time | [target] | [estimate] | [OK/WARNING/OVER] |
| Draw calls | [target] | [estimate] | [OK/WARNING/OVER] |

### Hotspots Identified
| # | Location | Issue | Estimated Impact | Fix Effort |
|---|----------|-------|------------------|------------|

### Optimization Recommendations (Priority Order)
1. **[Title]** — [Description]
   - Location: [file:line]
   - Expected gain: [estimate]
   - Risk: [Low/Med/High]
   - Approach: [How to implement]

### Quick Wins (< 1 hour each)
- [Simple optimization 1]

### Requires Investigation
- [Area that needs actual runtime profiling to confirm impact]

Output the report with a summary: top 3 hotspots, estimated headroom vs budget, and recommended next action.


Phase 5: Scope and Timeline Decision

Activate this phase only if any hotspot has a Fix Effort rated Medium (M) or Large (L).

Present the significant-effort items and ask the user to choose one option for each:

  • A) Implement the optimization (proceed with fix now or schedule it)
  • B) Reduce feature scope (run /scope-check [feature] to analyze trade-offs)
  • C) Accept the performance hit and defer to Polish phase (log as known issue)
  • D) Escalate to technical-director for an architectural decision (run /architecture-decision)

If multiple items are deferred to Polish (choice C), record them under ### Deferred to Polish.

This skill is read-only — no files are written. Verdict: COMPLETE — performance profile generated.


Phase 6: Next Steps

  • If bottlenecks require architectural change: run /architecture-decision.
  • If scope reduction is needed: run /scope-check [feature].
  • To schedule optimizations: run /sprint-plan update.

Rules

  • Never optimize without measuring first — gut feelings about performance are unreliable
  • Recommendations must include estimated impact — "make it faster" is not actionable
  • Profile on target hardware, not just development machines
  • Static analysis (this skill) identifies candidates; runtime profiling confirms
general reviews

Ratings

4.8 · 67 reviews
  • Layla Menon · Dec 28, 2024

    Registry listing for perf-profile matched our evaluation — installs cleanly and behaves as described in the markdown.

  • Kofi White · Dec 20, 2024

    Useful defaults in perf-profile — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.

  • Amelia Farah · Dec 20, 2024

    I recommend perf-profile for anyone iterating fast on agent tooling; clear intent and a small, reviewable surface area.

  • Ganesh Mohane · Dec 16, 2024

    perf-profile fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.

  • Sophia Bansal · Dec 12, 2024

    I recommend perf-profile for anyone iterating fast on agent tooling; clear intent and a small, reviewable surface area.

  • Diya Ghosh · Dec 12, 2024

    perf-profile reduced setup friction for our internal harness; good balance of opinion and flexibility.

  • Chen Chen · Dec 8, 2024

    Solid pick for teams standardizing on skills: perf-profile is focused, and the summary matches what you get after install.

  • Kofi Harris · Nov 27, 2024

    I recommend perf-profile for anyone iterating fast on agent tooling; clear intent and a small, reviewable surface area.

  • Nia Martin · Nov 11, 2024

    perf-profile has been reliable in day-to-day use. Documentation quality is above average for community skills.

  • Ira Reddy · Nov 11, 2024

    Solid pick for teams standardizing on skills: perf-profile is focused, and the summary matches what you get after install.

showing 1-10 of 67
