team-polish
Donchitos/Claude-Code-Game-Studios · updated Apr 16, 2026
### Team Polish
- description: "Orchestrate the polish team: coordinates performance-analyst, technical-artist, sound-designer, and qa-tester to optimize, polish, and harden a feature or area for release quality."
- argument-hint: "[feature or area to polish]"
- allowed-tools: Read, Glob, Grep, Write, Edit, Bash, Task, AskUserQuestion, TodoWrite
If no argument is provided, output usage guidance and exit without spawning any agents:

Usage:
`/team-polish [feature or area]` — specify the feature or area to polish (e.g., `combat`, `main menu`, `inventory system`, `level-1`). Do not use AskUserQuestion here; output the guidance directly.
When this skill is invoked with an argument, orchestrate the polish team through a structured pipeline.
Decision Points: At each phase transition, use AskUserQuestion to present
the user with the subagent's proposals as selectable options. Write the agent's
full analysis in conversation, then capture the decision with concise labels.
The user must approve before moving to the next phase.
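As a sketch of what one of these decision points might look like, an AskUserQuestion call after Phase 1 could be shaped roughly like the following. The option labels, feature name, and exact field names are illustrative assumptions, not prescribed by this skill; the schema may vary by Claude Code version.

```json
{
  "questions": [
    {
      "question": "performance-analyst proposes three optimization tracks for 'combat'. Which should Phase 2 prioritize?",
      "header": "Phase 2 scope",
      "multiSelect": false,
      "options": [
        { "label": "Draw-call batching", "description": "Biggest frame-time win; touches rendering code only" },
        { "label": "Allocation hotspots", "description": "Fixes GC spikes during large fights" },
        { "label": "Both, sequentially", "description": "Longer Phase 2, but a single review pass" }
      ]
    }
  ]
}
```

The full analysis still goes in the conversation; the options only carry the concise decision labels.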
Team Composition
- performance-analyst — Profiling, optimization, memory analysis, frame budget
- engine-programmer — Engine-level bottlenecks: rendering pipeline, memory, resource loading (invoke when performance-analyst identifies low-level root causes)
- technical-artist — VFX polish, shader optimization, visual quality
- sound-designer — Audio polish, mixing, ambient layers, feedback sounds
- tools-programmer — Content pipeline tool verification, editor tool stability, automation fixes (invoke when content authoring tools are involved in the polished area)
- qa-tester — Edge case testing, regression testing, soak testing
How to Delegate
Use the Task tool to spawn each team member as a subagent:
- `subagent_type: performance-analyst` — Profiling, optimization, memory analysis
- `subagent_type: engine-programmer` — Engine-level fixes for rendering, memory, resource loading
- `subagent_type: technical-artist` — VFX polish, shader optimization, visual quality
- `subagent_type: sound-designer` — Audio polish, mixing, ambient layers
- `subagent_type: tools-programmer` — Content pipeline and editor tool verification
- `subagent_type: qa-tester` — Edge case testing, regression testing, soak testing
Always provide full context in each agent's prompt (target feature/area, performance budgets, known issues). Launch independent agents in parallel where the pipeline allows it (e.g., Phases 3 and 4 can run simultaneously).
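A hedged sketch of a Phase 1 spawn, assuming the Task tool's usual `subagent_type`/`description`/`prompt` fields. The feature name, frame budget, and known issue below are placeholder context, not part of this skill:

```json
{
  "subagent_type": "performance-analyst",
  "description": "Profile combat area",
  "prompt": "Target: combat feature. Frame budget: 16.6 ms (60 FPS) on min-spec. Known issue: hitch when >20 enemies spawn. Profile with /perf-profile, identify bottlenecks and frame budget violations, measure memory usage and check for leaks, and return a performance report with a prioritized optimization list."
}
```

Packing the budgets and known issues into the prompt up front spares the subagent a discovery pass it cannot do on its own.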
Pipeline
Phase 1: Assessment
Delegate to performance-analyst:
- Profile the target feature/area using `/perf-profile`
- Identify performance bottlenecks and frame budget violations
- Measure memory usage and check for leaks
- Benchmark against target hardware specs
- Output: performance report with prioritized optimization list
Phase 2: Optimization
Delegate to performance-analyst (with relevant programmers as needed):
- Fix performance hotspots identified in Phase 1
- Optimize draw calls, reduce overdraw
- Fix memory leaks and reduce allocation pressure
- Verify optimizations don't change gameplay behavior
- Output: optimized code with before/after metrics
If Phase 1 identified engine-level root causes (rendering pipeline, resource loading, memory allocator), delegate those fixes to engine-programmer in parallel:
- Optimize hot paths in engine systems
- Fix allocation pressure in core loops
- Output: engine-level fixes with profiler validation
Phase 3: Visual Polish (parallel with Phase 2)
Delegate to technical-artist:
- Review VFX for quality and consistency with art bible
- Optimize particle systems and shader effects
- Add screen shake, camera effects, and visual juice where appropriate
- Ensure effects degrade gracefully on lower settings
- Output: polished visual effects
Phase 4: Audio Polish (parallel with Phase 2)
Delegate to sound-designer:
- Review audio events for completeness (are any actions missing sound feedback?)
- Check audio mix levels — nothing too loud or too quiet relative to the mix
- Add ambient audio layers for atmosphere
- Verify audio plays correctly with spatial positioning
- Output: audio polish list and mixing notes
Phase 5: Hardening
Delegate to qa-tester:
- Test all edge cases: boundary conditions, rapid inputs, unusual sequences
- Soak test: run the feature for extended periods checking for degradation
- Stress test: maximum entities, worst-case scenarios
- Regression test: verify polish changes haven't broken existing functionality
- Test on minimum spec hardware (if available)
- Output: test results with any remaining issues
Phase 6: Sign-off
- Collect results from all team members
- Compare performance metrics against budgets
- Report: READY FOR RELEASE / NEEDS MORE WORK
- List any remaining issues with severity and recommendations
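One possible shape for the Phase 6 report — the feature name, metrics, budgets, and issues below are illustrative, and teams should substitute their own thresholds:

```markdown
## Polish Report: combat
Verdict: NEEDS MORE WORK

| Metric           | Before  | After   | Budget  |
|------------------|---------|---------|---------|
| Frame time (avg) | 21.4 ms | 15.9 ms | 16.6 ms |
| Peak memory      | 1.9 GB  | 1.4 GB  | 1.5 GB  |

Remaining issues:
- [High] Soak test: audio pool exhaustion after ~40 min — fix before release
- [Low] Particle pop-in on lowest quality tier — acceptable, track in backlog
```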
Error Recovery Protocol
If any spawned agent (via Task) returns BLOCKED, errors, or cannot complete:
- Surface immediately: Report "[AgentName]: BLOCKED — [reason]" to the user before continuing to dependent phases
- Assess dependencies: Check whether the blocked agent's output is required by subsequent phases. If yes, do not proceed past that dependency point without user input.
- Offer options via AskUserQuestion with choices:
- Skip this agent and note the gap in the final report
- Retry with narrower scope
- Stop here and resolve the blocker first
- Always produce a partial report — output whatever was completed. Never discard work because one agent blocked.
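For example, if qa-tester cannot complete Phase 5, the surfaced message and the AskUserQuestion choices might read as follows (the wording and blocker are illustrative):

```text
qa-tester: BLOCKED — min-spec test machine unavailable; soak and stress tests not run.
Phase 6 sign-off depends on these results. How should we proceed?
  1. Skip this agent and note the gap in the final report
  2. Retry with narrower scope (soak test on dev hardware only)
  3. Stop here and resolve the blocker first
```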
Common blockers:
- Input file missing (story not found, GDD absent) → redirect to the skill that creates it
- ADR status is Proposed → do not implement; run `/architecture-decision` first
- Scope too large → split into two stories via `/create-stories`
- Conflicting instructions between ADR and story → surface the conflict, do not guess
File Write Protocol
All file writes (performance reports, test results, evidence docs) are delegated to sub-agents spawned via Task. Each sub-agent enforces the "May I write to [path]?" protocol. This orchestrator does not write files directly.
Output
A summary report covering: performance before/after metrics, visual polish changes, audio polish changes, test results, and release readiness assessment.
Next Steps
- If READY FOR RELEASE: run `/release-checklist` for the final pre-release validation.
- If NEEDS MORE WORK: schedule remaining issues in `/sprint-plan update` and re-run `/team-polish` after fixes.
- Run `/gate-check` for a formal phase gate verdict before handing off to release.