Game Dev Teams

team-level

Donchitos/Claude-Code-Game-Studios · updated Apr 16, 2026

$ npx skills add https://github.com/Donchitos/Claude-Code-Game-Studios --skill team-level
summary

### Team Level

  • description: "Orchestrate level design team: level-designer + narrative-director + world-builder + art-director + systems-designer + qa-tester for complete area/level creation."
  • argument-hint: "[level name or area to design]"
  • allowed-tools: Read, Glob, Grep, Write, Edit, Bash, Task, AskUserQuestion, TodoWrite
skill.md

When this skill is invoked:

Decision Points: At each step transition, use AskUserQuestion to present the subagent's proposals to the user as selectable options. Present each agent's full analysis in the conversation, then capture the decision with concise option labels. The user must approve before the workflow moves to the next step.
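The decision-point pattern above can be modeled as a tiny data structure: the full analysis stays in the conversation, while the approval is captured against short option labels. This Python sketch is purely illustrative; `GateDecision` and its fields are hypothetical names, not part of the skill, and the real flow goes through AskUserQuestion.

```python
from dataclasses import dataclass

@dataclass
class GateDecision:
    """Outcome of one AskUserQuestion gate (hypothetical model)."""
    step: str            # e.g. "Step 1: Narrative + Visual Direction"
    options: list[str]   # concise labels shown to the user
    chosen: str          # label the user approved

    def approved(self) -> bool:
        # A gate only passes if the user picked one of the offered labels.
        return self.chosen in self.options

# Example: the user approves the narrative brief before Step 2 begins.
gate = GateDecision(
    step="Step 1",
    options=["Approve narrative brief", "Revise emotional arc"],
    chosen="Approve narrative brief",
)
```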

  1. Read the argument for the target level or area (e.g., tutorial, forest dungeon, hub town, final boss arena).

  2. Gather context:

    • Read the game concept at design/gdd/game-concept.md
    • Read game pillars at design/gdd/game-pillars.md
    • Read existing level docs in design/levels/
    • Read relevant narrative docs in design/narrative/
    • Read world-building docs for the area's region/faction
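The context-gathering step is a read-if-present pass over the paths listed above: missing docs are skipped, never invented. An illustrative Python sketch only (`gather_context` is a hypothetical helper; the skill performs these reads with the Read and Glob tools):

```python
from pathlib import Path

# Context sources named in the skill.
CONTEXT_PATHS = [
    "design/gdd/game-concept.md",
    "design/gdd/game-pillars.md",
]
CONTEXT_DIRS = ["design/levels", "design/narrative"]

def gather_context(root: str = ".") -> dict[str, str]:
    """Collect whatever design docs exist under root, keyed by relative path."""
    base = Path(root)
    docs: dict[str, str] = {}
    for rel in CONTEXT_PATHS:
        p = base / rel
        if p.is_file():
            docs[rel] = p.read_text()
    for d in CONTEXT_DIRS:
        dir_path = base / d
        if not dir_path.is_dir():
            continue  # no docs for this area yet; skip, don't fabricate
        for p in sorted(dir_path.glob("*.md")):
            docs[str(p.relative_to(base))] = p.read_text()
    return docs
```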

How to Delegate

Use the Task tool to spawn each team member as a subagent:

  • subagent_type: narrative-director — Narrative purpose, characters, emotional arc
  • subagent_type: world-builder — Lore context, environmental storytelling, world rules
  • subagent_type: level-designer — Spatial layout, pacing, encounters, navigation
  • subagent_type: systems-designer — Enemy compositions, loot tables, difficulty balance
  • subagent_type: art-director — Visual theme, color palette, lighting, asset requirements
  • subagent_type: accessibility-specialist — Navigation clarity, colorblind safety, cognitive load
  • subagent_type: qa-tester — Test cases, boundary testing, playtest checklist

Always provide full context in each agent's prompt (game concept, pillars, existing level docs, narrative docs).

  3. Orchestrate the level design team in sequence:

Step 1: Narrative + Visual Direction (narrative-director + world-builder + art-director, parallel)

Spawn all three agents simultaneously — issue all three Task calls before waiting for any result.

Spawn the narrative-director agent to:

  • Define the narrative purpose of this area (what story beats happen here?)
  • Identify key characters, dialogue triggers, and lore elements
  • Specify emotional arc (how should the player feel entering, during, leaving?)

Spawn the world-builder agent to:

  • Provide lore context for the area (history, faction presence, ecology)
  • Define environmental storytelling opportunities
  • Specify any world rules that affect gameplay in this area

Spawn the art-director agent to:

  • Establish visual theme targets for this area — these are INPUTS to layout, not outputs of it
  • Define the color temperature and lighting mood for this area (how does it differ from adjacent areas?)
  • Specify shape language direction (angular fortress? organic cave? decayed grandeur?)
  • Name the primary visual landmarks that will orient the player
  • Read design/art/art-bible.md if it exists — anchor all direction in the established art bible

The art-director's visual targets from Step 1 must be passed to the level-designer in Step 2 as explicit constraints. Layout decisions happen within the visual direction, not before it.
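The parallel spawn in this step can be modeled with asyncio: issue all three calls before awaiting any result. This is a hedged sketch, not a real API; `spawn` is a stand-in for the Task tool, and the result shape is an assumption.

```python
import asyncio

async def spawn(subagent_type: str, prompt: str) -> dict:
    """Stand-in for one Task call (hypothetical; real calls use the Task tool)."""
    await asyncio.sleep(0)  # placeholder for the agent doing its work
    return {"agent": subagent_type, "result": f"{subagent_type} brief"}

async def run_step1(context: str) -> list[dict]:
    # Issue all three Task calls first, then await them together.
    tasks = [
        spawn("narrative-director", context),
        spawn("world-builder", context),
        spawn("art-director", context),
    ]
    return await asyncio.gather(*tasks)

results = asyncio.run(run_step1("game concept + pillars + existing level docs"))
```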

Gate: Use AskUserQuestion to present all three Step 1 outputs (narrative brief, lore foundation, visual direction targets) and confirm before proceeding to Step 2.

Step 2: Layout and Encounter Design (level-designer)

Spawn the level-designer agent with the full Step 1 output as context:

  • Narrative brief (from narrative-director)
  • Lore foundation (from world-builder)
  • Visual direction targets (from art-director) — layout must work within these targets, not contradict them

The level-designer should:

  • Design the spatial layout (critical path, optional paths, secrets) — ensuring primary routes align with the visual landmark targets from Step 1
  • Define pacing curve (tension peaks, rest areas, exploration zones) — coordinated with the emotional arc from narrative-director
  • Place encounters with difficulty progression
  • Design environmental puzzles or navigation challenges
  • Define points of interest and landmarks for wayfinding — these must match the visual landmarks the art-director specified
  • Specify entry/exit points and connections to adjacent areas
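One way to picture the Step 1 → Step 2 hand-off is a layout record checked against the art-director's landmark targets. The field names here are assumptions for illustration, not a schema the skill defines.

```python
from dataclasses import dataclass, field

@dataclass
class LevelLayout:
    """Illustrative Step 2 output; field names are assumptions."""
    critical_path: list[str]
    landmarks: list[str]          # wayfinding points of interest
    adjacent_areas: list[str]     # connections to other levels
    encounters: list[str] = field(default_factory=list)

def landmarks_match_targets(layout: LevelLayout, art_targets: list[str]) -> list[str]:
    """Return the art-director's landmark targets the layout failed to place."""
    placed = set(layout.landmarks)
    return [t for t in art_targets if t not in placed]
```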

Adjacent area dependency check: After the layout is produced, check design/levels/ for each adjacent area referenced by the level-designer. If any referenced area's .md file does not exist, surface the gap:

"Level references [area-name] as an adjacent area but design/levels/[area-name].md does not exist."

Use AskUserQuestion with options:

  • (a) Proceed with a placeholder reference — mark the connection as UNRESOLVED in the level doc and list it in the open cross-level dependencies section of the summary report
  • (b) Pause and run /team-level [area-name] first to establish that area

Do NOT invent content for the missing adjacent area.
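The dependency check above reduces to a file-existence scan. A minimal sketch, assuming level docs live at design/levels/<name>.md as the skill states; areas returned here get UNRESOLVED placeholders, never invented content.

```python
from pathlib import Path

def unresolved_adjacent_areas(adjacent: list[str], levels_dir: str = "design/levels") -> list[str]:
    """Return adjacent area names whose design doc does not exist yet."""
    base = Path(levels_dir)
    return [name for name in adjacent if not (base / f"{name}.md").is_file()]
```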

Gate: Use AskUserQuestion to present Step 2 layout (including any unresolved adjacent area dependencies) and confirm before proceeding to Step 3.

Step 3: Systems Integration (systems-designer)

Spawn the systems-designer agent to:

  • Specify enemy compositions and encounter formulas
  • Define loot tables and reward placement
  • Balance difficulty relative to expected player level/gear
  • Design any area-specific mechanics or environmental hazards
  • Specify resource distribution (health pickups, save points, shops)
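A quick sanity check on the proposed difficulty progression is a spike scan over critical-path encounter ratings. The rating scale and `max_step` threshold are assumptions for illustration; the real balance pass is the systems-designer's judgment.

```python
def difficulty_spikes(encounter_ratings: list[int], max_step: int = 2) -> list[int]:
    """Indices where difficulty jumps more than max_step over the previous encounter.

    encounter_ratings: per-encounter difficulty along the critical path
    (illustrative scale, e.g. 1-10).
    """
    return [
        i for i in range(1, len(encounter_ratings))
        if encounter_ratings[i] - encounter_ratings[i - 1] > max_step
    ]
```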

Gate: Use AskUserQuestion to present Step 3 outputs and confirm before proceeding to Step 4.

Step 4: Production Concepts + Accessibility (art-director + accessibility-specialist, parallel)

Note: The art-director's directional pass (visual theme, color targets, mood) happened in Step 1. This pass is location-specific production concepts — given the finalized layout, what does each specific space look like?

Spawn the art-director agent with the finalized layout from Step 2:

  • Produce location-specific concept specs for key spaces (entrance, key encounter zones, landmarks, exits)
  • Specify which art assets are unique to this area vs. shared from the global pool
  • Define sight-line and lighting setups per key space (these are now layout-informed, not directional)
  • Specify VFX needs that are specific to this area's layout (weather volumes, particles, atmospheric effects)
  • Flag any locations where the layout creates visual direction conflicts with the Step 1 targets — surface these as production risks

Spawn the accessibility-specialist agent in parallel to:

  • Review the level layout for navigation clarity (can players orient themselves without relying on color alone?)
  • Check that critical path signposting uses shape/icon/sound cues in addition to color
  • Review any puzzle mechanics for cognitive load — flag anything that requires holding more than 3 simultaneous states
  • Check that key gameplay areas have sufficient contrast for colorblind players
  • Output: accessibility concerns list with severity (BLOCKING / RECOMMENDED / NICE TO HAVE)

Wait for both agents to return before proceeding.

Gate: Use AskUserQuestion to present both Step 4 results. If the accessibility-specialist returned any BLOCKING concerns, highlight them prominently and offer:

  • (a) Return to level-designer and art-director to redesign the flagged elements before Step 5
  • (b) Document as a known accessibility gap and proceed to Step 5 with the concern explicitly logged in the final report

Do NOT proceed to Step 5 without the user acknowledging any BLOCKING accessibility concerns.
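The BLOCKING gate above boils down to a single predicate over the severity-tagged concern list. A minimal sketch (the function and tuple shape are hypothetical; severities match the skill's own labels):

```python
BLOCKING, RECOMMENDED, NICE_TO_HAVE = "BLOCKING", "RECOMMENDED", "NICE TO HAVE"

def step4_gate(concerns: list[tuple[str, str]]) -> bool:
    """True if Step 5 may proceed without explicit user acknowledgement.

    concerns: (severity, description) pairs from the accessibility-specialist.
    Any BLOCKING entry forces the redesign-or-acknowledge choice.
    """
    return not any(severity == BLOCKING for severity, _ in concerns)
```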

Step 5: QA Planning (qa-tester)

Spawn the qa-tester agent to:

  • Write test cases for the critical path
  • Identify boundary and edge cases (sequence breaks, softlocks)
  • Create a playtest checklist for the area
  • Define acceptance criteria for level completion
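The qa-tester's output can be pictured as a checklist ordered so critical-path cases run first. Illustrative structure only; the skill does not prescribe these field names.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One qa-tester entry (illustrative fields)."""
    path: str          # "critical" | "optional" | "boundary"
    description: str
    acceptance: str    # what must hold for the case to pass

def playtest_checklist(cases: list[TestCase]) -> list[str]:
    # Critical-path cases come first so a playtest run covers completion early.
    ordered = sorted(cases, key=lambda c: c.path != "critical")
    return [f"[{c.path}] {c.description} — pass if: {c.acceptance}" for c in ordered]
```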
  4. Compile the level design document combining all team outputs into the level design template format.

  5. Save to design/levels/[level-name].md.

  6. Output a summary with: area overview, encounter count, estimated asset list, narrative beats, any cross-team dependencies or open questions, open cross-level dependencies (adjacent areas referenced but not yet designed, each marked UNRESOLVED), and accessibility concerns with their resolution status.
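The summary compilation can be sketched as a simple assembler over the items listed above. Field names are illustrative assumptions, not the skill's template.

```python
def summary_report(area: str, encounters: int, unresolved: list[str],
                   accessibility: list[tuple[str, str]]) -> str:
    """Assemble the final summary text (hypothetical shape)."""
    lines = [f"Area: {area}", f"Encounters: {encounters}"]
    lines += [f"UNRESOLVED adjacent area: {name}" for name in unresolved]
    lines += [f"Accessibility [{sev}]: {desc}" for sev, desc in accessibility]
    return "\n".join(lines)
```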

File Write Protocol

All file writes (level design docs, narrative docs, test checklists) are delegated to sub-agents spawned via Task. Each sub-agent enforces the "May I write to [path]?" protocol. This orchestrator does not write files directly.

Verdict: COMPLETE — level design document produced and all team outputs compiled.
Verdict: BLOCKED — one or more agents blocked; partial report produced with unresolved items listed.

Next Steps

  • Run /design-review design/levels/[level-name].md to validate the completed level design doc.
  • Run /dev-story to implement level content once the design is approved.
  • Run /qa-plan to generate a QA test plan for this level.

Error Recovery Protocol

If any spawned agent (via Task) returns BLOCKED, errors, or cannot complete:

  1. Surface immediately: Report "[AgentName]: BLOCKED — [reason]" to the user before continuing to dependent phases
  2. Assess dependencies: Check whether the blocked agent's output is required by subsequent phases. If yes, do not proceed past that dependency point without user input.
  3. Offer options via AskUserQuestion with choices:
    • Skip this agent and note the gap in the final report
    • Retry with narrower scope
    • Stop here and resolve the blocker first
  4. Always produce a partial report — output whatever was completed. Never discard work because one agent blocked.

Common blockers:

  • Input file missing (story not found, GDD absent) → redirect to the skill that creates it
  • ADR status is Proposed → do not implement; run /architecture-decision first
  • Scope too large → split into two stories via /create-stories
  • Conflicting instructions between ADR and story → surface the conflict, do not guess
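The recovery protocol's first and last steps can be modeled as a function that surfaces the block and preserves completed work; the option choice itself goes through AskUserQuestion and is not modeled here. A hedged sketch with hypothetical names:

```python
BLOCK_OPTIONS = [
    "Skip this agent and note the gap in the final report",
    "Retry with narrower scope",
    "Stop here and resolve the blocker first",
]

def recover(agent: str, reason: str, completed: list[str]) -> dict:
    """Surface a BLOCKED agent and keep everything finished so far.

    Returns the partial-report payload (illustrative shape); the user's
    choice among BLOCK_OPTIONS is gathered via AskUserQuestion.
    """
    return {
        "message": f"{agent}: BLOCKED — {reason}",
        "options": BLOCK_OPTIONS,
        "partial_report": completed,  # never discard work because one agent blocked
    }
```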
General Reviews

Ratings

4.4 · 54 reviews
  • Diego Gill · Dec 28, 2024

    team-level has been reliable in day-to-day use. Documentation quality is above average for community skills.

  • Meera Lopez · Dec 28, 2024

    Registry listing for team-level matched our evaluation — installs cleanly and behaves as described in the markdown.

  • Pratham Ware · Dec 24, 2024

    Keeps context tight: team-level is the kind of skill you can hand to a new teammate without a long onboarding doc.

  • Sofia Brown · Dec 20, 2024

    team-level fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.

  • Diego Rao · Nov 19, 2024

    team-level fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.

  • Naina Menon · Nov 19, 2024

    Useful defaults in team-level — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.

  • Aisha Chen · Nov 11, 2024

    team-level has been reliable in day-to-day use. Documentation quality is above average for community skills.

  • Arjun Verma · Oct 10, 2024

    We added team-level from the explainx registry; install was straightforward and the SKILL.md answered most questions upfront.

  • Naina Iyer · Oct 10, 2024

    I recommend team-level for anyone iterating fast on agent tooling; clear intent and a small, reviewable surface area.

  • Aarav Farah · Oct 2, 2024

    Solid pick for teams standardizing on skills: team-level is focused, and the summary matches what you get after install.

showing 1-10 of 54