workshop-facilitation
deanpeters/product-manager-skills · updated Apr 8, 2026
Structured one-step-at-a-time facilitation pattern for interactive workshops and guided sessions.
- Supports three entry modes: Guided (single question per turn), Context Dump (paste known details and skip redundancies), and Best Guess (infer missing context with labeled assumptions)
- Provides real-time progress visibility with labels like Context Qx/8 and Scoring Qx/5, plus enumerated recommendations only at decision points to avoid interaction drag
- Handles flexible multi-select responses such as 1,3, 1 and 3, or custom text
Purpose
Provide the canonical facilitation pattern for interactive skills: one step at a time, with clear progress, adaptive recommendations at decision points, and predictable interruption handling.
Key Concepts
- One-step-at-a-time: Ask a single targeted question per turn.
- Session heads-up + entry mode: Start by setting expectations and offering Guided, Context dump, or Best guess mode.
- Progress visibility: Show user-facing progress labels like Context Qx/8 and Scoring Qx/5.
- Decision-point recommendations: Use enumerated options only when a choice is needed, not after every answer.
- Quick-select response options: For regular context/scoring questions, provide concise numbered answer options plus Other (specify) when useful.
- Flexible selection parsing: Accept #1, 1, 1 and 3, 1,3, or custom text, then synthesize multi-select choices.
- Context-aware progression: Build on previous answers and avoid re-asking resolved questions.
- Interruption-safe flow: Answer meta questions directly (for example, "how many left?"), restate status, then resume.
- Fast path: If the user requests a single-shot output, skip multi-turn facilitation and deliver a condensed result.
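The "flexible selection parsing" concept above can be sketched as a small parser. This is a minimal illustration, not part of the skill itself; the function name `parse_selection` and the tuple return shape are assumptions for the sketch.

```python
import re

def parse_selection(raw: str, num_options: int):
    """Parse a quick-select reply into option numbers plus free text.

    Accepts forms like "#1", "1", "1,3", "1 and 3", or custom text.
    Returns (selected_numbers, custom_text).
    """
    text = raw.strip()
    # Pull out every option number mentioned, with or without a leading "#".
    numbers = [int(n) for n in re.findall(r"#?(\d+)", text)]
    selected = sorted({n for n in numbers if 1 <= n <= num_options})
    # Anything left after stripping numbers and connectors is custom text.
    leftover = re.sub(r"#?\d+|,|\band\b", " ", text)
    custom = " ".join(leftover.split()) or None
    return selected, custom
```

With this sketch, "1 and 3" and "1,3" both resolve to the multi-select [1, 3], while a free-form reply such as "focus on onboarding" falls through as custom text the facilitator can synthesize.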
Application
- Start with a brief heads-up on estimated time and number of questions.
- Ask the user to choose an entry mode:
  1. Guided mode (one question at a time)
  2. Context dump (paste known context; skip redundancies)
  3. Best guess mode (infer missing details and label assumptions)
- Run one question per turn and wait for an answer before continuing.
- Keep questions plain-language; include a short example response format when helpful.
- Show progress each turn: Context Qx/8 during context collection, Scoring Qx/5 during assessment/scoring.
- Ask follow-up clarifications only when they materially improve recommendation quality.
- For regular context/scoring questions, offer quick-select numbered response options when practical:
  - Keep options concise and mutually exclusive when possible.
  - Include Other (specify) if likely answers are open-ended.
  - Accept multi-select responses like 1,3 or 1 and 3.
- Provide numbered recommendations only at decision points:
- after context synthesis,
- after maturity/profile synthesis,
- during priority/action-plan selection.
- Accept numeric or custom choices, synthesize multi-select choices, and continue.
- If interrupted by a meta question, answer directly, then restate progress and pending question.
- If the user says stop/pause, halt immediately and wait for explicit resume.
- End with a clear summary, decisions made, and (if best guess mode was used) an Assumptions to Validate list.
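The turn-by-turn flow above implies a small piece of session state: which phase is active, how many questions have been answered, and how to answer meta questions like "how many left?" without losing place. A minimal sketch, assuming two phases sized to match the Context Qx/8 and Scoring Qx/5 labels (the class and method names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class FacilitationSession:
    """One-question-per-turn session state with a visible progress label."""
    phases: dict = field(default_factory=lambda: {"Context": 8, "Scoring": 5})
    phase: str = "Context"
    asked: int = 0  # answered questions in the current phase

    def progress_label(self) -> str:
        # Label shown each turn, e.g. "Context Q1/8".
        return f"{self.phase} Q{self.asked + 1}/{self.phases[self.phase]}"

    def record_answer(self) -> None:
        # Advance one question; roll into the next phase when this one is done.
        self.asked += 1
        if self.asked >= self.phases[self.phase]:
            names = list(self.phases)
            i = names.index(self.phase)
            if i + 1 < len(names):
                self.phase = names[i + 1]
                self.asked = 0

    def remaining(self) -> int:
        # Answer meta questions like "how many left?" then resume unchanged.
        names = list(self.phases)
        i = names.index(self.phase)
        later = sum(self.phases[n] for n in names[i + 1:])
        return (self.phases[self.phase] - self.asked) + later
```

Because `remaining()` is a pure read, a meta question can be answered mid-flow and the pending question re-stated from `progress_label()` without mutating state, which is the interruption-safe behavior described above.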
Examples
Opening: "Quick heads-up: this should take about 7-10 minutes and around 10 questions. How do you want to start?
1. Guided mode
2. Context dump
3. Best guess mode"
User: "2"
Facilitator: "Paste what you already know. I’ll skip answered areas and ask only what’s missing."
Decision point after synthesis:
1. Prioritize Context Design (Recommended)
2. Prioritize Agent Orchestration
3. Prioritize Team-AI Facilitation
User: "1 and 3"
Facilitator: "Great. We’ll run Context Design first, with Team-AI Facilitation in parallel."
Common Pitfalls
- Asking multiple questions in the same turn.
- Offering recommendations after every answer (creates interaction drag).
- Using shorthand labels without plain-language questions.
- Hiding progress, so users don't know how much remains.
- Ignoring the user's chosen option or custom direction.
- Failing to label assumptions when running in best-guess mode.
References
- Use as the source of truth for interactive facilitation behavior.
- Apply alongside workshop skills in skills/*-workshop/SKILL.md and advisor-style interactive skills.