parallel-task-spark
am-will/codex-skills · updated Apr 8, 2026
Orchestrate parallel development tasks with dependency management and test-driven validation.
- Parses markdown plan files to extract task definitions, dependencies, and acceptance criteria, then launches unblocked tasks in parallel waves using Sparky subagents
- Enforces test-driven development (RED phase first) for testable tasks, with fallback to documented non-testable verification (manual, static, or runtime checks)
- Manages task dependencies automatically, blocking tasks until their dependencies are complete
Parallel Task Executor (Sparky)
You are an orchestrator of subagents. Use orchestration mode to parse the plan file and delegate tasks to parallel Sparky subagents according to their dependencies, looping until all tasks are complete. Your role is to launch subagents in the correct order (in waves), verify that they complete their tasks correctly, and ensure the plan docs are updated with logs after each task completes.
Process
Step 1: Parse Request
Extract from user request:
- Plan file: The markdown plan to read
- Task subset (optional): Specific task IDs to run
If no subset provided, run the full plan.
Step 2: Read & Parse Plan
- Find task subsections (e.g., `### T1:` or `### Task 1.1:`)
- For each task, extract:
  - Task ID and name
  - depends_on list (from `- **depends_on**: [...]`)
  - Full content (description, location, acceptance criteria, validation)
- Build the task list
- If a task subset was requested, filter the task list to only those IDs and their required dependencies.
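As a rough illustration, the parsing in Step 2 could be sketched in Python. The heading and depends_on patterns below are assumptions based on the example formats above; a real plan may need looser matching:

```python
import re

# Matches task headings like "### T1: Name" or "### Task 1.1: Name".
TASK_HEADING = re.compile(r"^###\s+(?:Task\s+)?(T?[\d.]+):\s*(.+)$", re.MULTILINE)
# Matches a "- **depends_on**: [T1, T2]" line inside a task body.
DEPENDS_ON = re.compile(r"\*\*depends_on\*\*:\s*\[([^\]]*)\]")

def parse_plan(text):
    """Parse a markdown plan into {task_id: {name, depends_on, content}}."""
    tasks = {}
    headings = list(TASK_HEADING.finditer(text))
    for i, m in enumerate(headings):
        # A task's body runs until the next task heading (or end of file).
        end = headings[i + 1].start() if i + 1 < len(headings) else len(text)
        body = text[m.end():end]
        dep = DEPENDS_ON.search(body)
        deps = [d.strip() for d in dep.group(1).split(",") if d.strip()] if dep else []
        tasks[m.group(1)] = {
            "name": m.group(2).strip(),
            "depends_on": deps,
            "content": body.strip(),
        }
    return tasks
```

Tasks with no depends_on line are treated as having no prerequisites, which makes them eligible for the first wave.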
Step 3: Launch Subagents
For each unblocked task, launch a subagent with:
- agent_type: `sparky` (Sparky role)
- description: "Implement task [ID]: [name]"
- prompt: Use the template below
Launch all unblocked tasks in parallel, using only Sparky-role subagents. A task is unblocked when every ID in its depends_on list is complete.
Every launch must set agent_type: sparky. Any other role is invalid for this skill.
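The unblocked check can be sketched as a few lines of Python (a minimal illustration, assuming each task record carries a depends_on list as described above):

```python
def unblocked(tasks, done):
    """Return IDs of tasks whose dependencies are all complete.

    tasks: {task_id: {"depends_on": [ids, ...], ...}}
    done:  set of completed task IDs
    """
    return [
        tid
        for tid, task in tasks.items()
        if tid not in done and all(dep in done for dep in task["depends_on"])
    ]
```

Everything this returns for the current `done` set forms the next parallel wave.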
Task Prompt Template
You are implementing a specific task from a development plan.
## Context
- Plan: [filename]
- Goals: [relevant overview from plan]
- Dependencies: [prerequisites for this task]
- Related tasks: [tasks that depend on or are depended on by this task]
- Constraints: [risks from plan]
## Your Task
**Task [ID]: [Name]**
Location: [File paths]
Description: [Full description]
Acceptance Criteria:
[List from plan]
Validation:
[Tests or verification from plan]
## Instructions
- Use the `sparky` agent role for this task; do not use any other role.
1. Read the working plan and fully understand this task before coding.
2. Read all relevant files first, then do targeted codebase research (related modules, tests, call sites, and dependencies) to confirm the approach.
3. Default to TDD RED phase first using a `tdd_test_writer` subagent:
- Pass task context and acceptance criteria.
- Require tests-only edits.
- Require command output proving the new/updated tests fail for the expected behavior gap.
- If the task is not a good TDD candidate, explicitly record `reason_not_testable` and define alternative verification evidence (for example `manual_check`, `static_check`, or `runtime_check`) with an exact command or concrete validation steps.
4. Review RED-phase tests (or approved non-testable verification plan) as the implementation contract. Do not weaken or remove tests unless requirements changed.
5. Implement production changes for all acceptance criteria.
6. Run validation:
- For testable tasks, run the exact new/updated test command(s) until GREEN (passing).
- For non-testable tasks, run the agreed alternative verification and capture evidence.
- Run any additional validation steps from the plan if feasible.
7. Commit your work.
- Stage only the files for this task, since other agents are working in parallel.
- NEVER PUSH. ONLY COMMIT.
8. After the commit, update the `*-plan.md` task entry with:
- Completion status
- Concise work log
- Files modified/created
- Errors or gotchas encountered
9. Return summary of:
- Files modified/created
- Changes made
- How criteria are satisfied
- Verification evidence: RED -> GREEN or documented non-testable alternative
- Validation performed or deferred
## Important
- Be careful with paths
- Stop and describe blockers if encountered
- Focus on this specific task
Consider a task complete only after RED -> GREEN test evidence (or explicit non-testable verification evidence) is provided, the work is committed, and the plan is updated.
Step 4: Check and Validate
After subagents complete their work:
- Inspect their outputs for correctness and completeness.
- Validate the results against the expected outcomes.
- If the task is truly completed correctly, ensure the task commit exists and then ensure the task is marked complete with logs.
- If a task was not successful, have the agent retry or escalate the issue.
- Ensure each wave of work is committed locally before moving on to the next wave of tasks.
Step 5: Repeat
- Review the plan again to see which tasks have become unblocked.
- Continue launching unblocked tasks in parallel until plan is done.
- Repeat the process until all tasks are complete, validated (RED -> GREEN or documented non-testable verification), committed, and logged without errors.
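Steps 3 through 5 together form a wave loop. A minimal sketch is below; the `launch` callback is a hypothetical stand-in for dispatching Sparky subagents in parallel and returning the IDs that completed successfully:

```python
def run_in_waves(tasks, launch):
    """Run tasks in dependency order, one parallel wave at a time.

    tasks:  {task_id: {"depends_on": [ids, ...], ...}}
    launch: callable taking a list of task IDs, dispatching them in
            parallel, and returning the IDs that completed successfully.
    """
    done = set()
    while len(done) < len(tasks):
        wave = [
            tid
            for tid, task in tasks.items()
            if tid not in done and all(d in done for d in task["depends_on"])
        ]
        if not wave:
            # No runnable tasks left: a dependency cycle or a failed task
            # is blocking progress, so surface it instead of spinning.
            raise RuntimeError(f"No unblocked tasks; remaining: {set(tasks) - done}")
        done.update(launch(wave))
    return done
```

If a task fails, a real orchestrator would retry or escalate (per Step 4) before the next wave rather than raising immediately.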
Error Handling
- Task subset not found: List available task IDs
- Parse failure: Show what was tried, ask for clarification
Example Usage
'Implement the plan using parallel task skill'
/parallel-task-spark plan.md
/parallel-task-spark ./plans/auth-plan.md T1 T2 T4
/parallel-task-spark user-profile-plan.md --tasks T3 T7
Execution Summary Template
# Execution Summary
## Tasks Assigned: [N]
### Completed
- Task [ID]: [Name] - [Brief summary]
### Issues
- Task [ID]: [Name]
- Issue: [What went wrong]
- Resolution: [How resolved or what's needed]
### Blocked
- Task [ID]: [Name]
- Blocker: [What's preventing completion]
- Next Steps: [What needs to happen]
## Overall Status
[Completion summary]
## Files Modified
[List of changed files]
## Next Steps
[Recommendations]