dogfood

callstackincubator/agent-device · updated Apr 8, 2026

$ npx skills add https://github.com/callstackincubator/agent-device --skill dogfood
Summary

Systematically explore a mobile app, find issues, and produce a report with full reproduction evidence for every finding.

skill.md

Dogfood (agent-device)

Systematically explore a mobile app, find issues, and produce a report with full reproduction evidence for every finding.

Setup

Only the Target app is required. Everything else has sensible defaults.

Parameter         Default                                                  Example override
Target app        (required)                                               Settings, com.example.app, deep link URL
Platform          Inferred from user context; otherwise ask (ios/android)  --platform ios
Session name      Slugified app/platform (for example settings-ios)        --session my-session
Output directory  ./dogfood-output/                                        Output directory: /tmp/mobile-qa
Scope             Full app                                                 Focus on onboarding and profile
Authentication    None                                                     Sign in to user@example.com

If the user gives enough context to start, begin immediately with defaults. Ask follow-up questions only when a required detail is missing (for example, platform or credentials).
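The default session name is a slug of app and platform; a small helper like the following can derive it. This is an illustrative sketch, not part of agent-device:

```shell
# Hypothetical helper: build the default session name by slugifying
# "<app>-<platform>" (lowercase; runs of non-alphanumerics become hyphens).
slugify_session() {
  printf '%s-%s' "$1" "$2" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}

slugify_session "Settings" "ios"             # settings-ios
slugify_session "com.example.app" "android"  # com-example-app-android
```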

Prefer calling the agent-device binary directly when it is available.

Workflow

1. Initialize    Set up session, output dirs, report file
2. Launch/Auth   Open app and sign in if needed
3. Orient        Capture initial snapshot and map navigation
4. Explore       Systematically test flows and states
5. Document      Record reproducible evidence per issue
6. Wrap up       Reconcile summary, close session

1. Initialize

mkdir -p {OUTPUT_DIR}/screenshots {OUTPUT_DIR}/videos
cp {SKILL_DIR}/templates/dogfood-report-template.md {OUTPUT_DIR}/report.md
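With defaults filled in, step 1 looks like the sketch below. Variable names mirror the {OUTPUT_DIR}/{SKILL_DIR} placeholders; the empty-file fallback is an assumption, not documented skill behavior:

```shell
# Step 1 with defaults applied.
OUTPUT_DIR="${OUTPUT_DIR:-./dogfood-output}"
mkdir -p "$OUTPUT_DIR/screenshots" "$OUTPUT_DIR/videos"

# Seed the report from the skill template when it exists; otherwise
# start from an empty file so later appends still work.
template="${SKILL_DIR:-.}/templates/dogfood-report-template.md"
if [ -f "$template" ]; then
  cp "$template" "$OUTPUT_DIR/report.md"
else
  : > "$OUTPUT_DIR/report.md"
fi
```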

2. Launch/Auth

Start a named session and launch the target app:

agent-device --session {SESSION} open {TARGET_APP} --platform {PLATFORM}
agent-device --session {SESSION} snapshot -i

If login is required:

agent-device --session {SESSION} snapshot -i
agent-device --session {SESSION} fill @e1 "{EMAIL}"
agent-device --session {SESSION} fill @e2 "{PASSWORD}"
agent-device --session {SESSION} press @e3
agent-device --session {SESSION} wait 1000
agent-device --session {SESSION} snapshot -i

For OTP/email codes: ask the user, wait for input, then continue.

3. Orient

Capture initial evidence and navigation anchors:

agent-device --session {SESSION} screenshot {OUTPUT_DIR}/screenshots/initial.png
agent-device --session {SESSION} snapshot -i

Map top-level navigation, tabs, and key workflows before deep testing.

4. Explore

Read references/issue-taxonomy.md for severity/category calibration.

Strategy:

  • Move through each major app area (tabs, drawers, settings pages).
  • Test core journeys end-to-end (create, edit, delete, submit, recover).
  • Validate edge states (empty/error/loading/offline/permissions denied).
  • Re-run snapshot -i after UI transitions to avoid stale refs.
  • Periodically run logs path and inspect the app log when behavior looks suspicious.

Useful commands per screen:

agent-device --session {SESSION} snapshot -i
agent-device --session {SESSION} screenshot {OUTPUT_DIR}/screenshots/{screen-name}.png
agent-device --session {SESSION} appstate
agent-device --session {SESSION} logs path

5. Document Issues (Repro-First)

Explore and document in one pass. When you find an issue, stop and fully capture evidence before continuing.

Interactive/behavioral issues

Use video + step screenshots:

  1. Start recording:
agent-device --session {SESSION} record start {OUTPUT_DIR}/videos/issue-{NNN}-repro.mp4
  2. Reproduce with visible pacing. Capture each step:
agent-device --session {SESSION} screenshot {OUTPUT_DIR}/screenshots/issue-{NNN}-step-1.png
sleep 1
# perform action
sleep 1
agent-device --session {SESSION} screenshot {OUTPUT_DIR}/screenshots/issue-{NNN}-step-2.png
  3. Capture the final broken state:
sleep 2
agent-device --session {SESSION} screenshot {OUTPUT_DIR}/screenshots/issue-{NNN}-result.png
  4. Stop recording:
agent-device --session {SESSION} record stop
  5. Append the issue to the report immediately, with numbered steps and screenshot references.

Static/on-load issues

A single screenshot is sufficient; no video is required:

agent-device --session {SESSION} screenshot {OUTPUT_DIR}/screenshots/issue-{NNN}.png

Set Repro Video to N/A in the report.
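The file names above use a zero-padded {NNN}; a tiny helper (illustrative, not part of agent-device) keeps the numbering consistent across screenshots and videos:

```shell
# Hypothetical helper: format issue number N as the zero-padded
# issue-{NNN} prefix used in evidence file names.
issue_slug() {
  printf 'issue-%03d' "$1"
}

issue_slug 3    # issue-003
issue_slug 42   # issue-042
```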

6. Wrap Up

Target 5-10 well-evidenced issues, then finish:

  1. Reconcile summary severity counts in report.md.
  2. Close session:
agent-device --session {SESSION} close
  3. Report total issues, severity breakdown, and highest-risk findings.
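Reconciling the summary counts in step 1 can be scripted. The sketch below assumes each issue in report.md carries a line like "Severity: High"; that marker format is an assumption about the template, so adjust it to match:

```shell
# Hypothetical reconciliation helper: print per-severity issue counts
# from a report, assuming one "Severity: <level>" line per issue.
severity_counts() {
  report="$1"
  for sev in Critical High Medium Low; do
    # grep -c prints 0 (and exits nonzero) when nothing matches.
    count=$(grep -c "^Severity: $sev$" "$report" || true)
    printf '%s: %s\n' "$sev" "$count"
  done
}

# Usage: severity_counts "{OUTPUT_DIR}/report.md"
```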

Guidance

  • Repro quality matters more than issue count.
  • Use refs (@eN) for fast exploration; use selectors when deterministic replay assertions are needed.
  • Re-snapshot after any mutation (navigation, modal, list update, form submit).
  • Use fill for clear-then-type semantics; use type for incremental typing behavior checks.
  • Keep logs optional and targeted: enable/read app logs only when useful for diagnosis.
  • Never read source code of the app under test; findings must come from observed runtime behavior.
  • Write each issue immediately to avoid losing evidence.
  • Never delete screenshots/videos/report artifacts during a session.

References

Reference                     When to read
references/issue-taxonomy.md  Start of session; severity/categories/checklist

Templates

Template                              Purpose
templates/dogfood-report-template.md  Copy into the output directory as the report file

Ratings

4.8 · 25 reviews
  • Camila Liu · Dec 24, 2024

    I recommend dogfood for anyone iterating fast on agent tooling; clear intent and a small, reviewable surface area.

  • Naina Diallo · Dec 4, 2024

    Solid pick for teams standardizing on skills: dogfood is focused, and the summary matches what you get after install.

  • Luis Yang · Nov 15, 2024

    Useful defaults in dogfood — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.

  • Dhruvi Jain · Oct 22, 2024

    dogfood is among the better-maintained entries we tried; worth keeping pinned for repeat workflows.

  • Luis Martin · Oct 6, 2024

    Registry listing for dogfood matched our evaluation — installs cleanly and behaves as described in the markdown.

  • Oshnikdeep · Sep 25, 2024

    dogfood fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.

  • Neel Zhang · Sep 21, 2024

    We added dogfood from the explainx registry; install was straightforward and the SKILL.md answered most questions upfront.

  • Ganesh Mohane · Aug 16, 2024

    dogfood has been reliable in day-to-day use. Documentation quality is above average for community skills.

  • Neel Rahman · Aug 12, 2024

    dogfood reduced setup friction for our internal harness; good balance of opinion and flexibility.
