generate

alirezarezvani/claude-skills · updated Apr 8, 2026

$ npx skills add https://github.com/alirezarezvani/claude-skills --skill generate
summary

Generate production-ready Playwright tests from a user story, URL, component name, or feature description.

skill.md

Generate Playwright Tests

Generate production-ready Playwright tests from a user story, URL, component name, or feature description.

Input

$ARGUMENTS contains what to test. Examples:

  • "user can log in with email and password"
  • "the checkout flow"
  • "src/components/UserProfile.tsx"
  • "the search page with filters"

Steps

1. Understand the Target

Parse $ARGUMENTS to determine:

  • User story: Extract the behavior to verify
  • Component path: Read the component source code
  • Page/URL: Identify the route and its elements
  • Feature name: Map to relevant app areas
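The classification above can be sketched as a small heuristic. Note that `classifyTarget` is a hypothetical helper for illustration — the skill itself relies on the model's judgment rather than fixed rules:

```typescript
// Hypothetical heuristic for classifying $ARGUMENTS. The real skill uses
// the model's judgment; this only illustrates the decision boundaries.
type TargetKind = 'component' | 'url' | 'user-story' | 'feature';

function classifyTarget(args: string): TargetKind {
  const trimmed = args.trim();
  // A path ending in a source-file extension points at component code.
  if (/^[\w./-]+\.(tsx?|jsx?|vue|svelte)$/.test(trimmed)) return 'component';
  // An absolute URL or a leading-slash route identifies a page.
  if (/^(https?:\/\/|\/)/.test(trimmed)) return 'url';
  // "user can ..." phrasing reads as a behavior to verify.
  if (/\b(user|users|admin) (can|should|must)\b/i.test(trimmed)) return 'user-story';
  // Anything else is treated as a feature name to map to app areas.
  return 'feature';
}
```

For example, `classifyTarget('src/components/UserProfile.tsx')` returns `'component'`, while `classifyTarget('the checkout flow')` falls through to `'feature'`.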

2. Explore the Codebase

Use the Explore subagent to gather context:

  • Read playwright.config.ts for testDir, baseURL, projects
  • Check existing tests in testDir for patterns, fixtures, and conventions
  • If a component path is given, read the component to understand its props, states, and interactions
  • Check for existing page objects in pages/
  • Check for existing fixtures in fixtures/
  • Check for auth setup (auth.setup.ts or storageState config)
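The config fields this step extracts typically look like the following. All values here are placeholders standing in for whatever the target repo actually defines:

```typescript
// Sketch of a typical playwright.config.ts. testDir, baseURL, projects,
// and storageState are the fields this step reads; values are examples.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests', // where generated specs belong
  use: {
    baseURL: 'http://localhost:3000', // enables page.goto('/')
    storageState: 'playwright/.auth/user.json', // shared auth state, if any
  },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
  ],
});
```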

3. Select Templates

Check templates/ in this plugin for matching patterns:

If testing...      Load template from
Login/auth flow    templates/auth/login.md
CRUD operations    templates/crud/
Checkout/payment   templates/checkout/
Search/filter UI   templates/search/
Form submission    templates/forms/
Dashboard/data     templates/dashboard/
Settings page      templates/settings/
Onboarding flow    templates/onboarding/
API endpoints      templates/api/
Accessibility      templates/accessibility/

Adapt the template to the specific app — replace {{placeholders}} with actual selectors, URLs, and data.
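Placeholder substitution can be as simple as a regex replace. `fillTemplate` below is a hypothetical helper, not part of the plugin, shown only to make the adaptation step concrete:

```typescript
// Hypothetical helper: replace {{placeholder}} tokens in a template
// with app-specific values gathered during codebase exploration.
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? values[key] : match, // leave unknown tokens untouched
  );
}
```

For example, `fillTemplate("await page.goto('{{loginPath}}');", { loginPath: '/login' })` yields `"await page.goto('/login');"`, and tokens with no matching value pass through unchanged so the gap stays visible.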

4. Generate the Test

Follow these rules:

Structure:

import { test, expect } from '@playwright/test';
// Import custom fixtures if the project uses them

test.describe('Feature Name', () => {
  // Group related behaviors

  test('should <expected behavior>', async ({ page }) => {
    // Arrange: navigate, set up state
    // Act: perform user action
    // Assert: verify outcome
  });
});

Locator priority (use the first that works):

  1. getByRole() — buttons, links, headings, form elements
  2. getByLabel() — form fields with labels
  3. getByText() — non-interactive text content
  4. getByPlaceholder() — inputs with placeholder text
  5. getByTestId() — when semantic options aren't available

Assertions — always web-first:

// GOOD — auto-retries
await expect(page.getByRole('heading')).toBeVisible();
await expect(page.getByRole('alert')).toHaveText('Success');

// BAD — no retry
const text = await page.textContent('.msg');
expect(text).toBe('Success');

Never use:

  • page.waitForTimeout()
  • page.$(selector) or page.$$(selector)
  • Bare CSS selectors unless absolutely necessary
  • page.evaluate() for things locators can do

Always include:

  • Descriptive test names that explain the behavior
  • Error/edge case tests alongside happy path
  • Proper await on every Playwright call
  • baseURL-relative navigation (page.goto('/') not page.goto('http://...'))
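Putting these rules together, a generated login spec might look like this. The route, labels, and messages are illustrative, not taken from any real app:

```typescript
import { test, expect } from '@playwright/test';

test.describe('Login', () => {
  test('should log in with valid email and password', async ({ page }) => {
    // Arrange: baseURL-relative navigation
    await page.goto('/login');

    // Act: semantic locators, awaited on every call
    await page.getByLabel('Email').fill('user@example.com');
    await page.getByLabel('Password').fill('correct-horse');
    await page.getByRole('button', { name: 'Log in' }).click();

    // Assert: web-first, auto-retrying
    await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
  });

  // Edge case alongside the happy path
  test('should show an error for invalid credentials', async ({ page }) => {
    await page.goto('/login');
    await page.getByLabel('Email').fill('user@example.com');
    await page.getByLabel('Password').fill('wrong');
    await page.getByRole('button', { name: 'Log in' }).click();
    await expect(page.getByRole('alert')).toContainText('Invalid');
  });
});
```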

5. Match Project Conventions

  • If project uses TypeScript → generate .spec.ts
  • If project uses JavaScript → generate .spec.js with require() imports
  • If project has page objects → use them instead of inline locators
  • If project has custom fixtures → import and use them
  • If project has a test data directory → create test data files there

6. Generate Supporting Files (If Needed)

  • Page object: If the test touches 5+ unique locators on one page, create a page object
  • Fixture: If the test needs shared setup (auth, data), create or extend a fixture
  • Test data: If the test uses structured data, create a JSON file in test-data/
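A generated page object might be sketched like this. `LoginPage` and its locators are hypothetical — the real names come from the app under test:

```typescript
import { type Page, type Locator } from '@playwright/test';

// Hypothetical page object, created only when a test touches enough
// locators on one page (5+) to justify the indirection.
export class LoginPage {
  readonly email: Locator;
  readonly password: Locator;
  readonly submit: Locator;

  constructor(readonly page: Page) {
    this.email = page.getByLabel('Email');
    this.password = page.getByLabel('Password');
    this.submit = page.getByRole('button', { name: 'Log in' });
  }

  async goto() {
    await this.page.goto('/login');
  }

  async login(email: string, password: string) {
    await this.email.fill(email);
    await this.password.fill(password);
    await this.submit.click();
  }
}
```

Tests then read as `await loginPage.login('user@example.com', 'secret')` instead of repeating locator chains, which keeps specs short and centralizes selector changes.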

7. Verify

Run the generated test:

npx playwright test <generated-file> --reporter=list

If it fails:

  1. Read the error
  2. Fix the test (not the app)
  3. Run again
  4. If it's an app issue, report it to the user

Output

  • Generated test file(s) with path
  • Any supporting files created (page objects, fixtures, data)
  • Test run result
  • Coverage note: what behaviors are now tested
