Generate Synthetic Data
Generate diverse, realistic test inputs that cover the failure space of an LLM pipeline.
Prerequisites
Before generating synthetic data, identify where the pipeline is likely to fail. Ask the user about known failure-prone areas, review existing user feedback, or form hypotheses from available traces. Dimensions (Step 1) must target anticipated failures, not arbitrary variation.
Core Process
Step 1: Define Dimensions
Dimensions are axes of variation specific to your application. Choose dimensions based on where you expect failures.
Dimension 1: [Name] — [What it captures]
Values: [value_a, value_b, value_c, ...]
Dimension 2: [Name] — [What it captures]
Values: [value_a, value_b, value_c, ...]
Dimension 3: [Name] — [What it captures]
Values: [value_a, value_b, value_c, ...]
Example for a real estate assistant:
Feature: what task the user wants
Values: [property search, scheduling, email drafting]
Client Persona: who the user serves
Values: [first-time buyer, investor, luxury buyer]
Scenario Type: query clarity
Values: [well-specified, ambiguous, out-of-scope]
Start with 3 dimensions. Add more only if initial traces reveal failure patterns along new axes.
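In code, the chosen dimensions can live in one small structure that later steps read from. A minimal sketch using the real estate example above (the keys and values are illustrative, not required by this skill):

```python
# Dimensions for the real estate assistant example; swap in your own axes and values.
DIMENSIONS = {
    "feature": ["property search", "scheduling", "email drafting"],
    "client_persona": ["first-time buyer", "investor", "luxury buyer"],
    "scenario_type": ["well-specified", "ambiguous", "out-of-scope"],
}
```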
Step 2: Draft 20 Tuples with the User
A tuple is one combination of dimension values defining a specific test case. Present 20 draft tuples to the user and iterate until they confirm the tuples reflect realistic scenarios. The user's domain knowledge is essential here — they know which combinations actually occur and which are unrealistic.
(Feature: Property Search, Persona: Investor, Scenario: Ambiguous)
(Feature: Scheduling, Persona: First-time Buyer, Scenario: Well-specified)
(Feature: Email Drafting, Persona: Luxury Buyer, Scenario: Out-of-scope)
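One way to produce the 20 draft tuples is to enumerate the full cross product and sample from it, then put the sample in front of the user. A sketch, assuming the `DIMENSIONS` dict from the Step 1 sketch:

```python
import random
from itertools import product

# DIMENSIONS is the dict from the Step 1 sketch above.
all_tuples = list(product(*DIMENSIONS.values()))  # 3 x 3 x 3 = 27 combinations
draft_tuples = random.sample(all_tuples, k=20)    # 20 candidates for user review

for feature, persona, scenario in draft_tuples:
    print(f"(Feature: {feature}, Persona: {persona}, Scenario: {scenario})")
```

Random sampling is only a starting point; the user review in this step is what removes combinations that never occur in practice.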
Step 3: Generate More Tuples with an LLM
Generate 10 random combinations of ({dim1}, {dim2}, {dim3})
for a {your application description}.
The dimensions are:
{dim1}: {description}. Possible values: {values}
{dim2}: {description}. Possible values: {values}
{dim3}: {description}. Possible values: {values}
Output each tuple in the format: ({dim1}, {dim2}, {dim3})
Avoid duplicates. Vary values across dimensions.
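A sketch of sending the Step 3 prompt to an LLM. The OpenAI client and model name are assumptions; any chat-completion API can fill the same role, and the application description is paraphrased from the real estate example above:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tuple_prompt = """Generate 10 random combinations of (feature, client_persona, scenario_type)
for a real estate assistant that searches listings, schedules showings, and drafts emails.
The dimensions are:
feature: what task the user wants. Possible values: property search, scheduling, email drafting
client_persona: who the user serves. Possible values: first-time buyer, investor, luxury buyer
scenario_type: query clarity. Possible values: well-specified, ambiguous, out-of-scope
Output each tuple in the format: (feature, client_persona, scenario_type)
Avoid duplicates. Vary values across dimensions."""

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is an assumption
    messages=[{"role": "user", "content": tuple_prompt}],
    temperature=1.0,      # some temperature helps vary the combinations
)
llm_tuples = resp.choices[0].message.content.splitlines()
```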
Step 4: Convert Each Tuple to a Natural Language Query
Use a separate prompt for this step. Single-step generation (tuples + queries together) produces repetitive phrasing.
We are generating synthetic user queries for a {your application}.
{Brief description of what it does.}
Given:
{dim1}: {value}
{dim2}: {value}
{dim3}: {value}
Write a realistic query that a user might enter. The query should
reflect the specified persona and scenario characteristics.
Example: "{one of your hand-written examples}"
Now generate a new query.
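A sketch of the per-tuple query generation as its own LLM call, which is what keeps phrasing varied. The client, model, and the hand-written example query are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()

QUERY_PROMPT = """We are generating synthetic user queries for a real estate assistant.
It helps agents search listings, schedule showings, and draft client emails.
Given:
feature: {feature}
client_persona: {persona}
scenario_type: {scenario}
Write a realistic query that a user might enter. The query should
reflect the specified persona and scenario characteristics.
Example: "Can you pull up 3-bed listings under $450k near downtown?"
Now generate a new query."""

# Tuples approved in Step 2 plus those generated in Step 3 (shown here with the
# example tuples from Step 2).
approved_tuples = [
    ("property search", "investor", "ambiguous"),
    ("scheduling", "first-time buyer", "well-specified"),
    ("email drafting", "luxury buyer", "out-of-scope"),
]

queries = []
for feature, persona, scenario in approved_tuples:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": QUERY_PROMPT.format(
            feature=feature, persona=persona, scenario=scenario)}],
        temperature=1.0,
    )
    queries.append(resp.choices[0].message.content.strip())
```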
Step 5: Filter for Quality
Review generated queries. Discard and regenerate when:
- Phrasing is awkward or unrealistic
- Content doesn't match the tuple's intent
- Queries are too similar to each other
Optional: use an LLM to rate realism on a 1-5 scale and discard queries that score below 3.
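The optional realism check can be a small LLM-judge call that returns a single integer. A sketch, again assuming an OpenAI client; the rubric wording is illustrative:

```python
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """Rate how realistic the following user query is for a real estate assistant,
from 1 (clearly synthetic or nonsensical) to 5 (indistinguishable from a real user).
Respond with a single integer only.

Query: {query}"""

def realism_score(query: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(query=query)}],
        temperature=0.0,  # deterministic scoring
    )
    try:
        return int(resp.choices[0].message.content.strip())
    except ValueError:
        return 0          # unparseable rating: treat as a discard

# 'queries' is the list produced in Step 4; keep only scores of 3 and above.
kept = [q for q in queries if realism_score(q) >= 3]
```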
Step 6: Run Queries Through the Pipeline
Execute all queries through the full LLM pipeline. Capture complete traces: input, all intermediate steps, tool calls, retrieved docs, final output.
Target: ~100 high-quality, diverse traces. This is a rough heuristic for reaching saturation (where new traces stop revealing new failure categories). The number depends on system complexity.
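A sketch of the trace-capture loop. `run_pipeline` is a hypothetical stand-in for your own pipeline entry point; the point is that every trace records the input together with each intermediate artifact:

```python
import json

def run_pipeline(query: str) -> dict:
    """Hypothetical: invoke your LLM pipeline and return intermediate steps,
    tool calls, retrieved documents, and the final output as a dict."""
    raise NotImplementedError

# 'kept' is the filtered query list from Step 5; the output path is an assumption.
with open("traces.jsonl", "w") as f:
    for query in kept:
        trace = {"input": query, **run_pipeline(query)}
        f.write(json.dumps(trace) + "\n")
```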
Sampling Real User Data
When you have real queries available, don't sample randomly. Use stratified sampling:
- Identify high-variance dimensions — read through queries and find ways they differ (length, topic, complexity, presence of constraints).
- Assign labels — label small sets by hand with the user; for large sets, use K-means clustering on query embeddings.
- Sample from each group — ensures coverage across query types, not just the most common ones.
When both real and synthetic data are available, use synthetic data to fill gaps in underrepresented query types.
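For the large-set case, a hedged sketch of stratified sampling via K-means over query embeddings; the embedding model, cluster count, and file name are assumptions, not part of this skill:

```python
import random
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer

real_queries = open("real_queries.txt").read().splitlines()  # your real user queries

embedder = SentenceTransformer("all-MiniLM-L6-v2")           # embedding model is an assumption
embeddings = embedder.encode(real_queries)

k = 8                                                        # tune to the variance you observe
labels = KMeans(n_clusters=k, random_state=0).fit_predict(embeddings)

per_cluster = 100 // k                                       # aim for roughly 100 traces total
sample = []
for cluster in range(k):
    members = [q for q, lbl in zip(real_queries, labels) if lbl == cluster]
    sample.extend(random.sample(members, min(per_cluster, len(members))))
```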
Anti-Patterns
- Unstructured generation. Prompting "give me test queries" without the dimension/tuple structure produces generic, repetitive, happy-path examples.
- Single-step generation. Generating tuples and queries in one prompt produces less diverse results than the two-step separation.
- Arbitrary dimensions. Dimensions that don't target failure-prone regions waste test budget.
- Skipping user review of tuples. Without the user validating tuples first, you can't judge whether LLM-generated tuples are realistic.
- Synthetic data when no one can judge realism. If no one can judge whether a synthetic trace is realistic, use real data instead.
- Synthetic data for complex domain-specific content (legal filings, medical records) where LLMs miss structural nuance.
- Synthetic data for low-resource languages or dialects where LLM-generated samples are unrealistic.