guidance

davila7/claude-code-templates · updated Apr 8, 2026

$ npx skills add https://github.com/davila7/claude-code-templates --skill guidance
skill.md

Guidance: Constrained LLM Generation

When to Use This Skill

Use Guidance when you need to:

  • Control LLM output syntax with regex or grammars
  • Guarantee valid JSON/XML/code generation
  • Reduce latency and cost compared with multi-call prompting
  • Enforce structured formats (dates, emails, IDs, etc.)
  • Build multi-step workflows with Pythonic control flow
  • Prevent invalid outputs through grammatical constraints

GitHub Stars: 18,000+ | From: Microsoft Research

Installation

# Base installation
pip install guidance

# With specific backends
pip install guidance[transformers]  # Hugging Face models
pip install guidance[llama_cpp]     # llama.cpp models

Quick Start

Basic Example: Structured Generation

from guidance import models, gen

# Load model (supports OpenAI, Transformers, llama.cpp)
lm = models.OpenAI("gpt-4")

# Generate with constraints
result = lm + "The capital of France is " + gen("capital", max_tokens=5)

print(result["capital"])  # "Paris"

With Anthropic Claude

from guidance import models, gen, system, user, assistant

# Configure Claude
lm = models.Anthropic("claude-sonnet-4-5-20250929")

# Use context managers for chat format
with system():
    lm += "You are a helpful assistant."

with user():
    lm += "What is the capital of France?"

with assistant():
    lm += gen(max_tokens=20)

Core Concepts

1. Context Managers

Guidance uses Pythonic context managers for chat-style interactions.

from guidance import models, system, user, assistant, gen

lm = models.Anthropic("claude-sonnet-4-5-20250929")

# System message
with system():
    lm += "You are a JSON generation expert."

# User message
with user():
    lm += "Generate a person object with name and age."

# Assistant response
with assistant():
    lm += gen("response", max_tokens=100)

print(lm["response"])

Benefits:

  • Natural chat flow
  • Clear role separation
  • Easy to read and maintain

2. Constrained Generation

Guidance ensures outputs match specified patterns using regex or grammars.

Regex Constraints

from guidance import models, gen

lm = models.Transformers("gpt2")  # regex constraints require a local backend (transformers / llama.cpp)

# Constrain to valid email format
lm += "Email: " + gen("email", regex=r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")

# Constrain to date format (YYYY-MM-DD)
lm += "Date: " + gen("date", regex=r"\d{4}-\d{2}-\d{2}")

# Constrain to phone number
lm += "Phone: " + gen("phone", regex=r"\d{3}-\d{3}-\d{4}")

print(lm["email"])  # Guaranteed valid email
print(lm["date"])   # Guaranteed YYYY-MM-DD format

How it works:

  • Regex converted to grammar at token level
  • Invalid tokens filtered during generation
  • Model can only produce matching outputs
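The filtering step can be sketched without a model. This is an illustrative hand-rolled version for the fixed pattern `\d{4}-\d{2}-\d{2}` only — the `TEMPLATE` table and toy vocabulary are assumptions for the sketch; guidance compiles arbitrary regexes into token-level masks internally.

```python
import random
import re

# Position table for \d{4}-\d{2}-\d{2}: d = any digit, '-' = literal dash
TEMPLATE = "dddd-dd-dd"

def allowed(prefix_len, token):
    """Can `token` legally extend a partial match of length `prefix_len`?"""
    for i, ch in enumerate(token):
        pos = prefix_len + i
        if pos >= len(TEMPLATE):
            return False
        if TEMPLATE[pos] == "d" and not ch.isdigit():
            return False
        if TEMPLATE[pos] == "-" and ch != "-":
            return False
    return True

def constrained_sample(vocab, rng):
    out = ""
    while len(out) < len(TEMPLATE):
        # Filter the vocabulary down to tokens that keep the match alive
        candidates = [t for t in vocab if allowed(len(out), t)]
        out += rng.choice(candidates)  # stand-in for the model's sampler
    return out

vocab = ["19", "20", "99", "3", "7", "-0", "-1", "ab", " "]
date = constrained_sample(vocab, random.Random(0))
print(date)
assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", date)
```

However the sampler chooses, junk tokens like `"ab"` are masked out at every step, so the output always matches the pattern.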

Selection Constraints

from guidance import models, gen, select

lm = models.Transformers("gpt2")  # select() is enforced at the token level, so it needs a local backend

# Constrain to specific choices
lm += "Sentiment: " + select(["positive", "negative", "neutral"], name="sentiment")

# Multiple-choice selection
lm += "Best answer: " + select(
    ["A) Paris", "B) London", "C) Berlin", "D) Madrid"],
    name="answer"
)

print(lm["sentiment"])  # One of: positive, negative, neutral
print(lm["answer"])     # Full option string, e.g. "A) Paris"
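When token masking is unavailable, a select-style choice can be emulated by scoring each whole option and keeping the best. This is a sketch of that fallback idea, not guidance's internal code; `fake_logprobs` stands in for real model scores.

```python
# Emulate select() by scoring complete options
def select_option(options, logprob):
    """Return the option the model scores highest."""
    return max(options, key=logprob)

# Stubbed log-probabilities (assumption for the sketch)
fake_logprobs = {"positive": -0.2, "negative": -2.1, "neutral": -1.3}
sentiment = select_option(list(fake_logprobs), fake_logprobs.__getitem__)
print(sentiment)  # "positive"
```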

3. Token Healing

Guidance automatically "heals" token boundaries between prompt and generation.

Problem: Tokenization creates unnatural boundaries.

# Without token healing
prompt = "The capital of France is "
# Last token: " is "
# First generated token might be " Par" (with leading space)
# Result: "The capital of France is  Paris" (double space!)

Solution: Guidance backs up one token and regenerates.

from guidance import models, gen

lm = models.Transformers("gpt2")  # token healing operates on raw tokens, so it needs a local backend

# Token healing enabled by default
lm += "The capital of France is " + gen("capital", max_tokens=5)
# Result: "The capital of France is Paris" (correct spacing)

Benefits:

  • Natural text boundaries
  • No awkward spacing issues
  • Better model performance (sees natural token sequences)
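The healing mechanism itself can be sketched with a toy vocabulary (the tokens below are hypothetical; real tokenizers differ):

```python
# Toy token-healing sketch
VOCAB = [" is ", " is Paris", " Paris", "Paris", " the"]

def token_heal(prompt_tokens, vocab):
    # Drop the last prompt token, then only allow first generated tokens
    # that re-spell its text, so generation crosses the boundary naturally
    *kept, last = prompt_tokens
    return kept, [t for t in vocab if t.startswith(last)]

prompt = ["The capital of France", " is "]

# Naive continuation: the model's most natural next token is " Paris"
naive = "".join(prompt) + " Paris"
print(repr(naive))  # double space before "Paris"

kept, allowed = token_heal(prompt, VOCAB)
healed = "".join(kept) + max(allowed, key=len)  # stand-in for the model's pick
print(repr(healed))  # single space restored
```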

4. Grammar-Based Generation

Define complex structures using context-free grammars.

from guidance import models, gen

lm = models.Transformers("gpt2")  # grammar constraints require a local backend

# Each gen() fragment is itself a grammar; concatenating fragments and
# literal strings composes them into a larger grammar
lm += (
    '{\n'
    '    "name": "' + gen("name", regex=r"[A-Za-z ]+", max_tokens=20) + '",\n'
    '    "age": ' + gen("age", regex=r"[0-9]+", max_tokens=3) + ',\n'
    '    "email": "' + gen("email", regex=r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}", max_tokens=50) + '"\n'
    '}'
)

print(lm["name"], lm["age"], lm["email"])  # each field is guaranteed to match its pattern

Use cases:

  • Complex structured outputs
  • Nested data structures
  • Programming language syntax
  • Domain-specific languages
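Why grammars guarantee validity can be shown model-free: expand nonterminals top-down and only valid strings can ever be derived. The nonterminals below are assumptions for the sketch; guidance applies the same idea at the token level during decoding.

```python
import json
import random

# PERSON -> '{"name": "' NAME '", "age": ' AGE '}'
def person(rng):
    return '{"name": "%s", "age": %s}' % (name(rng), age(rng))

def name(rng):  # NAME -> one or more letters
    return "".join(rng.choice("abcdefgh") for _ in range(rng.randint(3, 8)))

def age(rng):  # AGE -> integer in [1, 99]
    return rng.randint(1, 99)

sample = person(random.Random(1))
print(sample)
assert set(json.loads(sample)) == {"name", "age"}  # always parses
```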

5. Guidance Functions

Create reusable generation patterns with the @guidance decorator.

from guidance import guidance, gen, models

@guidance
def generate_person(lm):
    """Generate a person with name and age."""
    lm += "Name: " + gen("name", max_tokens=20, stop="\n")
    lm += "\nAge: " + gen("age", regex=r"[0-9]+", max_tokens=3)
    return lm

# Use the function
lm = models.Transformers("gpt2")  # the age regex constraint needs a local backend
lm = generate_person(lm)

print(lm["name"])
print(lm["age"])

Stateful Functions:

@guidance(stateless=False)
def react_agent(lm, question, tools, max_rounds=5):
    # Minimal sketch (the original example was truncated); tool dispatch omitted
    lm += f"Question: {question}\n"
    for _ in range(max_rounds):
        lm += "Thought: " + gen("thought", stop="\n") + "\n"
        lm += "Action: " + gen("action", stop="\n") + "\n"
    return lm
