AI/ML

llm-tuning-patterns

parcadei/continuous-claude-v3 · updated Apr 8, 2026

$ npx skills add https://github.com/parcadei/continuous-claude-v3 --skill llm-tuning-patterns
Summary

Evidence-based patterns for configuring LLM parameters, based on APOLLO and Godel-Prover research.

skill.md

LLM Tuning Patterns

Evidence-based patterns for configuring LLM parameters, based on APOLLO and Godel-Prover research.

Pattern

Different tasks require different LLM configurations. Use these evidence-based settings.

Theorem Proving / Formal Reasoning

Based on APOLLO parity analysis:

| Parameter   | Value | Rationale                                |
|-------------|-------|------------------------------------------|
| max_tokens  | 4096  | Proofs need space for chain-of-thought   |
| temperature | 0.6   | Higher creativity for tactic exploration |
| top_p       | 0.95  | Allow diverse proof paths                |

Proof Plan Prompt

Always request a proof plan before tactics:

```
Given the theorem to prove:
[theorem statement]

First, write a high-level proof plan explaining your approach.
Then, suggest Lean 4 tactics to implement each step.
```

The proof plan (chain-of-thought) significantly improves tactic quality.
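The proof-plan-first pattern can be sketched as a small prompt builder. The template text mirrors the prompt shown above; the function name and structure are illustrative helpers, not part of any real library:

```python
# Hypothetical helper implementing the proof-plan-first prompt pattern.
# The template wording is taken from this skill; the function itself is
# an illustrative sketch.

PROOF_PLAN_TEMPLATE = """Given the theorem to prove:
{theorem}

First, write a high-level proof plan explaining your approach.
Then, suggest Lean 4 tactics to implement each step."""


def build_proof_prompt(theorem: str) -> str:
    """Wrap a theorem statement in the plan-before-tactics prompt."""
    return PROOF_PLAN_TEMPLATE.format(theorem=theorem.strip())
```

The model's plan then serves as chain-of-thought context for the tactic suggestions that follow.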

Parallel Sampling

For hard proofs, use parallel sampling:

  • Generate N=8-32 candidate proof attempts
  • Use best-of-N selection
  • Each sample at temperature 0.6-0.8
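The sampling loop above can be sketched as a best-of-N selector. `sample_proof` stands in for a real model call and `score` for a verifier (e.g. whether Lean accepts the proof); both are assumptions for illustration:

```python
import random


def best_of_n(sample_proof, score, n: int = 8):
    """Draw n independent candidates and keep the highest-scoring one.

    sample_proof: zero-argument callable producing one candidate.
    score: callable mapping a candidate to a comparable score.
    """
    candidates = [sample_proof() for _ in range(n)]
    return max(candidates, key=score)


# Usage with toy stand-ins (a seeded RNG instead of a model):
rng = random.Random(0)
pick = best_of_n(lambda: rng.random(), score=lambda x: x, n=16)
```

In practice the N samples are generated in parallel, each at temperature 0.6-0.8, and scored by the proof checker rather than a numeric heuristic.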

Code Generation

| Parameter   | Value   | Rationale                     |
|-------------|---------|-------------------------------|
| max_tokens  | 2048    | Sufficient for most functions |
| temperature | 0.2-0.4 | Prefer deterministic output   |

Creative / Exploration Tasks

| Parameter   | Value   | Rationale             |
|-------------|---------|-----------------------|
| max_tokens  | 4096    | Space for exploration |
| temperature | 0.8-1.0 | Maximum creativity    |
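The three task profiles can be collected into a small dispatcher. The dictionary values come from the tables in this skill; the function name and the choice of midpoints for the ranged values are illustrative assumptions:

```python
# Hypothetical task-to-config dispatcher. Parameter values follow this
# skill's tables; midpoints are used where a range is given.

CONFIGS = {
    "proving":  {"max_tokens": 4096, "temperature": 0.6, "top_p": 0.95},
    "code":     {"max_tokens": 2048, "temperature": 0.3},  # midpoint of 0.2-0.4
    "creative": {"max_tokens": 4096, "temperature": 0.9},  # midpoint of 0.8-1.0
}


def config_for(task: str) -> dict:
    """Return a fresh copy of the recommended sampling parameters."""
    return dict(CONFIGS[task])  # copy so callers can tweak without mutating defaults
```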

Anti-Patterns

  • Too few tokens for proofs: a 512-token budget truncates chain-of-thought
  • Too low a temperature for proofs: 0.2 misses creative tactic paths
  • No proof plan: jumping straight to tactics without planning reduces the success rate
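The first two anti-patterns can be caught mechanically before a proving run. The thresholds below come from this skill's tables; the function is a hypothetical sanity-check helper:

```python
# Illustrative guard against the parameter anti-patterns above.
# Thresholds (4096 tokens, temperature 0.6) come from this skill's
# theorem-proving table; the helper itself is a sketch.

def check_proving_config(cfg: dict) -> list[str]:
    """Return warnings for proving configs that hit known anti-patterns."""
    warnings = []
    if cfg.get("max_tokens", 0) < 4096:
        warnings.append("max_tokens may truncate chain-of-thought")
    if cfg.get("temperature", 0.0) < 0.6:
        warnings.append("temperature may miss creative tactic paths")
    return warnings
```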

Source Sessions

  • This session: APOLLO parity - increased max_tokens 512->4096, temp 0.2->0.6
  • This session: Added proof plan prompt for chain-of-thought before tactics
General reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra · Oct 10, 2024

    llm-tuning-patterns is among the better-maintained entries we tried; worth keeping pinned for repeat workflows.

  • Piyush G · Sep 9, 2024

    Keeps context tight: llm-tuning-patterns is the kind of skill you can hand to a new teammate without a long onboarding doc.

  • Chaitanya Patil · Aug 8, 2024

    Registry listing for llm-tuning-patterns matched our evaluation — installs cleanly and behaves as described in the markdown.

  • Sakshi Patil · Jul 7, 2024

    llm-tuning-patterns reduced setup friction for our internal harness; good balance of opinion and flexibility.

  • Ganesh Mohane · Jun 6, 2024

    I recommend llm-tuning-patterns for anyone iterating fast on agent tooling; clear intent and a small, reviewable surface area.

  • Oshnikdeep · May 5, 2024

    Useful defaults in llm-tuning-patterns — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.

  • Dhruvi Jain · Apr 4, 2024

    llm-tuning-patterns has been reliable in day-to-day use. Documentation quality is above average for community skills.

  • Rahul Santra · Mar 3, 2024

    Solid pick for teams standardizing on skills: llm-tuning-patterns is focused, and the summary matches what you get after install.

  • Pratham Ware · Feb 2, 2024

    We added llm-tuning-patterns from the explainx registry; install was straightforward and the SKILL.md answered most questions upfront.

  • Yash Thakker · Jan 1, 2024

    llm-tuning-patterns fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.