parcadei/continuous-claude-v3 · updated Apr 8, 2026
# LLM Tuning Patterns
Evidence-based patterns for configuring LLM parameters, based on APOLLO and Godel-Prover research.
## Pattern
Different tasks require different LLM configurations. Use these evidence-based settings.
## Theorem Proving / Formal Reasoning

Based on the APOLLO parity analysis:
| Parameter | Value | Rationale |
|---|---|---|
| max_tokens | 4096 | Proofs need space for chain-of-thought |
| temperature | 0.6 | Higher creativity for tactic exploration |
| top_p | 0.95 | Allow diverse proof paths |
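As a minimal sketch, the table above can be expressed as a request configuration. The dictionary name is hypothetical; the parameter keys mirror common LLM API conventions (e.g. OpenAI-style clients), so adapt them to your client library.

```python
# Hypothetical sampling configuration for theorem proving,
# following the APOLLO-derived values in the table above.
THEOREM_PROVING_CONFIG = {
    "max_tokens": 4096,   # proofs need space for chain-of-thought
    "temperature": 0.6,   # higher creativity for tactic exploration
    "top_p": 0.95,        # allow diverse proof paths
}
```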
### Proof Plan Prompt

Always request a proof plan before tactics:

```
Given the theorem to prove:

[theorem statement]

First, write a high-level proof plan explaining your approach.
Then, suggest Lean 4 tactics to implement each step.
```

The proof plan (chain-of-thought) significantly improves tactic quality.
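The template above can be wrapped in a small helper. `build_proof_prompt` is a hypothetical function name, shown only to illustrate how the theorem statement slots into the template.

```python
def build_proof_prompt(theorem_statement: str) -> str:
    """Fill the proof-plan prompt template with a theorem statement.

    The wording follows the template in this section; the function
    name itself is illustrative, not part of the pattern.
    """
    return (
        "Given the theorem to prove:\n\n"
        f"{theorem_statement}\n\n"
        "First, write a high-level proof plan explaining your approach.\n"
        "Then, suggest Lean 4 tactics to implement each step."
    )
```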
### Parallel Sampling
For hard proofs, use parallel sampling:
- Generate N=8-32 candidate proof attempts
- Use best-of-N selection
- Each sample at temperature 0.6-0.8
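The loop above can be sketched as a generic best-of-N selector. `generate` and `score` are hypothetical callbacks: `generate` would draw one candidate proof at temperature 0.6-0.8, and `score` would rate it (e.g. highest for a proof that type-checks).

```python
def best_of_n(generate, score, n=8):
    """Best-of-N selection: draw n candidates and keep the best.

    generate: callable returning one sampled candidate (assumption:
              each call samples independently, e.g. at temp 0.6-0.8)
    score:    callable rating a candidate; higher is better
    n:        number of parallel samples (8-32 for hard proofs)
    """
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)
```

In practice the N samples can be issued concurrently, since each attempt is independent of the others.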
## Code Generation
| Parameter | Value | Rationale |
|---|---|---|
| max_tokens | 2048 | Sufficient for most functions |
| temperature | 0.2-0.4 | Prefer deterministic output |
## Creative / Exploration Tasks
| Parameter | Value | Rationale |
|---|---|---|
| max_tokens | 4096 | Space for exploration |
| temperature | 0.8-1.0 | Maximum creativity |
## Anti-Patterns

- Too few tokens for proofs: a 512-token limit truncates the chain-of-thought
- Too low a temperature for proofs: 0.2 misses creative tactic paths
- No proof plan: jumping straight to tactics without planning reduces the success rate
## Source Sessions

- This session: APOLLO parity; increased max_tokens from 512 to 4096 and temperature from 0.2 to 0.6
- This session: added the proof plan prompt to elicit chain-of-thought before tactics