prompt-caching
sickn33/antigravity-awesome-skills · updated Apr 8, 2026
Multiple-layer LLM caching strategies to reduce token costs and latency across prompt prefixes, responses, and semantic matches.
- ›Supports three caching approaches: Anthropic's native prompt caching for repeated prefixes, response caching for identical or similar queries, and Cache Augmented Generation (CAG) for pre-cached documents
- ›Includes cache invalidation patterns and guidance on structuring prompts for optimal caching performance
- ›Highlights critical anti-patterns: caching with high temperature, skipping cache invalidation, and caching everything indiscriminately
Prompt Caching
You're a caching specialist who has reduced LLM costs by 90% through strategic caching. You've implemented systems that cache at multiple levels: prompt prefixes, full responses, and semantic similarity matches.
You understand that LLM caching is different from traditional caching—prompts have prefixes that can be cached, responses vary with temperature, and semantic similarity often matters more than exact match.
Your core principles:
- Cache at the right level—prefix, response, or both
- K
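The semantic-similarity matching mentioned above can be sketched as a cosine-similarity lookup over query embeddings. The embedding function and the 0.92 threshold below are illustrative stand-ins, not values from this skill; a real system would plug in an embedding model and tune the threshold.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Return a cached response when a new query's embedding is close
    enough to a previously cached one (threshold is a tunable guess)."""

    def __init__(self, embed, threshold: float = 0.92):
        self.embed = embed            # callable: str -> list[float]
        self.threshold = threshold
        self._entries: list[tuple[list[float], str]] = []

    def get(self, query: str):
        qv = self.embed(query)
        best = max(self._entries, key=lambda e: cosine(qv, e[0]), default=None)
        if best and cosine(qv, best[0]) >= self.threshold:
            return best[1]
        return None

    def put(self, query: str, response: str):
        self._entries.append((self.embed(query), response))

# Toy embeddings so the sketch runs without a model.
toy = {"reset password": [1.0, 0.0], "password reset": [0.99, 0.05], "billing": [0.0, 1.0]}
cache = SemanticCache(embed=lambda q: toy[q])
cache.put("reset password", "Go to Settings > Security.")
assert cache.get("password reset") == "Go to Settings > Security."  # near-duplicate hit
assert cache.get("billing") is None                                  # unrelated query misses
```

The near-duplicate "password reset" hits because its cosine similarity to "reset password" is well above the threshold, while "billing" is orthogonal and misses.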
Capabilities
- prompt-cache
- response-cache
- kv-cache
- cag-patterns
- cache-invalidation
Patterns
Anthropic Prompt Caching
Use Claude's native prompt caching for repeated prefixes
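A minimal sketch of the native prompt-caching pattern, assuming the Anthropic Messages API's `cache_control` breakpoint: a large stable prefix (system prompt, docs, tool definitions) is marked cacheable, and only the user turn varies per call. It is shown as a plain request payload so the structure is clear without an API key; the model id is illustrative.

```python
# Mark the end of a large, stable prefix with a cache_control breakpoint
# so repeated calls can reuse the cached prefix instead of reprocessing it.
LONG_SYSTEM_PROMPT = "You are a support agent. " + "Policy text... " * 50

def build_cached_request(user_query: str) -> dict:
    """Build a Messages-API-style payload with a prompt-cache breakpoint."""
    return {
        "model": "claude-sonnet-4-5",  # illustrative; use your deployed model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": LONG_SYSTEM_PROMPT,
                # Everything up to and including this block is cacheable.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # Only the user turn varies between calls, keeping the prefix stable.
        "messages": [{"role": "user", "content": user_query}],
    }

req = build_cached_request("How do I reset my password?")
assert req["system"][0]["cache_control"] == {"type": "ephemeral"}
```

Note that any byte-level change before the breakpoint (reordered docs, a timestamp in the system prompt) produces a different prefix and defeats the cache.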
Response Caching
Cache full LLM responses for identical or similar queries
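An exact-match response cache can be sketched with a hash over the full request. This is a minimal stdlib-only version; the key must include every parameter that affects the output (model, prompt, temperature), and it is only safe at deterministic settings, which is why the high-temperature anti-pattern below matters.

```python
import hashlib
import json

class ResponseCache:
    """Exact-match response cache keyed on the full request.

    Only safe for deterministic settings (temperature=0); with sampling
    enabled, identical prompts legitimately produce different outputs.
    """

    def __init__(self):
        self._store: dict[str, str] = {}

    def _key(self, model: str, prompt: str, temperature: float) -> str:
        # Canonical JSON so equivalent requests hash identically.
        payload = json.dumps(
            {"model": model, "prompt": prompt, "temperature": temperature},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

    def get(self, model: str, prompt: str, temperature: float):
        return self._store.get(self._key(model, prompt, temperature))

    def put(self, model: str, prompt: str, temperature: float, response: str):
        self._store[self._key(model, prompt, temperature)] = response

cache = ResponseCache()
cache.put("claude", "What is 2+2?", 0.0, "4")
assert cache.get("claude", "What is 2+2?", 0.0) == "4"   # hit
assert cache.get("claude", "What is 2+3?", 0.0) is None  # miss
```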
Cache Augmented Generation (CAG)
Pre-cache documents in prompt instead of RAG retrieval
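The CAG pattern can be sketched as follows, under the assumption that the whole document set fits in the context window: load every document into a byte-identical prompt prefix once, so the provider's prefix cache (or a local KV cache) reuses it across queries instead of retrieving chunks per query as RAG would. Document names and contents here are illustrative.

```python
# Illustrative document set; in practice these would be loaded from disk.
DOCUMENTS = {
    "refund-policy.md": "Refunds are issued within 14 days...",
    "shipping-faq.md": "Standard shipping takes 3-5 business days...",
}

def build_cag_prompt(query: str) -> tuple[str, str]:
    """Return (stable_prefix, varying_suffix) for a CAG-style prompt."""
    # Sort for a byte-identical prefix on every call; any reordering
    # would invalidate the prefix cache.
    docs = "\n\n".join(
        f"<doc name='{name}'>\n{text}\n</doc>"
        for name, text in sorted(DOCUMENTS.items())
    )
    prefix = f"Answer using only these documents:\n\n{docs}\n\n"
    return prefix, f"Question: {query}"

p1, _ = build_cag_prompt("What is the refund window?")
p2, _ = build_cag_prompt("How long is shipping?")
assert p1 == p2  # identical prefix across queries -> cacheable
```

The trade-off versus RAG: no retrieval step or index to maintain, but the corpus must fit in context and every token of it is billed (cheaply, once cached) on each call.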
Anti-Patterns
❌ Caching with High Temperature
❌ No Cache Invalidation
❌ Caching Everything
⚠️ Sharp Edges
| Issue | Severity | Solution |
|---|---|---|
| Cache miss causes latency spike with additional overhead | high | Optimize for cache misses, not just hits |
| Cached responses become incorrect over time | high | Implement proper cache invalidation |
| Prompt caching doesn't work due to prefix changes | medium | Structure prompts so the stable prefix comes first |
Related Skills
Works well with: context-window-management, rag-implementation, conversation-memory
When to Use
Use this skill when you need the workflow described in the overview: cutting LLM token costs and latency with prompt-prefix caching, response caching, or CAG.
Discussion
Product Hunt–style comments (not star reviews)
No comments yet — start the thread.
Ratings
4.4 ★ · 63 reviews
- ★★★★★ Shikha Mishra · Dec 28, 2024
  prompt-caching fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.
- ★★★★★ Anaya Agarwal · Dec 24, 2024
  prompt-caching fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.
- ★★★★★ Yuki Martin · Dec 24, 2024
  prompt-caching fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.
- ★★★★★ Luis Thompson · Dec 20, 2024
  We added prompt-caching from the explainx registry; install was straightforward and the SKILL.md answered most questions upfront.
- ★★★★★ Diego Bhatia · Dec 12, 2024
  Useful defaults in prompt-caching — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.
- ★★★★★ Kofi Flores · Dec 8, 2024
  prompt-caching is among the better-maintained entries we tried; worth keeping pinned for repeat workflows.
- ★★★★★ Kofi Torres · Dec 4, 2024
  prompt-caching has been reliable in day-to-day use. Documentation quality is above average for community skills.
- ★★★★★ Arya Brown · Nov 27, 2024
  Solid pick for teams standardizing on skills: prompt-caching is focused, and the summary matches what you get after install.
- ★★★★★ Kofi Robinson · Nov 27, 2024
  Useful defaults in prompt-caching — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.
- ★★★★★ Kofi Martinez · Nov 23, 2024
  Keeps context tight: prompt-caching is the kind of skill you can hand to a new teammate without a long onboarding doc.
showing 1-10 of 63