tag: tuning

7 indexed skills

skills (7)

vector-index-tuning

wshobson/agents · AI/ML

1

Optimize vector index performance across latency, recall, and memory tradeoffs.

Covers HNSW parameter tuning (M, efConstruction, efSearch) with benchmarking templates and automated recommendation logic based on vector count and target recall.

Includes quantization strategies: scalar INT8, product quantization, binary quantization, and FP16 compression with memory estimation tools.

Provides Qdrant collection configuration templates optimized for three scenarios: recall-focused, speed-focused, …
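As a rough illustration of what such a template involves, here is a minimal sketch of a recall-focused Qdrant collection using the qdrant-client Python library. The collection name, vector size, and parameter values (m=32, ef_construct=256, hnsw_ef=128) are illustrative assumptions, not values taken from this skill.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import (
    Distance, HnswConfigDiff, ScalarQuantization,
    ScalarQuantizationConfig, ScalarType, SearchParams, VectorParams,
)

client = QdrantClient(url="http://localhost:6333")  # assumed local instance

# Recall-focused preset: larger graph connectivity (m) and build-time
# beam width (ef_construct) trade memory and indexing time for recall.
client.create_collection(
    collection_name="docs",  # hypothetical collection name
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
    hnsw_config=HnswConfigDiff(m=32, ef_construct=256),
    # Scalar INT8 quantization cuts vector memory roughly 4x vs FP32.
    quantization_config=ScalarQuantization(
        scalar=ScalarQuantizationConfig(type=ScalarType.INT8, always_ram=True)
    ),
)

# Search-time beam width (hnsw_ef): raise it to recover recall lost to
# quantization, at the cost of query latency.
hits = client.search(
    collection_name="docs",
    query_vector=[0.0] * 768,  # placeholder query vector
    limit=10,
    search_params=SearchParams(hnsw_ef=128),
)
```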

llm-tuning-patterns

parcadei/continuous-claude-v3 · AI/ML

0

Evidence-based patterns for configuring LLM parameters, drawing on APOLLO and Godel-Prover research.
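For flavor, a minimal sketch of the general pattern: mapping task types to decoding parameters. The preset names and values below are illustrative placeholders, not the parameters recommended by this skill or the cited papers.

```python
# Hypothetical presets illustrating task-dependent decoding parameters;
# values are placeholders, not findings from APOLLO or Godel-Prover.
SAMPLING_PRESETS = {
    "extraction":   {"temperature": 0.0, "top_p": 1.0},   # deterministic output
    "proof_search": {"temperature": 0.7, "top_p": 0.95},  # diverse candidates
    "brainstorm":   {"temperature": 1.0, "top_p": 0.9},
}

def decoding_params(task: str) -> dict:
    """Return decoding parameters for a task, defaulting to greedy decoding."""
    return dict(SAMPLING_PRESETS.get(task, {"temperature": 0.0, "top_p": 1.0}))
```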

vector-index-tuning

sickn33/antigravity-awesome-skills · AI/ML

0

Guide to optimizing vector indexes for production performance.

model-hyperparameter-tuning

aj-geddes/useful-ai-prompts · Productivity

0

Hyperparameter tuning is the process of systematically searching for the best combination of model configuration parameters to maximize performance on validation data.
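A minimal sketch of what this looks like in practice, using scikit-learn's GridSearchCV; the model, grid, and dataset are arbitrary choices for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Score each parameter combination by 5-fold cross-validated accuracy
# and keep the best one.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [None, 4, 8]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```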

fine-tuning-with-trl

davila7/claude-code-templates · Productivity

0

TRL provides post-training methods for aligning language models with human preferences.
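As an illustrative sketch (not this skill's own example), preference alignment with TRL's DPOTrainer might look like the following. The model and dataset names are assumptions, and the trainer's exact signature has varied across TRL versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2-0.5B-Instruct"  # small model, assumed available
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# A preference dataset with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", per_device_train_batch_size=2),
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` in older TRL releases
)
trainer.train()
```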

peft-fine-tuning

davila7/claude-code-templates · Productivity

0

Fine-tune LLMs by training <1% of parameters using LoRA, QLoRA, and 25+ adapter methods.
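A minimal sketch of the idea with Hugging Face PEFT, assuming GPT-2 as a stand-in base model; the rank and scaling values are illustrative.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

# Wrap the base model so only the low-rank adapter matrices are trained.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the update matrices
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total
```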

fine-tuning-expert

jeffallan/claude-skills · Productivity

0

Expert guidance for fine-tuning LLMs with parameter-efficient methods and production optimization.

Covers LoRA, QLoRA, and full fine-tuning workflows with Hugging Face PEFT, including dataset validation, hyperparameter configuration, and adapter merging for deployment.

Provides a complete minimal working example with LoRA setup, training loop, and quantization variants for memory-constrained environments.

Includes a five-stage workflow: dataset preparation, method selection, training with …
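As a hedged sketch of the memory-constrained variant (QLoRA-style 4-bit loading plus LoRA adapters), assuming the bitsandbytes backend and an arbitrary small base model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 so it fits on a memory-constrained GPU.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # stand-in base model
    quantization_config=bnb,
    device_map="auto",
)

# Prepare the quantized model for training (casts norms, enables
# gradient checkpointing by default).
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; only these small matrices receive gradients.
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32))
model.print_trainable_parameters()
```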