davila7/claude-code-templates · updated Apr 8, 2026
MLflow: ML Lifecycle Management Platform
When to Use This Skill
Use MLflow when you need to:
- Track ML experiments with parameters, metrics, and artifacts
- Manage model registry with versioning and stage transitions
- Deploy models to various platforms (local, cloud, serving)
- Reproduce experiments with project configurations
- Compare model versions and performance metrics
- Collaborate on ML projects with team workflows
- Integrate with any ML framework (framework-agnostic)
Users: 20,000+ organizations | GitHub Stars: 23k+ | License: Apache 2.0
Installation
```bash
# Install MLflow
pip install mlflow

# Install with extras (SQLAlchemy, boto3, etc.)
pip install 'mlflow[extras]'

# Start the MLflow UI, then open http://localhost:5000
mlflow ui
```
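For shared or longer-lived setups, the UI can be replaced by a tracking server with a persistent backend store. A minimal sketch using `mlflow server`'s documented flags (the SQLite path, artifact root, and port here are illustrative choices, not requirements):

```shell
# Sketch: tracking server with a local SQLite backend store
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --default-artifact-root ./mlruns \
  --host 0.0.0.0 \
  --port 5000
```

Clients can then point at it by setting `MLFLOW_TRACKING_URI=http://localhost:5000` or calling `mlflow.set_tracking_uri("http://localhost:5000")` before logging.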
Quick Start
Basic Tracking
```python
import mlflow

# Start a run
with mlflow.start_run():
    # Log parameters
    mlflow.log_param("learning_rate", 0.001)
    mlflow.log_param("batch_size", 32)

    # Your training code
    model = train_model()

    # Log metrics
    mlflow.log_metric("train_loss", 0.15)
    mlflow.log_metric("val_accuracy", 0.92)

    # Log model
    mlflow.sklearn.log_model(model, "model")
```
Autologging (Automatic Tracking)
```python
import mlflow
from sklearn.ensemble import RandomForestClassifier

# Enable autologging
mlflow.autolog()

# Train (automatically logged)
model = RandomForestClassifier(n_estimators=100, max_depth=5)
model.fit(X_train, y_train)
# Metrics, parameters, and the model are logged automatically
```
Core Concepts
1. Experiments and Runs
- Experiment: logical container for related runs
- Run: a single execution of ML code, with its parameters, metrics, and artifacts
```python
import mlflow

# Create/set experiment
mlflow.set_experiment("my-experiment")

# Start a run
with mlflow.start_run(run_name="baseline-model"):
    # Log params
    mlflow.log_param("model", "ResNet50")
    mlflow.log_param("epochs", 10)

    # Train
    model = train()

    # Log metrics
    mlflow.log_metric("accuracy", 0.95)

    # Log model
    mlflow.pytorch.log_model(model, "model")

    # Run ID is generated automatically
    print(f"Run ID: {mlflow.active_run().info.run_id}")
```
2. Logging Parameters
```python
with mlflow.start_run():
    # Single parameter
    mlflow.log_param("learning_rate", 0.001)

    # Multiple parameters at once
    mlflow.log_params({
        "batch_size": 32,
        "epochs": 50,
        "optimizer": "Adam",
        "dropout": 0.2
    })

    # Nested parameters (as dict)
    config = {
        "model": {
            "architecture": "ResNet50",
            "pretrained": True
        },
        "training": {
            "lr": 0.001,
            "weight_decay": 1e-4
        }
    }

    # Log each top-level section as a stringified param
    for key, value in config.items():
        mlflow.log_param(key, str(value))
```
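An alternative to stringifying each top-level section is flattening the nested config into dotted keys, so every value becomes an individually searchable parameter in the UI. A small self-contained sketch (the `flatten_params` helper is ours, not an MLflow API):

```python
def flatten_params(config, parent_key="", sep="."):
    """Recursively flatten a nested dict into dotted keys."""
    items = {}
    for key, value in config.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten_params(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

config = {
    "model": {"architecture": "ResNet50", "pretrained": True},
    "training": {"lr": 0.001, "weight_decay": 1e-4},
}
flat = flatten_params(config)
# flat == {"model.architecture": "ResNet50", "model.pretrained": True,
#          "training.lr": 0.001, "training.weight_decay": 1e-4}
# mlflow.log_params(flat)  # each nested value logged as its own param
```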
3. Logging Metrics
```python
with mlflow.start_run():
    # Training loop
    for epoch in range(NUM_EPOCHS):
        train_loss = train_epoch()
        val_loss = validate()

        # Log metrics at each step
        mlflow.log_metric("train_loss", train_loss, step=epoch)
        mlflow.log_metric("val_loss", val_loss, step=epoch)

        # Log multiple metrics at once
        mlflow.log_metrics({
            "train_accuracy": train_acc,
            "val_accuracy": val_acc
        }, step=epoch)

    # Log final metrics (no step)
    mlflow.log_metric("final_accuracy", final_acc)
```
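A common companion to per-step logging is tracking the best value seen across epochs, then logging it once at the end of the run. A minimal sketch (the `BestMetricTracker` class is illustrative, not part of MLflow):

```python
class BestMetricTracker:
    """Track the best value of a metric across steps."""
    def __init__(self, mode="min"):
        self.mode = mode
        self.best = None
        self.best_step = None

    def update(self, value, step):
        improved = (self.best is None or
                    (value < self.best if self.mode == "min" else value > self.best))
        if improved:
            self.best, self.best_step = value, step
        return improved

tracker = BestMetricTracker(mode="min")
for epoch, val_loss in enumerate([0.9, 0.5, 0.6, 0.4]):
    tracker.update(val_loss, step=epoch)
# tracker.best == 0.4 at step 3; summarize once at the end of the run:
# mlflow.log_metric("best_val_loss", tracker.best)
```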
4. Logging Artifacts
```python
import os
import pickle

import matplotlib.pyplot as plt

with mlflow.start_run():
    # Log a single file (serialize the model first)
    with open('model.pkl', 'wb') as f:
        pickle.dump(model, f)
    mlflow.log_artifact('model.pkl')

    # Log a directory
    os.makedirs('plots', exist_ok=True)
    plt.savefig('plots/loss_curve.png')
    mlflow.log_artifacts('plots')

    # Log text
    with open('config.txt', 'w') as f:
        f.write(str(config))
    mlflow.log_artifact('config.txt')

    # Log a dict directly as JSON
    mlflow.log_dict({'config': config}, 'config.json')
```
5. Logging Models
```python
# PyTorch
import mlflow.pytorch

with mlflow.start_run():
    model = train_pytorch_model()
    mlflow.pytorch.log_model(model, "model")

# Scikit-learn
import mlflow.sklearn

with mlflow.start_run():
    model = train_sklearn_model()
    mlflow.sklearn.log_model(model, "model")

# Keras/TensorFlow
import mlflow.keras

with mlflow.start_run():
    model = train_keras_model()
    mlflow.keras.log_model(model, "model")

# HuggingFace Transformers
import mlflow.transformers

with mlflow.start_run():
    pipeline = build_transformers_pipeline()
    mlflow.transformers.log_model(transformers_model=pipeline, artifact_path="model")
```