Productivity

hugging-face-trackio

huggingface/skills · updated Apr 8, 2026

$ npx skills add https://github.com/huggingface/skills --skill hugging-face-trackio
Summary

Trackio is an experiment tracking library for logging and visualizing ML training metrics. It syncs to Hugging Face Spaces for real-time monitoring dashboards.

skill.md

Trackio - Experiment Tracking for ML Training

Trackio is an experiment tracking library for logging and visualizing ML training metrics. It syncs to Hugging Face Spaces for real-time monitoring dashboards.

Three Interfaces

| Task | Interface | Reference |
| --- | --- | --- |
| Logging metrics during training | Python API | references/logging_metrics.md |
| Firing alerts for training diagnostics | Python API | references/alerts.md |
| Retrieving metrics & alerts after/during training | CLI | references/retrieving_metrics.md |

When to Use Each

Python API → Logging

Use import trackio in your training scripts to log metrics:

  • Initialize tracking with trackio.init()
  • Log metrics with trackio.log() or use TRL's report_to="trackio"
  • Finalize with trackio.finish()

Key concept: For remote/cloud training, pass space_id — metrics sync to a Space dashboard so they persist after the instance terminates.

→ See references/logging_metrics.md for setup, TRL integration, and configuration options.

Python API → Alerts

Insert trackio.alert() calls in training code to flag important events — like inserting print statements for debugging, but structured and queryable:

  • trackio.alert(title="...", level=trackio.AlertLevel.WARN) — fire an alert
  • Three severity levels: INFO, WARN, ERROR
  • Alerts are printed to terminal, stored in the database, shown in the dashboard, and optionally sent to webhooks (Slack/Discord)

Key concept for LLM agents: Alerts are the primary mechanism for autonomous experiment iteration. An agent should insert alerts into training code for diagnostic conditions (loss spikes, NaN gradients, low accuracy, training stalls). Since alerts are printed to the terminal, an agent that is watching the training script's output will see them automatically. For background or detached runs, the agent can poll via CLI instead.

→ See references/alerts.md for the full alerts API, webhook setup, and autonomous agent workflows.
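The diagnostic conditions an agent wires to trackio.alert() can be kept as plain predicates, which makes them easy to unit-test before launching a run. A minimal sketch, assuming illustrative thresholds and helper names (none of these are part of the Trackio API):

```python
import math

# Illustrative thresholds -- tune per experiment
LOSS_SPIKE_FACTOR = 3.0
STALL_WINDOW = 50

def is_nan_or_inf(loss: float) -> bool:
    """True when the loss is no longer a finite number."""
    return math.isnan(loss) or math.isinf(loss)

def is_loss_spike(loss: float, running_avg: float) -> bool:
    """True when the current loss jumps well above the recent average."""
    return running_avg > 0 and loss > LOSS_SPIKE_FACTOR * running_avg

def is_stalled(history: list[float], window: int = STALL_WINDOW) -> bool:
    """True when the loss has barely moved over the last `window` steps."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < 1e-4
```

In the training loop, each predicate that fires maps to one trackio.alert() call with an appropriate AlertLevel (WARN for spikes, ERROR for NaN).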

CLI → Retrieving

Use the trackio command to query logged metrics and alerts:

  • trackio list projects/runs/metrics — discover what's available
  • trackio get project/run/metric — retrieve summaries and values
  • trackio list alerts --project <name> --json — retrieve alerts
  • trackio show — launch the dashboard
  • trackio sync — sync to HF Space

Key concept: Add --json for programmatic output suitable for automation and LLM agents.

→ See references/retrieving_metrics.md for all commands, workflows, and JSON output formats.
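An agent typically consumes the `--json` output programmatically. The sketch below parses a payload shaped like a plausible `trackio get metric ... --json` response; the field names ("steps", "values") are assumptions, not a documented schema, so check the actual output of your installed version before relying on them:

```python
import json

# Hypothetical payload mimicking `trackio get metric ... --json` output.
# The "steps"/"values" field names are assumptions, not a documented schema.
payload = json.loads("""
{
  "project": "my-project",
  "run": "my-run",
  "metric": "loss",
  "steps": [0, 1, 2],
  "values": [0.10, 0.09, 0.085]
}
""")

# Pull the most recent value for a quick health check
latest = payload["values"][-1]
print(f"latest loss at step {payload['steps'][-1]}: {latest}")
```

In practice the JSON would come from running the CLI (e.g. via subprocess) rather than a literal string.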

Minimal Logging Setup

import trackio

trackio.init(project="my-project", space_id="username/trackio")
trackio.log({"loss": 0.1, "accuracy": 0.9})
trackio.log({"loss": 0.09, "accuracy": 0.91})
trackio.finish()

Minimal Retrieval

trackio list projects --json
trackio get metric --project my-project --run my-run --metric loss --json

Autonomous ML Experiment Workflow

When running experiments autonomously as an LLM agent, the recommended workflow is:

  1. Set up training with alerts — insert trackio.alert() calls for diagnostic conditions
  2. Launch training — run the script in the background
  3. Poll for alerts — use trackio list alerts --project <name> --json --since <timestamp> to check for new alerts
  4. Read metrics — use trackio get metric ... to inspect specific values
  5. Iterate — based on alerts and metrics, stop the run, adjust hyperparameters, and launch a new run
Putting steps 1 and 2 together, a training script with alert instrumentation (num_steps and train_step() are placeholders for your own training loop):

import trackio

trackio.init(project="my-project", config={"lr": 1e-4})

for step in range(num_steps):  # num_steps: placeholder, define for your run
    loss = train_step()  # placeholder for one optimization step
    trackio.log({"loss": loss, "step": step})

    # Diagnostic: loss still diverged well past warmup
    if step > 100 and loss > 5.0:
        trackio.alert(
            title="Loss divergence",
            text=f"Loss {loss:.4f} still high after {step} steps",
            level=trackio.AlertLevel.ERROR,
        )
    # Diagnostic: loss collapsed to ~0, likely degenerate
    if step > 0 and abs(loss) < 1e-8:
        trackio.alert(
            title="Vanishing loss",
            text="Loss near zero — possible gradient collapse",
            level=trackio.AlertLevel.WARN,
        )

trackio.finish()

Then poll from a separate terminal/process:

trackio list alerts --project my-project --json --since "2025-01-01T00:00:00"
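The polling step can be factored so the CLI call is injected as a callable, which keeps the `--since` watermark bookkeeping testable without a live training run. A sketch under that assumption (poll_alerts and the fetch callable are illustrative names, not part of Trackio; the "timestamp" field is an assumed key in the alert JSON):

```python
from typing import Callable

def poll_alerts(
    fetch: Callable[[str], list[dict]], since: str
) -> tuple[list[dict], str]:
    """Fetch alerts newer than `since`; return them plus the new watermark.

    `fetch` is expected to wrap something like
    `trackio list alerts --project <name> --json --since <ts>`
    (e.g. via subprocess) and return the parsed JSON list.
    """
    alerts = fetch(since)
    if alerts:
        # Advance the watermark so the next poll only sees newer alerts.
        # Assumes each alert dict carries an ISO-8601 "timestamp" key.
        since = max(a["timestamp"] for a in alerts)
    return alerts, since
```

An agent would call this in a loop with a sleep between iterations, acting on any returned alerts (stop the run, adjust hyperparameters) before polling again.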
General reviews

Ratings

4.8 · 37 reviews
  • Pratham Ware · Dec 8, 2024

    hugging-face-trackio reduced setup friction for our internal harness; good balance of opinion and flexibility.

  • Henry Diallo · Dec 8, 2024

    hugging-face-trackio is among the better-maintained entries we tried; worth keeping pinned for repeat workflows.

  • Yash Thakker · Nov 27, 2024

    I recommend hugging-face-trackio for anyone iterating fast on agent tooling; clear intent and a small, reviewable surface area.

  • Camila Gonzalez · Nov 27, 2024

    Solid pick for teams standardizing on skills: hugging-face-trackio is focused, and the summary matches what you get after install.

  • Dhruvi Jain · Oct 18, 2024

    Useful defaults in hugging-face-trackio — fewer surprises than typical one-off scripts, and it plays nicely with `npx skills` flows.

  • Luis Mehta · Oct 18, 2024

    hugging-face-trackio has been reliable in day-to-day use. Documentation quality is above average for community skills.

  • Charlotte Menon · Sep 17, 2024

    Solid pick for teams standardizing on skills: hugging-face-trackio is focused, and the summary matches what you get after install.

  • Oshnikdeep · Sep 9, 2024

    hugging-face-trackio is among the better-maintained entries we tried; worth keeping pinned for repeat workflows.

  • Luis Singh · Sep 9, 2024

    hugging-face-trackio fits our agent workflows well — practical, well scoped, and easy to wire into existing repos.

  • Diya Ndlovu · Sep 9, 2024

    Registry listing for hugging-face-trackio matched our evaluation — installs cleanly and behaves as described in the markdown.

showing 1-10 of 37
