Skill

hugging-face-trackio

by huggingface

AI Summary

Trackio is an ML experiment tracking library that integrates with Hugging Face to log metrics, visualize training progress, and trigger alerts during model development. It's useful for ML engineers and researchers who need real-time monitoring and experiment management.

Install

# Add to your project root as SKILL.md
curl -o SKILL.md "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-trackio/SKILL.md"

Description

Track and visualize ML training experiments with Trackio. Use when logging metrics during training (Python API), firing alerts for training diagnostics, or retrieving/analyzing logged metrics (CLI). Supports real-time dashboard visualization, alerts with webhooks, HF Space syncing, and JSON output for automation.

Trackio - Experiment Tracking for ML Training

Trackio is an experiment tracking library for logging and visualizing ML training metrics. It syncs to Hugging Face Spaces for real-time monitoring dashboards.

Three Interfaces

| Task | Interface | Reference |
|------|-----------|-----------|
| Logging metrics during training | Python API | references/logging_metrics.md |
| Firing alerts for training diagnostics | Python API | references/alerts.md |
| Retrieving metrics & alerts after/during training | CLI | references/retrieving_metrics.md |

Python API → Logging

Use import trackio in your training scripts to log metrics:

• Initialize tracking with trackio.init()
• Log metrics with trackio.log() or use TRL's report_to="trackio"
• Finalize with trackio.finish()

Key concept: For remote/cloud training, pass space_id — metrics sync to a Space dashboard so they persist after the instance terminates.

→ See references/logging_metrics.md for setup, TRL integration, and configuration options.

Python API → Alerts

Insert trackio.alert() calls in training code to flag important events — like inserting print statements for debugging, but structured and queryable:

• trackio.alert(title="...", level=trackio.AlertLevel.WARN) — fire an alert
• Three severity levels: INFO, WARN, ERROR
• Alerts are printed to the terminal, stored in the database, shown in the dashboard, and optionally sent to webhooks (Slack/Discord)

Key concept for LLM agents: Alerts are the primary mechanism for autonomous experiment iteration. An agent should insert alerts into training code for diagnostic conditions (loss spikes, NaN gradients, low accuracy, training stalls). Since alerts are printed to the terminal, an agent that is watching the training script's output will see them automatically. For background or detached runs, the agent can poll via CLI instead.

→ See references/alerts.md for the full alerts API, webhook setup, and autonomous agent workflows.

Quality Score

Grade: C (Acceptable), 69/100

• Standard Compliance: 45
• Documentation Quality: 55
• Usefulness: 72
• Maintenance Signal: 100
• Community Signal: 100

Scored today.

GitHub Signals

• Stars: 7.5k
• Forks: 438
• Open issues: 19
• Updated: yesterday

Trust & Transparency

• Open Source — Apache-2.0
• Verified Open Source — hosted on GitHub, source code publicly auditable
• Actively Maintained — last commit yesterday
• Strong Community — 7.5k stars, 438 forks


Works With

Claude Code