hugging-face-jobs

A skill by huggingface

AI Summary

This skill enables users to run Python workloads, Docker jobs, and GPU-intensive tasks on Hugging Face's managed infrastructure without local setup. It's valuable for ML engineers, data scientists, and developers needing cloud compute for training, inference, and batch processing.

Install

```shell
# Add to your project root as SKILL.md
curl -o SKILL.md "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-jobs/SKILL.md"
```

Description

This skill should be used when users want to run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, cost estimation, authentication with tokens, secrets management, timeout configuration, and result persistence. Designed for general-purpose compute workloads including data processing, inference, experiments, batch jobs, and any Python-based tasks. Should be invoked for tasks involving cloud compute, GPU workloads, or when users mention running jobs on Hugging Face infrastructure without local setup.
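Since the skill covers hardware selection, cost estimation, and timeout configuration, a back-of-the-envelope estimate can help before submitting a job. The sketch below is illustrative only: the flavor names and hourly rates are placeholder assumptions, not Hugging Face's actual pricing.

```python
# Placeholder hourly rates in USD, for illustration only -- check
# current Hugging Face pricing for real numbers.
HOURLY_RATES = {"cpu-basic": 0.05, "t4-small": 0.60, "a10g-large": 3.00}

def estimate_cost(flavor: str, timeout_seconds: int) -> float:
    """Upper-bound cost if a job runs all the way to its timeout."""
    return HOURLY_RATES[flavor] * timeout_seconds / 3600

# A 2-hour timeout on a hypothetical "t4-small" flavor:
print(f"${estimate_cost('t4-small', 2 * 3600):.2f}")  # -> $1.20
```

Because a job is billed only while it runs, this is a ceiling, not a prediction; setting a sensible timeout bounds the worst case.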

Overview

Run any workload on fully managed Hugging Face infrastructure. No local setup is required: jobs run on cloud CPUs, GPUs, or TPUs and can persist results to the Hugging Face Hub.

Common use cases:

- Data Processing - transform, filter, or analyze large datasets
- Batch Inference - run inference on thousands of samples
- Experiments & Benchmarks - run reproducible ML experiments
- Model Training - fine-tune models (see the model-trainer skill for TRL-specific training)
- Synthetic Data Generation - generate datasets using LLMs
- Development & Testing - test code without a local GPU setup
- Scheduled Jobs - automate recurring tasks

For model training specifically, see the model-trainer skill for TRL-based training workflows.
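To make the "UV scripts" use case concrete, here is a sketch of what a minimal self-contained job script can look like. The inline `# /// script` comment block is standard PEP 723 metadata that UV reads to install dependencies; the processing logic below is a made-up example, not code from the skill itself.

```python
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
"""Minimal self-contained job script: filter and summarize a toy dataset."""

# A stand-in for real data loaded inside the job.
records = [{"id": i, "score": i * 0.1} for i in range(10)]

# Keep only records at or above a threshold, then report an aggregate.
kept = [r for r in records if r["score"] >= 0.5]
print(f"kept {len(kept)} of {len(records)} records")  # -> kept 5 of 10 records
```

Because the metadata lives in comments, the file still runs as plain Python locally; on managed infrastructure a UV-aware runner installs any listed dependencies before executing it.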

Prerequisites Checklist

Before starting any job, verify:

✅ **Token Usage** (See Token Usage section for details)

When tokens are required:

- Pushing models/datasets to the Hub
- Accessing private repositories
- Using Hub APIs in scripts
- Any authenticated Hub operations

How to provide tokens:

```python
{
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}  # Recommended: automatic token
}
```

⚠️ CRITICAL: The `$HF_TOKEN` placeholder is automatically replaced with your logged-in token. Never hardcode tokens in scripts.
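The placeholder behavior can be pictured with a small sketch. This is a hypothetical helper, not the actual Jobs client implementation: values of the form `$NAME` in a secrets mapping are looked up in the submitting environment, so the literal token never needs to appear in a script.

```python
def resolve_secrets(secrets: dict[str, str], env: dict[str, str]) -> dict[str, str]:
    """Expand $NAME placeholders from the given environment.

    Hypothetical illustration of how a jobs client could expand
    {"HF_TOKEN": "$HF_TOKEN"} before submitting a job.
    """
    resolved = {}
    for key, value in secrets.items():
        if value.startswith("$"):
            resolved[key] = env[value[1:]]  # look up the referenced variable
        else:
            resolved[key] = value  # literal values pass through unchanged
    return resolved

env = {"HF_TOKEN": "hf_example_token"}
print(resolve_secrets({"HF_TOKEN": "$HF_TOKEN"}, env))
# -> {'HF_TOKEN': 'hf_example_token'}
```

The point of the indirection is that scripts and job configs can be committed or shared safely; only the environment of the person submitting the job supplies the real token.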

When to Use This Skill

Use this skill when users want to:

- Run Python workloads on cloud infrastructure
- Execute jobs without a local GPU/TPU setup
- Process data at scale
- Run batch inference or experiments
- Schedule recurring tasks
- Use GPUs/TPUs for any workload
- Persist results to the Hugging Face Hub

Quality Score

B (Good), 84/100

- Standard Compliance: 75
- Documentation Quality: 72
- Usefulness: 85
- Maintenance Signal: 100
- Community Signal: 100

Scored today.

GitHub Signals

- Stars: 7.5k
- Forks: 438
- Open issues: 19
- Updated: yesterday

Trust & Transparency

- Open source: Apache-2.0, hosted on GitHub, source code publicly auditable
- Actively maintained: last commit yesterday
- Strong community: 7.5k stars, 438 forks

Works With

- Claude Code