Skill

hugging-face-jobs

by huggingface

AI Summary

This skill enables users to run Python workloads, Docker jobs, and GPU-intensive tasks on Hugging Face's managed infrastructure without local setup. It's valuable for ML engineers, data scientists, and developers needing cloud compute for training, inference, and batch processing.

Install

Copy this and paste it into Claude Code, Cursor, or any AI assistant:

I want to install the "hugging-face-jobs" skill in my project.

Please run this command in my terminal:
```sh
# Install skill into the correct directory (9 files)
mkdir -p .claude/skills/hugging-face-jobs && curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/hugging-face-jobs/SKILL.md "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-jobs/SKILL.md" && curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/hugging-face-jobs/index.html "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-jobs/index.html" && mkdir -p .claude/skills/hugging-face-jobs/references && curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/hugging-face-jobs/references/hardware_guide.md "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-jobs/references/hardware_guide.md" && curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/hugging-face-jobs/references/hub_saving.md "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-jobs/references/hub_saving.md" && curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/hugging-face-jobs/references/token_usage.md "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-jobs/references/token_usage.md" && curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/hugging-face-jobs/references/troubleshooting.md "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-jobs/references/troubleshooting.md" && mkdir -p .claude/skills/hugging-face-jobs/scripts && curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/hugging-face-jobs/scripts/cot-self-instruct.py "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-jobs/scripts/cot-self-instruct.py" && curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/hugging-face-jobs/scripts/finepdfs-stats.py "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-jobs/scripts/finepdfs-stats.py" && curl --retry 3 --retry-delay 2 --retry-all-errors -o .claude/skills/hugging-face-jobs/scripts/generate-responses.py "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-jobs/scripts/generate-responses.py"
```

Then restart Claude Code (or reload the window in Cursor) so the skill is picked up.

Description

This skill should be used when users want to run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, cost estimation, authentication with tokens, secrets management, timeout configuration, and result persistence. Designed for general-purpose compute workloads including data processing, inference, experiments, batch jobs, and any Python-based tasks. Should be invoked for tasks involving cloud compute, GPU workloads, or when users mention running jobs on Hugging Face infrastructure without local setup.

Overview

Run any workload on fully managed Hugging Face infrastructure. No local setup required—jobs run on cloud CPUs, GPUs, or TPUs and can persist results to the Hugging Face Hub.

Common use cases:

• Data Processing - Transform, filter, or analyze large datasets
• Batch Inference - Run inference on thousands of samples
• Experiments & Benchmarks - Reproducible ML experiments
• Model Training - Fine-tune models (see model-trainer skill for TRL-specific training)
• Synthetic Data Generation - Generate datasets using LLMs
• Development & Testing - Test code without local GPU setup
• Scheduled Jobs - Automate recurring tasks

For model training specifically: see the model-trainer skill for TRL-based training workflows.
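For UV-script workloads, a job is typically a single Python file that declares its own dependencies via PEP 723 inline script metadata, so the runner can install them without a separate requirements file. A minimal sketch (the data and logic here are illustrative, not taken from the skill's own scripts):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = []  # add e.g. "datasets" or "pandas" as your job needs
# ///
"""Minimal UV-style job script: stdlib-only data processing."""
import json
import statistics


def summarize(strings):
    # Illustrative "data processing" step: mean string length.
    return statistics.mean(len(s) for s in strings)


if __name__ == "__main__":
    # Jobs commonly emit structured output to logs or push it to the Hub.
    print(json.dumps({"mean_len": summarize(["alpha", "beta", "gamma"])}))
```

Because the dependency list lives in the file itself, the same script runs identically on a laptop and on Jobs hardware.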

Prerequisites Checklist

Before starting any job, verify:

✅ **Token Usage** (See Token Usage section for details)

When tokens are required:

• Pushing models/datasets to Hub
• Accessing private repositories
• Using Hub APIs in scripts
• Any authenticated Hub operations

How to provide tokens:

```python
{
    "secrets": {"HF_TOKEN": "$HF_TOKEN"}  # Recommended: automatic token
}
```

⚠️ CRITICAL: The $HF_TOKEN placeholder is automatically replaced with your logged-in token. Never hardcode tokens in scripts.
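Inside the job, a secret passed this way surfaces as an ordinary environment variable. A minimal sketch of reading it defensively in your job script (the helper name is illustrative, not part of the skill's API):

```python
import os


def get_hf_token() -> str:
    """Return the HF token injected via the job's secrets mapping."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        # Failing fast beats a confusing 401 deep inside a Hub call.
        raise RuntimeError(
            "HF_TOKEN is not set; pass it via the job's secrets mapping "
            "rather than hardcoding it in the script."
        )
    return token
```

Failing fast at startup keeps an authentication problem from surfacing as an opaque error halfway through a long-running job.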

When to Use This Skill

Use this skill when users want to:

• Run Python workloads on cloud infrastructure
• Execute jobs without local GPU/TPU setup
• Process data at scale
• Run batch inference or experiments
• Schedule recurring tasks
• Use GPUs/TPUs for any workload
• Persist results to the Hugging Face Hub


Health Signals

• Maintenance: committed 1mo ago · Active
• Adoption: 8.5k ★ on GitHub (1K+) · Popular
• Docs: README + description · Well-documented

GitHub Signals

• Stars: 8.5k
• Forks: 502
• Issues: 21
• Updated: 1mo ago
• License: Apache-2.0


Works With

Claude Code