Skill

huggingface-jobs

by huggingface

AI Summary

Run any workload on fully managed Hugging Face infrastructure. No local setup required—jobs run on cloud CPUs, GPUs, or TPUs and can persist results to the Hugging Face Hub.

Install

Copy this and paste it into Claude Code, Cursor, or any AI assistant:

I want to install the "huggingface-jobs" skill in my project.

Please run this command in my terminal:
# Install skill into your project (9 files)
BASE="https://raw.githubusercontent.com/huggingface/skills/main/skills/huggingface-jobs"
mkdir -p .claude/skills/huggingface-jobs/references .claude/skills/huggingface-jobs/scripts
for f in SKILL.md index.html \
         references/hardware_guide.md references/hub_saving.md \
         references/token_usage.md references/troubleshooting.md \
         scripts/cot-self-instruct.py scripts/finepdfs-stats.py \
         scripts/generate-responses.py; do
  curl --retry 3 --retry-delay 2 --retry-all-errors \
       -o ".claude/skills/huggingface-jobs/$f" "$BASE/$f"
done

Then restart Claude Code (or reload the window in Cursor) so the skill is picked up.

Description

This skill should be used when users want to run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, cost estimation, authentication with tokens, secrets management, timeout configuration, and result persistence. Designed for general-purpose compute workloads including data processing, inference, experiments, batch jobs, and any Python-based tasks. Should be invoked for tasks involving cloud compute, GPU workloads, or when users mention running jobs on Hugging Face infrastructure without local setup.
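Since the skill covers hardware selection and cost estimation, the basic arithmetic is worth making explicit: estimated cost is the flavor's hourly rate times the expected runtime. A minimal sketch; the flavor names follow Hugging Face's naming style, but the rates below are illustrative placeholders, not actual pricing:

```python
# Rough job-cost estimate: hourly rate x expected duration.
# Rates are ILLUSTRATIVE placeholders, not real Hugging Face pricing;
# always check the current Jobs pricing page before budgeting.
HOURLY_RATE_USD = {
    "cpu-basic": 0.05,   # hypothetical CPU rate
    "a10g-small": 1.00,  # hypothetical single-A10G rate
    "a100-large": 4.00,  # hypothetical A100 rate
}

def estimate_cost(flavor: str, minutes: float) -> float:
    """Return the estimated USD cost of running `minutes` on `flavor`."""
    return round(HOURLY_RATE_USD[flavor] * minutes / 60, 4)

print(estimate_cost("a10g-small", 90))  # 1.5 hours on the $1/h placeholder
```

The same arithmetic also gives a sanity check for timeouts: if the estimate for your chosen timeout is far above budget, pick a smaller flavor or shorter run first.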

Overview

Run any workload on fully managed Hugging Face infrastructure. No local setup required—jobs run on cloud CPUs, GPUs, or TPUs and can persist results to the Hugging Face Hub.

Common use cases:
• Data Processing - Transform, filter, or analyze large datasets
• Batch Inference - Run inference on thousands of samples
• Experiments & Benchmarks - Reproducible ML experiments
• Model Training - Fine-tune models (see model-trainer skill for TRL-specific training)
• Synthetic Data Generation - Generate datasets using LLMs
• Development & Testing - Test code without local GPU setup
• Scheduled Jobs - Automate recurring tasks

For model training specifically: See the model-trainer skill for TRL-based training workflows.
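The skill's "UV scripts" workload type refers to an ordinary Python file carrying inline dependency metadata in a comment header (PEP 723 syntax, as used by the uv tool). A minimal stdlib-only sketch; the metadata block is real syntax, while the script body is just an illustrative computation, not taken from the skill:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
"""Minimal UV-style job script: computes summary statistics and prints
JSON so results are easy to capture from job logs."""
import json
import statistics

samples = [0.5, 1.5, 2.5, 3.5]
summary = {"count": len(samples), "mean": statistics.mean(samples)}
print(json.dumps(summary))
```

Because the dependencies are declared in the file itself, the job runner can build the environment from the script alone, with no separate requirements file to upload.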

Prerequisites Checklist

Before starting any job, verify:

✅ **Token Usage** (see the Token Usage section for details)

Tokens are required when:
• Pushing models or datasets to the Hub
• Accessing private repositories
• Using Hub APIs in scripts
• Performing any other authenticated Hub operation

How to provide tokens: inject the token into the job as a secret (conventionally an HF_TOKEN environment variable) rather than hardcoding it in the script.
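The secret-injection pattern for tokens can be sketched in a few stdlib-only lines. HF_TOKEN is the conventional variable name, but the exact name is whatever you configure when injecting the secret:

```python
import os

def get_hub_token() -> str:
    """Read the Hub token injected into the job as a secret.

    The script never hardcodes the token; it reads it from the
    environment, so the same file runs locally and on managed
    infrastructure without modification.
    """
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set; inject it as a job secret before "
            "performing authenticated Hub operations."
        )
    return token
```

Failing fast with a clear message is deliberate: a missing token surfaces at job start instead of partway through an expensive GPU run.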

When to Use This Skill

Use this skill when users want to:
• Run Python workloads on cloud infrastructure
• Execute jobs without a local GPU/TPU setup
• Process data at scale
• Run batch inference or experiments
• Schedule recurring tasks
• Use GPUs/TPUs for any workload
• Persist results to the Hugging Face Hub


Health Signals

Maintenance: Committed today (Active)
Adoption: 10.0k stars on GitHub (Popular)
Docs: README + description (Well-documented)

GitHub Signals

Stars: 10.0k
Forks: 610
Issues: 26
Updated: Today
License: Apache-2.0
View on GitHub


Works With

Claude Code