AI Summary
A skill that generates reusable command-line scripts for automating Hugging Face API interactions, useful for developers who need to repeatedly fetch, process, or chain API calls.
Install
# Add to your project root as SKILL.md
curl -o SKILL.md "https://raw.githubusercontent.com/huggingface/skills/main/skills/hugging-face-tool-builder/SKILL.md"
Description
Use this skill when the user wants to build tools or scripts, or to accomplish a task where data from the Hugging Face API would help. It is especially useful when chaining or combining API calls, or when the task will be repeated or automated. The skill creates a reusable script to fetch, enrich, or process data.
Hugging Face API Tool Builder
Your purpose is to create reusable command-line scripts and utilities for the Hugging Face API, allowing chaining, piping, and intermediate processing where helpful. You can access the API directly, and you can also use the hf command-line tool. Model and dataset cards can be accessed from repositories directly.
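For example, a model card can be fetched straight from a repository's raw-file endpoint. A minimal sketch, assuming the Hub's /{repo}/raw/{revision}/{path} URL convention (the helper names are illustrative, not part of the skill):

```shell
# model_card_url REPO_ID — print the raw-file URL for a repo's model card (README.md).
model_card_url() {
  printf 'https://huggingface.co/%s/raw/main/README.md\n' "$1"
}

# fetch_model_card REPO_ID — download the card, sending HF_TOKEN for gated/private repos.
fetch_model_card() {
  curl -sf -H "Authorization: Bearer ${HF_TOKEN:-}" "$(model_card_url "$1")"
}
```

Usage: `fetch_model_card openai/gpt-oss-120b | head -n 20` prints the top of the card, including its YAML frontmatter.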
Script Rules
Make sure to follow these rules:
• Scripts must take a --help command-line argument that describes their inputs and outputs.
• Non-destructive scripts should be tested before handing over to the user.
• Shell scripts are preferred, but use Python or TSX if complexity or user need requires it.
• IMPORTANT: Use the HF_TOKEN environment variable as an Authorization header, for example: curl -H "Authorization: Bearer ${HF_TOKEN}" https://huggingface.co/api/. This provides higher rate limits and appropriate authorization for data access.
• Investigate the shape of the API results before committing to a final design; make use of piping and chaining where composability would be an advantage, and prefer simple solutions where possible.
• Share usage examples once complete. Be sure to confirm user preferences where questions or clarifications are needed.
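A minimal sketch of a script shape that satisfies these rules — --help text, HF_TOKEN header, raw JSON to stdout. The function name and default limit are assumptions; sort, direction, and limit are standard /api/models query parameters:

```shell
# hf_top_models [LIMIT] — print the LIMIT most-downloaded models as raw JSON.
hf_top_models() {
  if [ "${1:-}" = "--help" ]; then
    cat <<'EOF'
Usage: hf_top_models [LIMIT]
  Prints the LIMIT (default 10) most-downloaded models as raw JSON on stdout.
  Sends the HF_TOKEN environment variable as an Authorization header.
EOF
    return 0
  fi
  # sort/direction/limit are query parameters of the /api/models endpoint.
  curl -sf -H "Authorization: Bearer ${HF_TOKEN:-}" \
    "https://huggingface.co/api/models?sort=downloads&direction=-1&limit=${1:-10}"
}
```

Keeping the output as raw JSON leaves filtering and formatting to downstream jq, in line with the composability rule.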
Sample Scripts
Paths below are relative to this skill directory.

Reference examples:
• references/hf_model_papers_auth.sh — uses HF_TOKEN automatically and chains trending → model metadata → model card parsing with fallbacks; it demonstrates multi-step API usage plus auth hygiene for gated/private content.
• references/find_models_by_paper.sh — optional HF_TOKEN usage via --token, consistent authenticated search, and a retry path when arXiv-prefixed searches are too narrow; it shows resilient query strategy and clear user-facing help.
• references/hf_model_card_frontmatter.sh — uses the hf CLI to download model cards, extracts YAML frontmatter, and emits NDJSON summaries (license, pipeline tag, tags, gated prompt flag) for easy filtering.

Baseline examples (ultra-simple, minimal logic, raw JSON output with HF_TOKEN header):
• references/baseline_hf_api.sh — bash
• references/baseline_hf_api.py — python
• references/baseline_hf_api.tsx — typescript executable

Composable utility (stdin → NDJSON):
• references/hf_enrich_models.sh — reads model IDs from stdin, fetches metadata per ID, emits one JSON object per line for streaming pipelines.

Composability through piping (shell-friendly JSON output):
• references/baseline_hf_api.sh 25 | jq -r '.[].id' | references/hf_enrich_models.sh | jq -s 'sort_by(.downloads) | reverse | .[:10]'
• references/baseline_hf_api.sh 50 | jq '[.[] | {id, downloads}] | sort_by(.downloads) | reverse | .[:10]'
• printf '%s\n' openai/gpt-oss-120b meta-llama/Meta-Llama-3.1-8B | references/hf_model_card_frontmatter.sh | jq -s 'map({id, license, has_extra_gated_prompt})'
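The stdin → NDJSON pattern used by references/hf_enrich_models.sh can be sketched roughly as follows. This is a simplified assumption of the script's shape, not its actual contents:

```shell
# enrich_models — read one model ID per line on stdin, emit one compact JSON
# object per line (NDJSON) on stdout, suitable for further jq processing.
enrich_models() {
  while IFS= read -r model_id; do
    [ -n "$model_id" ] || continue   # skip blank lines
    curl -sf -H "Authorization: Bearer ${HF_TOKEN:-}" \
      "https://huggingface.co/api/models/${model_id}" | jq -c .
  done
}
```

Because each record is a single line, the output streams cleanly into `jq -s` for sorting or aggregation, as in the piping examples above.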
High Level Endpoints
The following are the main API endpoints available at https://huggingface.co:
/api/datasets
/api/models
/api/spaces
/api/collections
/api/daily_papers
/api/notifications
/api/settings
/api/whoami-v2
/api/trending
/oauth/userinfo
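All of these endpoints can be reached with the same curl pattern; a small helper (hypothetical, not part of the skill) keeps the base URL and Authorization header in one place:

```shell
# hf_api_url PATH [QUERY] — print the full API URL for a path and optional query string.
hf_api_url() {
  printf 'https://huggingface.co%s%s\n' "$1" "${2:+?$2}"
}

# hf_api PATH [QUERY] — GET the endpoint with the HF_TOKEN Authorization header.
hf_api() {
  curl -sf -H "Authorization: Bearer ${HF_TOKEN:-}" "$(hf_api_url "$@")"
}

# Examples:
#   hf_api /api/models 'search=llama&limit=5'
#   hf_api /api/daily_papers
#   hf_api /api/whoami-v2
```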
Trust & Transparency
Open source (Apache-2.0), hosted on GitHub and publicly auditable; actively maintained, with 7.5k stars and 438 forks.