huggingface/huggingface_hub v1.3.0: New CLI Commands for Hub Discovery, Jobs Monitoring and more!


🖥️ CLI: hf models, hf datasets, hf spaces Commands

The CLI has been reorganized with dedicated commands for Hub discovery, while hf repo stays focused on managing your own repositories.

New commands:

# Models
hf models ls --author=Qwen --limit=10
hf models info Qwen/Qwen-Image-2512

# Datasets
hf datasets ls --filter "format:parquet" --sort=downloads
hf datasets info HuggingFaceFW/fineweb

# Spaces
hf spaces ls --search "3d"
hf spaces info enzostvs/deepsite

This organization mirrors the Python API (list_models, model_info, etc.), keeps the hf <resource> <action> pattern, and is extensible for future commands like hf papers or hf collections.
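The same lookups are available through the Python API. A minimal sketch using the existing list_models and model_info helpers, reusing the repo ids from the CLI examples above:

from huggingface_hub import list_models, model_info

# Python equivalent of `hf models ls --author=Qwen --limit=10`
for model in list_models(author="Qwen", limit=10):
    print(model.id)

# Python equivalent of `hf models info Qwen/Qwen-Image-2512`
info = model_info("Qwen/Qwen-Image-2512")
print(info.pipeline_tag, info.downloads)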

🔧 Transformers CLI Installer

You can now install the transformers CLI alongside the huggingface_hub CLI using the standalone installer scripts.

# macOS / Linux

# Install hf CLI only (default)
curl -LsSf https://hf.co/cli/install.sh | bash -s

# Install both hf and transformers CLIs
curl -LsSf https://hf.co/cli/install.sh | bash -s -- --with-transformers

# Windows (PowerShell)

# Install hf CLI only (default)
powershell -c "irm https://hf.co/cli/install.ps1 | iex"

# Install both hf and transformers CLIs
powershell -c "irm https://hf.co/cli/install.ps1 | iex" -WithTransformers

Once installed, you can use the transformers CLI directly:

transformers serve
transformers chat openai/gpt-oss-120b

📊 Jobs Monitoring

New hf jobs stats command to monitor your running jobs in real-time, similar to docker stats. It displays a live table with CPU, memory, network, and GPU usage.

>>> hf jobs stats
JOB ID                   CPU % NUM CPU MEM % MEM USAGE      NET I/O         GPU UTIL % GPU MEM % GPU MEM USAGE
------------------------ ----- ------- ----- -------------- --------------- ---------- --------- ---------------
6953ff6274100871415c13fd 0%    3.5     0.01% 1.3MB / 15.0GB 0.0bps / 0.0bps 0%         0.0%      0.0B / 22.8GB

A new HfApi.fetch_job_metrics() method is also available:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> for metrics in api.fetch_job_metrics(job_id="6953ff6274100871415c13fd"):
...     print(metrics)
{
    "cpu_usage_pct": 0,
    "cpu_millicores": 3500,
    "memory_used_bytes": 1306624,
    "memory_total_bytes": 15032385536,
    "rx_bps": 0,
    "tx_bps": 0,
    "gpus": {
        "882fa930": {
            "utilization": 0,
            "memory_used_bytes": 0,
            "memory_total_bytes": 22836000000
        }
    },
    "replica": "57vr7"
}
  • [Jobs] Monitor cpu, memory, network and gpu (if any) by @lhoestq in #3655
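The metrics can also be summarized client-side, for example to watch GPU utilization over time. A minimal sketch, assuming each yielded item is a plain dict shaped like the payload shown above:

from huggingface_hub import HfApi

api = HfApi()
for metrics in api.fetch_job_metrics(job_id="6953ff6274100871415c13fd"):
    # Assumes dict payloads as in the example output above.
    mem_pct = 100 * metrics["memory_used_bytes"] / metrics["memory_total_bytes"]
    print(f"replica={metrics['replica']} cpu={metrics['cpu_usage_pct']}% mem={mem_pct:.2f}%")
    for gpu_id, gpu in metrics.get("gpus", {}).items():
        gpu_mem_pct = 100 * gpu["memory_used_bytes"] / gpu["memory_total_bytes"]
        print(f"  gpu {gpu_id}: util={gpu['utilization']}% mem={gpu_mem_pct:.1f}%")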

💔 Breaking Change

The direction parameter in list_models, list_datasets, and list_spaces is now deprecated and ignored: results are always sorted in descending order.
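If you were passing direction before, you can simply drop it, since the ordering is now always descending. A minimal sketch with list_models:

from huggingface_hub import list_models

# Before: list_models(sort="downloads", direction=-1, limit=5)
# Now: direction is ignored, results are always in descending order.
for model in list_models(sort="downloads", limit=5):
    print(model.id, model.downloads)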

🔧 Other QoL Improvements

📖 Documentation

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes

🏗️ Internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:
