[v0.33.0]: Welcoming Featherless.AI and Groq as Inference Providers!


⚡ New provider: Featherless.AI

Featherless AI is a serverless AI inference provider with unique model loading and GPU orchestration abilities that make an exceptionally large catalog of models available to users. Providers typically offer either low-cost access to a limited set of models, or an unlimited range of models where users manage the servers and the associated costs of operation. Featherless provides the best of both worlds: unmatched model range and variety, with serverless pricing. Find the full list of supported models on the models page.

from huggingface_hub import InferenceClient

# Route requests through Featherless AI; authentication uses your
# Hugging Face token (from `huggingface-cli login` or the HF_TOKEN env var).
client = InferenceClient(provider="featherless-ai")

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
  • ✨ Support for Featherless.ai as inference provider by @pohnean in #3081

⚡ New provider: Groq

At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.

Groq offers fast AI inference for openly available models, with an API that lets developers easily integrate these models into their applications on an on-demand, pay-as-you-go basis.

from huggingface_hub import InferenceClient

# Route requests through Groq; authentication uses your Hugging Face token.
client = InferenceClient(provider="groq")

# Multimodal chat completion: the message content mixes text and an image URL.
completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://vagabundler.com/wp-content/uploads/2019/06/P3160166-Copy.jpg"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)

🤖 MCP and Tiny-agents

It is now possible to run tiny-agents with a local inference server, e.g. llama.cpp. 100% local agents are right around the corner!
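To make the local-server idea concrete, here is a minimal sketch that points an InferenceClient at a llama.cpp server via its OpenAI-compatible API. The localhost URL, port, and model name are assumptions for illustration, not values from this release:

from huggingface_hub import InferenceClient

# Assumed setup: a llama.cpp server running locally, e.g.
#   llama-server -m ./model.gguf --port 8080
# which exposes an OpenAI-compatible API at http://localhost:8080/v1.
client = InferenceClient(base_url="http://localhost:8080/v1")

completion = client.chat.completions.create(
    model="local-model",  # placeholder; llama.cpp serves whichever model it was started with
    messages=[{"role": "user", "content": "Say hello from a fully local model."}],
)

print(completion.choices[0].message)

tiny-agents talks to the same kind of OpenAI-compatible endpoint, so a local base URL like the one above is what enables fully local agents.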

This release also fixes some DX issues in the tiny-agents CLI.

📚 Documentation

New translation from the Hindi-speaking community, for the community!

🛠️ Small fixes and maintenance

😌 QoL improvements

🐛 Bug and typo fixes

🏗️ Internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:
