⚡️ New Inference Provider: OVHcloud AI Endpoints
OVHcloud AI Endpoints is now an official Inference Provider on Hugging Face! 🎉
OVHcloud delivers fast, production-ready inference on secure, sovereign, fully 🇪🇺 European infrastructure, combining advanced features with competitive pricing.
```python
import os

from huggingface_hub import InferenceClient

client = InferenceClient(
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-20b:ovhcloud",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
```

More snippet examples in the provider documentation 👉 here.
QoL Improvements
Installing the CLI is now much faster, thanks to @Boulaouaney adding uv support to the installation scripts.
- Add uv support to installation scripts for faster package installation in #3486 by @Boulaouaney
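If uv is already available on your machine, you can also install the package with it directly (typical uv invocations, shown as an illustration rather than the release's exact install scripts):

```shell
# Install the huggingface_hub CLI into an isolated environment managed by uv
uv tool install "huggingface_hub[cli]"

# Or add it to the current environment's packages
uv pip install "huggingface_hub[cli]"
```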
Bug Fixes
This release also includes the following bug fixes:
- [Collections] Add collections to collections by slug id in #3551 by @hanouticelina
- [CLI] Respect HF_DEBUG environment variable in #3562 by @hanouticelina
- [Inference] Fix zero-shot classification output parsing in #3561 by @hanouticelina
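To illustrate what "respecting HF_DEBUG" means in practice, here is a minimal sketch of the pattern (hypothetical code, not the library's actual implementation): an environment variable toggles the logger's verbosity.

```python
import logging
import os

def configure_logging(logger_name: str = "huggingface_hub") -> logging.Logger:
    # Hypothetical sketch: enable DEBUG-level logs when HF_DEBUG is set,
    # fall back to WARNING otherwise. Not huggingface_hub's actual code.
    logger = logging.getLogger(logger_name)
    level = logging.DEBUG if os.environ.get("HF_DEBUG") else logging.WARNING
    logger.setLevel(level)
    return logger

os.environ["HF_DEBUG"] = "1"
print(configure_logging().level == logging.DEBUG)  # True
```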