Release v5.6.0

New Model additions

OpenAI Privacy Filter

OpenAI Privacy Filter is a bidirectional token-classification model for detecting and masking personally identifiable information (PII) in text. It is intended for high-throughput data-sanitization workflows where teams need a fast, context-aware, tunable model that can run on-premises. In a single forward pass, the model predicts a probability distribution over 8 privacy-related categories for each input token, then decodes the per-token predictions into coherent spans with a constrained Viterbi procedure.

Links: Documentation
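The per-token labels decoded into spans lend themselves to a simple masking step downstream. The sketch below shows one way to do it; the label names, the `(start, end, label)` span format, and the `mask_pii` helper are illustrative assumptions, not the Privacy Filter's actual output schema.

```python
# Sketch: mask PII spans given character-offset spans and labels.
# The span format and label names are illustrative assumptions, not
# the Privacy Filter's actual output schema.

def mask_pii(text, spans):
    """spans: list of (start, end, label); label "O" means keep as-is."""
    out = []
    cursor = 0
    for start, end, label in spans:
        if label == "O":
            continue
        out.append(text[cursor:start])   # untouched text before the span
        out.append(f"[{label}]")         # replace the span with its label
        cursor = end
    out.append(text[cursor:])
    return "".join(out)

masked = mask_pii(
    "Call Alice at 555-0100.",
    [(5, 10, "NAME"), (14, 22, "PHONE"), (22, 23, "O")],
)
print(masked)  # Call [NAME] at [PHONE].
```

In practice, adjacent tokens with the same label should first be merged into one span; the constrained Viterbi decode described above serves exactly that role.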

QianfanOCR

Qianfan-OCR is a 4B-parameter end-to-end document intelligence model developed by Baidu that performs direct image-to-text conversion without traditional multi-stage OCR pipelines. It supports a broad range of prompt-driven tasks, including structured document parsing, table extraction, chart understanding, document question answering, and key information extraction, all within one unified model. The model features a unique "Layout-as-Thought" capability that generates structured layout representations before producing final outputs, making it particularly effective for complex documents with mixed element types.

Links: Documentation | Paper
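To make the "Layout-as-Thought" idea concrete, the sketch below splits a two-stage output into its layout representation and final answer. The `<layout>...</layout>` delimiter and the `split_layout_output` helper are purely hypothetical illustrations of the layout-then-answer pattern; Qianfan-OCR's real output format is described in its documentation.

```python
# Hypothetical sketch of parsing a "layout first, answer second" output.
# The <layout>...</layout> delimiter is an assumption for illustration,
# not Qianfan-OCR's actual format.
import re

def split_layout_output(raw):
    m = re.match(r"<layout>(.*?)</layout>(.*)", raw, flags=re.DOTALL)
    if m is None:
        return None, raw.strip()      # no layout block: answer only
    return m.group(1).strip(), m.group(2).strip()

layout, answer = split_layout_output(
    "<layout>title; two-column table; footer</layout>Total due: $42.00"
)
```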

SAM3-LiteText

SAM3-LiteText is a lightweight variant of SAM3 that replaces the heavy SAM3 text encoder (353M parameters) with a compact MobileCLIP-based text encoder optimized through knowledge distillation, while keeping the SAM3 ViT-H image encoder intact. This reduces text encoder parameters by up to 88% while maintaining segmentation performance comparable to the original model. The model enables efficient vision-language segmentation by addressing the redundancy found in text prompting for segmentation tasks.

Links: Documentation | Paper
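The quoted "up to 88%" reduction against the 353M-parameter SAM3 text encoder implies a compact encoder of roughly 42M parameters; a quick back-of-envelope check:

```python
# Back-of-envelope check of the quoted text-encoder size reduction.
original = 353_000_000        # SAM3 text encoder parameters (from the notes)
reduction = 0.88              # "up to 88%" reduction
compact = original * (1 - reduction)
print(f"{compact / 1e6:.1f}M")  # ≈ 42.4M parameters
```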

SLANet

SLANet and SLANet_plus are lightweight models designed for table structure recognition, focusing on accurately recognizing table structures in documents and natural scenes. The models improve accuracy and inference speed by adopting a CPU-friendly lightweight backbone network (PP-LCNet), a high-/low-level feature fusion module (CSP-PAN), and a feature decoding module (SLA Head) that aligns structural and positional information. SLANet was developed by the Baidu PaddlePaddle Vision Team as part of its table structure recognition solutions.

Links: Documentation
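Table structure recognition models of this kind typically emit a sequence of HTML structure tokens alongside per-cell content. The sketch below shows the general idea of aligning the two streams; the token vocabulary and the `tokens_to_html` helper are illustrative assumptions, not SLANet's actual post-processing code.

```python
# Illustrative sketch: pair predicted structure tokens with cell contents.
# The "<td></td>" placeholder convention is an assumption for illustration,
# not SLANet's actual decoding scheme.

def tokens_to_html(structure_tokens, cell_texts):
    cells = iter(cell_texts)
    parts = []
    for tok in structure_tokens:
        if tok == "<td></td>":
            # fill each empty-cell placeholder with the next recognized text
            parts.append(f"<td>{next(cells, '')}</td>")
        else:
            parts.append(tok)
    return "".join(parts)

html = tokens_to_html(
    ["<table>", "<tr>", "<td></td>", "<td></td>", "</tr>", "</table>"],
    ["Item", "Price"],
)
print(html)  # <table><tr><td>Item</td><td>Price</td></tr></table>
```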

Breaking changes

The internal rotary_fn is no longer registered as a hidden kernel function, so any code referencing self.rotary_fn(...) within an Attention module will break and must be updated to call the function directly instead.

  • 🚨 [Kernels] Fix kernel function registration (#45420) by @vasqu
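The required migration can be sketched as follows; the function and class names here are stand-ins for the real attention internals, so check the actual module you are patching.

```python
# Illustrative before/after for the rotary_fn breaking change.
# apply_rotary_pos_emb and Attention are stand-ins, not the exact
# transformers internals.

def apply_rotary_pos_emb(q, k):  # stand-in for the real rotary helper
    return q, k

class Attention:
    def forward(self, q, k):
        # Before v5.6.0 (no longer works): the helper was registered on self.
        #   q, k = self.rotary_fn(q, k)
        # After: call the function directly instead.
        q, k = apply_rotary_pos_emb(q, k)
        return q, k

q, k = Attention().forward([1.0], [2.0])
```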

Serve

The transformers serve command received several enhancements, including a new /v1/completions endpoint for legacy text completion, multimodal support for audio and video inputs, improved tool-calling via parse_response, proper forwarding of tool_calls/tool_call_id fields, a 400 error on model mismatch when the server is pinned to a specific model, and fixes for the response API. Documentation was also updated to cover new serving options such as --compile and --model-timeout.
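A request to the new legacy completions endpoint follows the OpenAI completions schema. The sketch below builds such a request; the host, port, and model name are assumptions for illustration.

```python
# Sketch: calling the new /v1/completions endpoint of `transformers serve`.
# The payload follows the OpenAI legacy completions schema; the base URL
# and model name are assumptions for illustration.
import json
import urllib.request

payload = {
    "model": "my-org/my-model",  # must match the served model if the server
                                 # is pinned, otherwise it now returns a 400
    "prompt": "Once upon a time",
    "max_tokens": 32,
}

def build_request(base_url="http://localhost:8000"):
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request()  # urllib.request.urlopen(req) would send it
```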

Vision

Several vision-related bug fixes were applied in this release, including correcting Qwen2.5-VL temporal RoPE scaling for still images, fixing missing/mismatched image processor backends for Emu3 and BLIP, resolving modular image processor class duplication, and preventing accelerate from incorrectly splitting vision encoders in PeVideo/PeAudioVideo models. Image loading performance was also improved by leveraging torchvision's native decode_image in the torchvision backend, yielding up to a ~17% speedup over PIL-based loading.

Parallelization

This release fixes several bugs affecting distributed training: silently wrong results and NaN loss with Expert Parallelism, NaN weights on non-rank-0 FSDP processes, and a resize failure in PP-DocLayoutV3. It also adds support for loading adapters with Tensor Parallelism, adds MoE to the Gemma4 TP plan, and publishes documentation for TP training.

Tokenization

Fixed a docstring typo in streamer classes, resolved a Kimi-K2.5 tokenizer regression and _patch_mistral_regex AttributeError, and patched a streaming generation crash for Qwen3VLProcessor caused by incorrect _tokenizer attribute access. Additional housekeeping included moving the GPT-SW3 instruct tokenizer to an internal testing repo and fixing a global state leak in the tokenizer registry during tests.

Cache

Cache handling was improved for Gemma4 and Gemma3n models by dissociating KV state sharing from the Cache class, ensuring KV states are always shared regardless of whether a Cache is used. Additionally, the image cache for Paddle models was updated to align with the latest API.

Audio

Audio models gained vLLM compatibility through targeted fixes across several model implementations. Reliability also improved: audio file downloads now retry with exponential back-off, the text-to-speech pipeline no longer crashes when generation configs contain None values, and test failures for Kyutai Speech-To-Text were corrected.
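The retry behavior mentioned for audio downloads can be sketched as a generic exponential back-off wrapper; this is the general pattern, not the library's actual implementation.

```python
# Generic exponential back-off sketch, not the library's actual retry code.
import time

def retry_with_backoff(fn, retries=3, base_delay=0.5, factor=2.0):
    """Call fn(); on failure wait base_delay * factor**attempt, then retry."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise                      # out of retries: re-raise
            time.sleep(base_delay * factor ** attempt)

calls = []
def flaky_download():
    calls.append(1)
    if len(calls) < 3:                     # fail twice, then succeed
        raise OSError("transient network error")
    return "audio-bytes"

result = retry_with_backoff(flaky_download, retries=4, base_delay=0.0)
print(result)  # audio-bytes
```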

Bugfixes and improvements

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @vasqu
    • [Privacy Filter] Add model (#45580)
    • Fix typos (#45574)
    • [Conversion Mapping] Small fixups (#45483)
    • 🚨 [Kernels] Fix kernel function registration (#45420)
    • [Tokenizers] Move gpt sw3 tokenizer out (#45404)
  • @rain-1
    • Add /v1/completions endpoint (OpenAI legacy completions API) to transformers serve (#44558)
  • @zhang-prog
    • Updated the image cache for Paddle models according to the latest API (#45562)
    • [Model] Add SLANet Model Support (#45532)
    • Fix resize failure caused by zero-sized masks in PP-DocLayoutV3 (#45281)
  • @tarekziade
    • fix table update versions (#45544)
    • qa: re-run modular converter when the script itself is modified (#45528)
    • Revert "Fix: modular image processors (#45492)" (#45531)
    • chore(qa): split out mlinter (#45475)
    • typing: rule 15 - checks for tie_word_embeddings presence (#44988)
    • fix: dont download artifacts from the test hub (#45319)
    • refactor(qa): extend extras so ty can run on server modules (#45456)
    • remove cache file from tree (#45392)
    • refactor: display test duration (#45344)
    • http retries on audio file downloads (#45126)
    • chore: added circleci python script to ruff and ty checkers (#45339)
    • tweak checkers output on errors (#45163)
    • fix: leak in tokenizer registry for test_processors (#45318)
    • chore: remove test_hub for now (#45337)
    • fix: hf-doc-builder insallation was failing (#45225)
  • @marvinzh
    • add Qianfan-OCR model definition (#45280)
  • @remi-or
    • [CB] Fix capture of max_seqlen (#45323)
    • [CB] Add per-request logits processors (#45026)
    • [CB] Tweaks to update and minor fixes (#45179)
  • @ydshieh
    • Minor update (#45484)
    • Close file handler (#45187)
    • Add hasattr(torch.backends.cudnn, "conv") to conftest.py (#45263)
    • Fix SmolVLM video processor resize using wrong interpolation after backend refactor (#45258)
    • Fix Qwen2IntegrationTest (#45268)
    • empty (#45261)
    • Fix unexpected TF32 being enabled in testing (#45252)
    • Fix tf32 issue: set torch.backends.cudnn.conv.fp32_precision explicitly. (#45248)
    • Nvidia CI with torch 2.11 (#45243)
    • Update tiny model creation script (#45241)
    • Update get_test_info.py (related to tiny model creation) (#45238)
    • More fix for tiny model creation (#45228)
    • remove unnecessary entries in some auto model mappings (#45224)
  • @NielsRogge
  • @ArthurZucker
    • Fix IndexError with DeepSpeed ZeRO-3 when kernels rotary is active (#45414)
    • Fix Kimi-K2.5 tokenizer regression and _patch_mistral_regex AttributeError (#45359)
    • Fix vllm cis (#45139)
    • Fix pypi release (#45210)
    • update to dev version 5.6.0-dev0
  • @JJJYmmm
    • [inference_fusion] convert conv3d patch embed to linear (#45041)
  • @balvisio
    • Add THD support in ESM (#44145)
  • @onwp
    • Add Turkish (tr) translation for Get Started section (#45158)
