This patch release fixes a small issue with loading private models from Hugging Face using the `token` argument.
Install this version with
```bash
# Training + Inference
pip install sentence-transformers[train]==3.3.1

# Inference only, use one of:
pip install sentence-transformers==3.3.1
pip install sentence-transformers[onnx-gpu]==3.3.1
pip install sentence-transformers[onnx]==3.3.1
pip install sentence-transformers[openvino]==3.3.1
```
## Details
If you're loading a model under this scenario:
- Your model is hosted on Hugging Face.
- Your model is private.
- You haven't set the `HF_TOKEN` environment variable via `huggingface-cli login` or some other approach.
- You're passing the `token` argument to `SentenceTransformer` to load the model.
Then you may have encountered a crash in v3.3.0. This should be resolved now.
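For reference, a minimal sketch of the affected scenario looks like this; the repository name and token value below are placeholders:

```python
from sentence_transformers import SentenceTransformer

# Load a private model by passing the access token directly,
# instead of relying on the HF_TOKEN environment variable.
# "your-org/your-private-model" and "hf_..." are placeholders.
model = SentenceTransformer("your-org/your-private-model", token="hf_...")

embeddings = model.encode(["This sentence is embedded with a private model."])
print(embeddings.shape)
```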
## All Changes
- [`docs`] Fix the prompt link to the training script by @tomaarsen in #3060
- [Fix] Resolve loading private Transformer model in version 3.3.0 by @pesuchin in #3058
**Full Changelog**: v3.3.0...v3.3.1