What's Changed
Important Changes between v1.50.xx and v1.60.0

`def async_log_stream_event` and `def log_stream_event` are no longer supported for `CustomLogger`s: https://docs.litellm.ai/docs/observability/custom_callback. If you want to log stream events, use `def async_log_success_event` and `def log_success_event` instead.
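For reference, a minimal sketch of a custom logger using the supported hooks, based on the custom callback docs linked above (the class name and print bodies are illustrative):

```python
# custom_logger_example.py - minimal sketch of a CustomLogger that handles
# success events (including completed streams) via the supported hooks.
import litellm
from litellm.integrations.custom_logger import CustomLogger


class MyCustomLogger(CustomLogger):
    # Sync hook: fires on successful calls, including after a stream finishes.
    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        print(f"sync success: model={kwargs.get('model')}")

    # Async hook: use this in place of the removed async_log_stream_event
    # to observe completed streams.
    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        print(f"async success: model={kwargs.get('model')}")


# Register the logger so LiteLLM invokes it on every call.
litellm.callbacks = [MyCustomLogger()]
```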
Known Issues
🚨 Detected issue with Langfuse Logging when Langfuse credentials are stored in DB
- Adding gemini-2.0-flash-thinking-exp-01-21 by @marcoaleixo in #8089
- add groq/deepseek-r1-distill-llama-70b by @miraclebakelaser in #8078
- (UI) Fix SpendLogs page - truncate `bedrock` models + show `end_user` by @ishaan-jaff in #8118
- UI Fixes - Newly created key does not display on the View Key Page + Updated the validator to allow model editing when `keyTeam.team_alias === "Default Team"` by @ishaan-jaff in #8122
- (Refactor / QA) - Use `LoggingCallbackManager` to append callbacks and ensure no duplicate callbacks are added by @ishaan-jaff in #8112
- (UI) fix adding Vertex Models by @ishaan-jaff in #8129
- Fix json_mode parameter propagation in OpenAILikeChatHandler by @miraclebakelaser in #8133
- Doc updates - add key rotations to docs by @krrishdholakia in #8136
- Enforce default_on guardrails always run + expose new `litellm.disable_no_log_param` param by @krrishdholakia in #8134
- Doc updates + management endpoint fixes by @krrishdholakia in #8138
- New stable release - release notes by @krrishdholakia in #8148
- FEATURE: OpenAI o3-mini by @ventz in #8151
- build: fix model cost map with o3 model pricing by @krrishdholakia in #8153
- (Fixes) OpenAI Streaming Token Counting + Fixes usage tracking when `litellm.turn_off_message_logging=True` by @ishaan-jaff in #8156 (see the sketch after this list)
- (UI) Allow adding custom pricing when adding new model by @ishaan-jaff in #8165
- (Feat) add bedrock/deepseek custom import models by @ishaan-jaff in #8132
- Adding Azure OpenAI o3-mini costs & specs by @yigitkonur in #8166
- Adjust model pricing metadata by @yurchik11 in #8147
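As context for the `turn_off_message_logging` fix above, a minimal sketch of how that flag is typically set (the model name, prompt, and Langfuse callback choice are placeholders):

```python
# Sketch: redact message/response content from logging callbacks while
# still tracking token usage - the behavior fixed for streaming in #8156.
import litellm

litellm.turn_off_message_logging = True  # callbacks receive usage, not content
litellm.success_callback = ["langfuse"]  # example callback; any logger works

response = litellm.completion(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "hello"}],
    stream=True,
)
for chunk in response:
    pass  # consume the stream; usage is still tracked for logging
```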
New Contributors
- @marcoaleixo made their first contribution in #8089
- @yigitkonur made their first contribution in #8166
Full Changelog: v1.59.10...v1.60.0
Docker Run LiteLLM Proxy
```shell
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.60.0
```
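Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000; a minimal sketch of a smoke-test request (the model name and API key are placeholders for your own proxy config):

```python
# Sketch: send a test request to the locally running proxy.
import openai

client = openai.OpenAI(
    base_url="http://localhost:4000",  # the port mapped in the docker run above
    api_key="sk-1234",                 # placeholder; use your proxy master key
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; must match a model configured on the proxy
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```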
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 240.0 | 281.07 | 6.16 | 0.0 | 1843 | 0 | 215.80 | 3928.49 |
| Aggregated | Passed ✅ | 240.0 | 281.07 | 6.16 | 0.0 | 1843 | 0 | 215.80 | 3928.49 |