BerriAI/litellm v1.73.2.dev1

What's Changed

  • VertexAI Anthropic passthrough cost calc fixes + Filter litellm params from request sent to passthrough endpoint by @krrishdholakia in #11992
  • Fix custom pricing logging + Gemini - only use accepted format values + Gemini - cache tools if passing alongside cached content by @krrishdholakia in #11989 (custom pricing sketch after this list)
  • Fix unpack_defs handling of nested $ref inside anyOf items by @colesmcintosh in #11964
  • NVIDIA NIM: add response_format to OpenAI parameters … by @shagunb-acn in #12003
  • Add Azure o3-pro Pricing by @marty-sullivan in #11990
  • [Bug Fix] SCIM - Ensure new user roles are applied by @ishaan-jaff in #12015
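
For context on the custom pricing fix above, here is a minimal sketch of how custom per-token pricing is typically supplied through the litellm SDK. The model name and both prices are placeholder assumptions, not values from this release:

```python
# Minimal sketch of custom per-token pricing with the litellm SDK.
# Assumes `pip install litellm` and an OPENAI_API_KEY in the environment;
# the model name and both prices below are illustrative assumptions.
from litellm import completion, completion_cost

response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    input_cost_per_token=0.000005,   # assumed custom input price (USD/token)
    output_cost_per_token=0.000015,  # assumed custom output price (USD/token)
)

# Cost tracking then uses the custom prices attached to the call --
# the code path the logging fix above is concerned with.
print(completion_cost(completion_response=response))
```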

Full Changelog: v1.73.1-nightly...v1.73.2.dev1

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.73.2.dev1
```
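
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A minimal sketch of calling it with the OpenAI Python client; the model name and API key are placeholders that must match whatever is configured on your proxy:

```python
# Minimal sketch of calling the LiteLLM proxy started above.
# Assumes `pip install openai`; "gpt-4o" and the key are placeholder
# assumptions that depend on your proxy's model list and keys.
import openai

client = openai.OpenAI(
    base_url="http://localhost:4000",  # the proxy from the docker run above
    api_key="sk-1234",                 # placeholder proxy key (assumption)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```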

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed ✅ | 250.0 | 267.84 | 6.21 | 0.0 | 1858 | 0 | 214.47 | 1466.65 |
| Aggregated | Passed ✅ | 250.0 | 267.84 | 6.21 | 0.0 | 1858 | 0 | 214.47 | 1466.65 |
