BerriAI/litellm v1.49.2

What's Changed

  • Add literalai in the sidebar observability category by @willydouhard in #6163
  • Search across docs, GitHub issues, and discussions by @yujonglee in #6160
  • Feat: Add Langtrace integration by @alizenhom in #5341
  • (perf) move s3 logging to Batch logging + async [94% faster perf under 100 RPS on 1 litellm instance] by @ishaan-jaff in #6165
  • (fix) add azure/gpt-4o-2024-05-13 pricing by @ishaan-jaff in #6174
  • LiteLLM Minor Fixes & Improvements (10/10/2024) by @krrishdholakia in #6158
  • (fix) batch_completion fails with bedrock due to extraneous [max_workers] key by @ishaan-jaff in #6176
  • (fix) provider wildcard routing - when models are specified without a provider prefix by @ishaan-jaff in #6173

Full Changelog: v1.49.1...v1.49.2

Docker Run LiteLLM Proxy

docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.2
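Once the container above is running, the proxy exposes an OpenAI-compatible `/chat/completions` endpoint on port 4000. A minimal stdlib-only sketch of calling it is below; the model name and local URL are assumptions for illustration, and the actual request is left commented out since it needs the proxy running:

```python
import json
import urllib.request

# Assumed local endpoint, matching the `docker run` command above.
PROXY_URL = "http://localhost:4000/chat/completions"

# Build an OpenAI-compatible chat completion request body.
payload = {
    "model": "gpt-4o-2024-05-13",  # hypothetical; use any model configured on the proxy
    "messages": [{"role": "user", "content": "Hello from LiteLLM proxy"}],
}
body = json.dumps(payload).encode()

request = urllib.request.Request(
    PROXY_URL,
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment once the proxy container is up:
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(body.decode())
```

Because the endpoint follows the OpenAI schema, the official `openai` Python client also works by pointing its `base_url` at the proxy.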

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 230.0 | 263.32 | 6.12 | 0.0 | 1833 | 0 | 205.50 | 2676.18 |
| Aggregated | Passed ✅ | 230.0 | 263.32 | 6.12 | 0.0 | 1833 | 0 | 205.50 | 2676.18 |
