BerriAI/litellm v1.51.3

What's Changed

  • Support specifying an exponential backoff retry strategy when calling completions() by @dbczumar in #6520 (see the sketch after this list)
  • (fix) slack alerting - don't spam the failed cost tracking alert for the same model by @ishaan-jaff in #6543
  • (feat) add XAI ChatCompletion support by @ishaan-jaff in #6373 (usage sketch after this list)
  • LiteLLM Minor Fixes & Improvements (10/30/2024) by @krrishdholakia in #6519
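
A minimal sketch of the new retry behavior, assuming it is exposed through completion() kwargs: num_retries is an existing parameter, while the retry_strategy name and its "exponential_backoff_retry" value are inferred from the PR title and should be checked against the merged code in #6520.

import litellm

# Retry transient failures up to 3 times, backing off exponentially
# between attempts. The retry_strategy kwarg and its value are
# assumptions based on the PR title, not confirmed API.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    num_retries=3,
    retry_strategy="exponential_backoff_retry",
)
print(response.choices[0].message.content)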
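
And a sketch of the new xAI route via LiteLLM's provider-prefix syntax; the "xai/grok-beta" model name and the XAI_API_KEY variable follow xAI's public naming at the time of this release and are assumptions, not taken from the PR itself.

import os
import litellm

os.environ["XAI_API_KEY"] = "your-xai-key"  # assumed env var name

# Route the request to xAI using the "xai/" provider prefix.
response = litellm.completion(
    model="xai/grok-beta",  # assumed model name
    messages=[{"role": "user", "content": "What is LiteLLM?"}],
)
print(response.choices[0].message.content)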

Full Changelog: v1.51.2...v1.51.3

Docker Run LiteLLM Proxy

# STORE_MODEL_IN_DB persists model configuration in the proxy's database
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.51.3
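
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000, so any OpenAI client can point at it. A quick sketch using the openai Python SDK; the api_key value is a placeholder for whatever key your proxy is configured to accept, and the model name must match one configured on the proxy:

from openai import OpenAI

# The LiteLLM proxy speaks the OpenAI API; just change the base_url.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # must be a model configured on the proxy
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)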

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name              | Status    | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|-------------------|-----------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed ✅ | 200.0                     | 220.38                     | 6.25       | 0.0        | 1870          | 0             | 179.73                 | 3185.17                |
| Aggregated        | Passed ✅ | 200.0                     | 220.38                     | 6.25       | 0.0        | 1870          | 0             | 179.73                 | 3185.17                |
