BerriAI/litellm v1.57.11


v1.57.11 - Alpha Release

🚨 This is an alpha release - we've made several performance / RPS improvements to litellm core. If you run into any issues, please file them at https://github.com/BerriAI/litellm/issues

What's Changed

  • (litellm SDK perf improvement) - use verbose_logger.debug and _cached_get_model_info_helper in _response_cost_calculator by @ishaan-jaff in #7720
  • (litellm sdk speedup) - use _model_contains_known_llm_provider in response_cost_calculator to check if the model contains a known litellm provider by @ishaan-jaff in #7721
  • (proxy perf) - only parse request body 1 time per request by @ishaan-jaff in #7722
  • Revert "(proxy perf) - only parse request body 1 time per request" by @ishaan-jaff in #7724
  • add azure o1 pricing by @krrishdholakia in #7715
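
For context on the SDK-side changes above: the cost-calculator path touched by #7720 and #7721 is the one exercised when you ask LiteLLM for the cost of a completion. Below is a minimal sketch of that usage; the model name and API key are placeholder assumptions, not part of this release, and any provider LiteLLM supports works the same way.

import os

import litellm

# Placeholder credentials/model; swap in whatever provider you actually use.
os.environ["OPENAI_API_KEY"] = "sk-..."  # assumption: OpenAI is the configured provider

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

# completion_cost() runs LiteLLM's response-cost calculation for the returned response,
# which is the code path these perf PRs optimize.
cost = litellm.completion_cost(completion_response=response)
print(f"response cost (USD): {cost}")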

Full Changelog: v1.57.10...v1.57.11

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.11
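
Once the container above is running, the proxy exposes OpenAI-compatible endpoints on port 4000. A minimal sketch of calling it with the OpenAI Python client follows; the api_key value and model name are placeholder assumptions and depend on how your proxy is configured.

from openai import OpenAI

# Point a standard OpenAI client at the local LiteLLM proxy started above.
client = OpenAI(
    base_url="http://localhost:4000",
    api_key="sk-1234",  # placeholder; use your proxy's master key if you set one
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: a model with this name is configured on the proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)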

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 240.0 | 270.55759577820237 | 6.130862160194138 | 0.0 | 1835 | 0 | 224.79750500002638 | 1207.8732939999952 |
| Aggregated | Passed ✅ | 240.0 | 270.55759577820237 | 6.130862160194138 | 0.0 | 1835 | 0 | 224.79750500002638 | 1207.8732939999952 |
