BerriAI/litellm v1.57.8


What's Changed

  • (proxy latency/perf fix - user_api_key_auth) - use asyncio.create_task for caching the virtual key once it's validated by @ishaan-jaff in #7676 (see the fire-and-forget sketch after this list)
  • (litellm sdk - perf improvement) - optimize response_cost_calculator by @ishaan-jaff in #7674
  • (litellm sdk - perf improvement) - use O(1) set lookups for checking LLM providers/models by @ishaan-jaff in #7672 (see the set-lookup sketch after this list)
  • (litellm sdk - perf improvement) - optimize pre_call_check by @ishaan-jaff in #7673
  • [integrations/lunary] allow passing a custom parent run ID to LLM calls by @hughcrt in #7651
  • LiteLLM Minor Fixes & Improvements (01/10/2025) - p1 by @krrishdholakia in #7670
  • (performance improvement - litellm sdk + proxy) - ensure litellm does not create unnecessary threads when running async functions by @ishaan-jaff in #7680 (see the to_thread sketch after this list)
  • (litellm proxy perf) - pass the num_workers CLI arg to uvicorn when num_workers is specified by @ishaan-jaff in #7681 (see the uvicorn sketch after this list)
  • fix proxy pre-call hook - only use asyncio.create_task if the user opts into alerting by @ishaan-jaff in #7683 (covered in the fire-and-forget sketch after this list)
  • [Bug fix]: Proxy Auth Layer - Allow Azure Realtime routes as llm_api_routes by @ishaan-jaff in #7684
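
#7676 and #7683 share one pattern: don't await slow side work on the hot request path. Below is a minimal sketch of that pattern; cache_key, send_alert, and authenticate are hypothetical stand-ins, not LiteLLM's actual functions:

import asyncio

async def cache_key(key: str) -> None:
    # Hypothetical cache write standing in for the proxy's virtual-key cache.
    await asyncio.sleep(0.05)

async def send_alert(payload: dict) -> None:
    # Hypothetical alerting hook.
    await asyncio.sleep(0.05)

async def authenticate(key: str, alerting_enabled: bool) -> str:
    validated = key  # pretend DB/cache validation succeeded here

    # #7676: schedule the cache write instead of awaiting it, so the
    # request is not blocked on cache latency.
    asyncio.create_task(cache_key(validated))

    # #7683: only pay the create_task overhead when the user has
    # actually opted into alerting.
    if alerting_enabled:
        asyncio.create_task(send_alert({"key": validated}))
    return validated

async def main() -> None:
    await authenticate("sk-demo", alerting_enabled=False)
    await asyncio.sleep(0.1)  # let background tasks finish in this demo

asyncio.run(main())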
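#7672 replaces repeated list scans with set membership tests. The pattern, with placeholder provider names:

# Membership tests on a list are O(n); on a set they are O(1) on average.
KNOWN_PROVIDERS = {"openai", "azure", "anthropic", "bedrock"}  # built once

def is_known_provider(provider: str) -> bool:
    return provider in KNOWN_PROVIDERS  # set lookup instead of a list scan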
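#7680 targets thread churn: an async callable should be awaited directly, with a worker thread reserved for genuinely blocking sync functions. A sketch of that dispatch using standard-library helpers (maybe_await is a hypothetical name):

import asyncio
import inspect

async def maybe_await(fn, *args):
    # Await coroutines in place; no thread is created for them.
    if inspect.iscoroutinefunction(fn):
        return await fn(*args)
    # Only blocking sync callables get offloaded to the thread pool.
    return await asyncio.to_thread(fn, *args)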
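#7681 makes num_workers reach uvicorn instead of being dropped. A hedged sketch of the forwarding (start_proxy and the import string are illustrative, not LiteLLM's actual startup code):

import uvicorn

def start_proxy(num_workers=None):
    kwargs = {"host": "0.0.0.0", "port": 4000}
    if num_workers is not None:
        # uvicorn needs an import string (not an app object)
        # to spawn multiple worker processes.
        kwargs["workers"] = num_workers
    uvicorn.run("litellm.proxy.proxy_server:app", **kwargs)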

Full Changelog: v1.57.7...v1.57.8

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.57.8
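
Once the container is up, the proxy speaks the OpenAI API on port 4000. A quick smoke test with the openai Python client; the key and model name below are placeholders that must match what you configure on the proxy:

from openai import OpenAI

# Placeholder virtual key and model; substitute your own values.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)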

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 225.30 | 6.15 | 0.0 | 1841 | 0 | 177.73 | 2088.14 |
| Aggregated | Passed ✅ | 210.0 | 225.30 | 6.15 | 0.0 | 1841 | 0 | 177.73 | 2088.14 |
