BerriAI/litellm v1.49.5

What's Changed

  • (fix) prompt caching cost calculation for OpenAI and Azure OpenAI by @ishaan-jaff in #6231
  • (fix) arize handle optional params by @ishaan-jaff in #6243
  • Bump hono from 4.5.8 to 4.6.5 in /litellm-js/spend-logs by @dependabot in #6245
  • (refactor) caching - use _sync_set_cache by @ishaan-jaff in #6224
  • Make meta in rerank API response optional - compatible with open-source APIs by @ishaan-jaff in #6248
  • (testing - litellm.Router) add unit test coverage for pattern matching / wildcard routing by @ishaan-jaff in #6250 (see the sketch after this list)
  • (refactor) sync caching - use LLMCachingHandler class for get_cache by @ishaan-jaff in #6249
  • (refactor) - caching use separate files for each cache class by @ishaan-jaff in #6251
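
One of the changes above adds unit test coverage for the Router's pattern matching / wildcard routing. As a minimal sketch of what wildcard routing does, assuming the documented model_list wildcard convention (the "openai/*" pattern and the placeholder API key are illustrative):

# a minimal sketch of litellm.Router wildcard routing; the "openai/*"
# pattern and the api_key value are illustrative placeholders
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "openai/*",  # wildcard: matches any "openai/<model>" request
            "litellm_params": {
                "model": "openai/*",
                "api_key": "sk-...",  # your OpenAI key
            },
        }
    ]
)

# a request whose model name matches the pattern is routed to the deployment above
response = router.completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)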

Full Changelog: v1.49.4...v1.49.5

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.49.5
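
Once the container is running, the proxy exposes an OpenAI-compatible API on port 4000. A minimal sketch of a client call, assuming the openai Python SDK; the api_key placeholder and model name are illustrative (use your proxy's master key, if one is set, and a model configured on your proxy):

# query the running proxy with the OpenAI SDK; api_key and model are placeholders
import openai

client = openai.OpenAI(
    base_url="http://localhost:4000",  # the proxy started by the docker run above
    api_key="anything",                # replace with your LiteLLM master key if one is set
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy"}],
)
print(response.choices[0].message.content)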

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 250.0 | 286.12 | 6.10 | 0.0 | 1826 | 0 | 224.76 | 2036.49 |
| Aggregated | Passed ✅ | 250.0 | 286.12 | 6.10 | 0.0 | 1826 | 0 | 224.76 | 2036.49 |
