BerriAI/litellm v1.50.4


What's Changed

  • (feat) Arize - Allow using Arize HTTP endpoint by @ishaan-jaff in #6364
  • LiteLLM Minor Fixes & Improvements (10/22/2024) by @krrishdholakia in #6384
  • build(deps): bump http-proxy-middleware from 2.0.6 to 2.0.7 in /docs/my-website by @dependabot in #6395
  • (docs + testing) Correctly document the timeout value used by litellm proxy is 6000 seconds + add to best practices for prod by @ishaan-jaff in #6339
  • (refactor) move convert dict to model response to llm_response_utils/ by @ishaan-jaff in #6393
  • (refactor) litellm.Router client initialization utils by @ishaan-jaff in #6394
  • (fix) Langfuse key based logging by @ishaan-jaff in #6372
  • Revert "(refactor) litellm.Router client initialization utils " by @ishaan-jaff in #6403
  • (fix) using /completions with echo by @ishaan-jaff in #6401
  • (refactor) prometheus async_log_success_event to be under 100 LOC by @ishaan-jaff in #6416
  • (refactor) router - use static methods for client init utils by @ishaan-jaff in #6420
  • (code cleanup) remove unused and undocumented logging integrations - litedebugger, berrispend by @ishaan-jaff in #6406

Full Changelog: v1.50.2...v1.50.4

Docker Run LiteLLM Proxy

docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.50.4
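
Once the container is running, the proxy serves an OpenAI-compatible API on port 4000. Below is a minimal sketch of a request against it; the model name (gpt-4o) and the Bearer key (sk-1234) are placeholders and assume a model and virtual key have already been configured on the proxy.

curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello from the LiteLLM proxy"}]
  }'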

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name              | Status    | Median (ms) | Average (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min (ms) | Max (ms)
/chat/completions | Failed ❌ | 280.0       | 312.65       | 6.04       | 0.0        | 1805          | 0             | 231.90   | 2847.21
Aggregated        | Failed ❌ | 280.0       | 312.65       | 6.04       | 0.0        | 1805          | 0             | 231.90   | 2847.21

All response-time columns are in milliseconds.
