BerriAI/litellm v1.42.9.dev1


What's Changed

  • fix: support vertex filepath on proxy litellm_params definition by @ec2ainun in #4989
  • [Fix-OTEL Proxy] Only forward traceparent to the LLM API when litellm.forward_traceparent_to_llm_provider=True (see the sketch after this list) by @ishaan-jaff in #4995
  • [Fix-Proxy] Log attributes on failed LLM calls by @ishaan-jaff in #4997
  • [Enterprise Proxy Feature] - Log to GCS Bucket ✨⚡️ by @ishaan-jaff in #4999
  • Update helm chart to 0.2.2 by @lowjiansheng in #4992
  • Add databricks/databricks-meta-llama-3-1-70b-instruct and databricks/databricks-meta-llama-3-1-405b-instruct by @ishaan-jaff in #5003
  • Add new model for gemini-1.5-pro-exp-0801. by @Manouchehri in #5002
  • feat(vertex_ai_partner.py): add vertex ai codestral FIM support by @krrishdholakia in #5004
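
PR #4995 gates traceparent forwarding behind a module-level flag. Below is a minimal sketch of opting in from the Python SDK, assuming the flag is set exactly as named in that PR; the model and message are placeholders, not values from this release:

import litellm

# Off by default as of this release (per #4995): opt in to forwarding
# the incoming W3C `traceparent` header to the downstream LLM API.
litellm.forward_traceparent_to_llm_provider = True

# Placeholder call; any provider/model configured in your environment works.
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)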

Full Changelog: v1.42.8...v1.42.9.dev1

Docker Run LiteLLM Proxy

docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.42.9.dev1
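
Once the container is running, the proxy serves an OpenAI-compatible API on the mapped port. A minimal sketch calling it with the openai Python client; the model name and key are placeholders, not values shipped with this release:

from openai import OpenAI

# Point the standard OpenAI client at the local proxy; the key is a
# placeholder (or a LiteLLM virtual key if auth is configured).
client = OpenAI(base_url="http://localhost:4000", api_key="sk-placeholder")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; use a model configured on the proxy
    messages=[{"role": "user", "content": "Hello from the proxy!"}],
)
print(response.choices[0].message.content)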

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat


Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed ✅ | 130.0 | 151.23181146121527 | 6.377240371349092 | 0.0 | 1908 | 0 | 100.50357199997961 | 2558.7362329999905 |
| Aggregated | Passed ✅ | 130.0 | 151.23181146121527 | 6.377240371349092 | 0.0 | 1908 | 0 | 100.50357199997961 | 2558.7362329999905 |
