What's Changed
- fix(utils.py): return openai streaming prompt caching tokens by @krrishdholakia in #6051
- (fixes) gcs bucket key based logging by @ishaan-jaff in #6044
- (fix prometheus) track cooldown events for llm deployments by @ishaan-jaff in #6060
- (docs) add 1k rps load test doc by @ishaan-jaff in #6059
- (fixes) docs + qa - gcs key based logging by @ishaan-jaff in #6061
Full Changelog: v1.48.12...v1.48.14-stable
Docker Run LiteLLM Proxy
```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.48.14-stable
```
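Once the container is up, you can smoke-test it with a request to the `/chat/completions` endpoint (the same route used in the load test below). This is a minimal sketch; the model name and the `Authorization` key are placeholders and depend on your own proxy configuration.

```shell
# Placeholder model name and key — substitute the model and key configured in your deployment.
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello, are you up?"}]
  }'
```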
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 140.0 | 157.71 | 6.35 | 0.0 | 1901 | 0 | 109.31 | 1592.39 |
| Aggregated | Passed ✅ | 140.0 | 157.71 | 6.35 | 0.0 | 1901 | 0 | 109.31 | 1592.39 |