## What's Changed
- feat(router.py): add flag for mock testing load balancing for rate limit errors by @krrishdholakia in #5036
- Handle bedrock tool calling in stream_chunk_builder by @jcheng5 in #5025
- feat(anthropic_adapter.py): support streaming requests for /v1/messages endpoint by @krrishdholakia in #5040
- [Feat-Proxy] Log request/response on GCS by @ishaan-jaff in #5047
- [Proxy-Fix] Requests that are incorrectly flagged as admin-only paths by @ishaan-jaff in #5050
- [FIX] allow setting UI BASE path by @ishaan-jaff in #4142
- Revert "[FIX] allow setting UI BASE path" by @ishaan-jaff in #5054
- [Proxy Fix] Allow running UI on custom path by @ishaan-jaff in #5056
- feat(caching.py): enable caching on provider-specific optional params by @krrishdholakia in #5051 (see the example after this list)
- OTEL - Log DB queries / functions on OTEL by @ishaan-jaff in #5059
- Fix - add debug statements when connecting to prisma DB by @ishaan-jaff in #5058
- fix: bump default allowed_fails + reduce default db pool limit by @krrishdholakia in #5052
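With #5051, provider-specific optional params (e.g. Anthropic's `top_k`) can participate in caching like any other call parameter. A minimal sketch using litellm's built-in cache, assuming default in-memory caching; the model name and parameter values here are illustrative, not part of the release notes:

```python
import litellm
from litellm.caching import Cache

# Enable litellm's built-in cache (in-memory by default).
litellm.cache = Cache()

# top_k is an Anthropic-specific optional param; per #5051 it is now
# handled by the cache like any other call parameter.
response = litellm.completion(
    model="anthropic/claude-3-haiku-20240307",  # illustrative model
    messages=[{"role": "user", "content": "Hello!"}],
    top_k=40,
    caching=True,
)
```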
## New Contributors
**Full Changelog**: v1.42.12...v1.43.0
## Docker Run LiteLLM Proxy
```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.43.0
```
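Once the container is up, the proxy speaks the OpenAI API on port 4000. A quick smoke test with the standard `openai` Python client; the API key and model name below are placeholders for whatever your proxy is configured with:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local proxy.
# "sk-1234" is a placeholder; use the key your proxy expects.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any model routed by the proxy
    messages=[{"role": "user", "content": "Hello from the proxy!"}],
)
print(resp.choices[0].message.content)
```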
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 85 | 100.15 | 6.56 | 0.0 | 1962 | 0 | 67.95 | 1520.48 |
| Aggregated | Passed ✅ | 85 | 100.15 | 6.56 | 0.0 | 1962 | 0 | 67.95 | 1520.48 |