## What's Changed
- (fix) GCS bucket logger - apply `truncate_standard_logging_payload_content` to `standard_logging_payload` and ensure GCS flushes queue on failures by @ishaan-jaff in #7519
- (Fix) Hashicorp secret manager - don't print Hashicorp secrets in debug logs by @ishaan-jaff in #7529
- [Bug-Fix]: None metadata not handled for `_PROXY_VirtualKeyModelMaxBudgetLimiter` hook by @ishaan-jaff in #7523
- Bump anthropic.claude-3-5-haiku-20241022-v1:0 to new limits by @Manouchehri in #7118
- Fix langfuse prompt management on proxy by @krrishdholakia in #7535
- (Feat) Hashicorp secret manager - use TLS cert authentication by @ishaan-jaff in #7532
- Fix OTEL message redaction + Langfuse key leak in logs by @krrishdholakia in #7516
- feat: implement support for `limit`, `order`, `before`, and `after` parameters in `get_assistants` by @jeansouzak in #7537 (usage sketch after this list)
- Add missing prefix for deepseek by @SmartManoj in #7508
- (fix) `aiohttp_openai/` route - get to 1K RPS on single instance by @ishaan-jaff in #7539 (see the sketch after this list)
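
As a usage illustration for #7537, here is a minimal sketch of the new `get_assistants` pagination parameters. It assumes an OpenAI API key is set in the environment and that the parameters pass through to the OpenAI Assistants list endpoint; `asst_abc123` is a placeholder cursor, not a real assistant ID.

```python
# Hedged sketch of the new get_assistants pagination parameters (PR #7537).
# Assumes OPENAI_API_KEY is set; "asst_abc123" is a placeholder cursor.
import litellm

assistants = litellm.get_assistants(
    custom_llm_provider="openai",
    limit=20,             # return at most 20 assistants
    order="desc",         # newest first, sorted by created_at
    after="asst_abc123",  # cursor: page past this assistant ID
)
for assistant in assistants.data:
    print(assistant.id, assistant.name)
```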
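
And a hedged sketch of sending a completion through the `aiohttp_openai/` route from #7539. The model name is a placeholder, and the exact provider-route semantics may vary by LiteLLM version; check the docs for your release.

```python
# Hedged sketch: send a chat completion through the aiohttp-based
# OpenAI route. Assumes OPENAI_API_KEY is set in the environment;
# "gpt-4o-mini" is a placeholder model name.
import litellm

response = litellm.completion(
    model="aiohttp_openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```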
## New Contributors
- @jeansouzak made their first contribution in #7537
- @SmartManoj made their first contribution in #7508
Full Changelog: v1.56.8...v1.56.8-dev2
## Docker Run LiteLLM Proxy
```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.56.8-dev2
```
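
Once the container is up, a quick smoke test against the OpenAI-compatible endpoint might look like the sketch below. The model name `gpt-3.5-turbo` and the key `sk-1234` are placeholders: both depend on the models and `LITELLM_MASTER_KEY` you configure on the proxy.

```python
# Hypothetical smoke test against the proxy started above.
# "sk-1234" and "gpt-3.5-turbo" are placeholders for your own
# master key and a model configured on the proxy.
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
```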
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Failed ❌ | 260.0 | 302.70 | 6.15 | 0.0 | 1839 | 0 | 230.90 | 2985.95 |
| Aggregated | Failed ❌ | 260.0 | 302.70 | 6.15 | 0.0 | 1839 | 0 | 230.90 | 2985.95 |