## What's Changed
- perf: remove 'always_read_redis' - adding +830ms on each llm call by @krrishdholakia in #6414
- feat(litellm_logging.py): refactor standard_logging_payload function … by @krrishdholakia in #6388
- LiteLLM Minor Fixes & Improvements (10/23/2024) by @krrishdholakia in #6407
- allow configuring httpx hooks for AsyncHTTPHandler (#6290) by @krrishdholakia in #6415 (see the event-hook sketch after this list)
- feat(proxy_server.py): check if views exist on proxy server startup +… by @krrishdholakia in #6360
- feat(litellm_pre_call_utils.py): support 'add_user_information_to_llm… by @krrishdholakia in #6390
- (admin ui) - show created_at for virtual keys by @ishaan-jaff in #6429
- (feat) track created_at, updated_at for virtual keys by @ishaan-jaff in #6428
- Code cov - add checks for patch and overall repo by @ishaan-jaff in #6436
- (admin ui / auth fix) Allow internal user to call /key/{token}/regenerate by @ishaan-jaff in #6430
- LiteLLM Minor Fixes & Improvements (10/24/2024) by @krrishdholakia in #6421
- (proxy audit logs) fix serialization error on audit logs by @ishaan-jaff in #6433
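
One item worth a closer look is the configurable httpx hooks for AsyncHTTPHandler (#6415). The snippet below is a minimal sketch of httpx's own documented event-hook mechanism, logging each outbound request and each response status; how exactly these hooks are passed through to LiteLLM's AsyncHTTPHandler is an assumption here, so treat the wiring as illustrative rather than the library's exact API.

```python
# Sketch: httpx event hooks for request/response logging.
# The event_hooks mechanism is httpx's documented API; wiring these hooks
# into LiteLLM's AsyncHTTPHandler (per #6415) is assumed, not verified.
import asyncio
import httpx


async def log_request(request: httpx.Request) -> None:
    # Fires just before the request goes out on the wire.
    print(f"-> {request.method} {request.url}")


async def log_response(response: httpx.Response) -> None:
    # Fires once response headers are available.
    print(f"<- {response.status_code} {response.request.url}")


async def main() -> None:
    hooks = {"request": [log_request], "response": [log_response]}
    async with httpx.AsyncClient(event_hooks=hooks) as client:
        await client.get("https://example.com")  # hypothetical target URL


if __name__ == "__main__":
    asyncio.run(main())
```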
**Full Changelog**: v1.50.4...v1.51.0-stable
## Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.51.0-stable
```
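
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A minimal smoke test with the official OpenAI Python SDK might look like the sketch below; the `api_key` value and model name are placeholders that depend on how your proxy is configured.

```python
# Minimal sketch: call the LiteLLM proxy via the OpenAI Python SDK.
# "sk-1234" and "gpt-3.5-turbo" are placeholders; substitute the master
# key and model aliases configured on your proxy.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # the proxy started by `docker run` above
    api_key="sk-1234",                 # placeholder proxy key
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model alias on the proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```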
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 220.0 | 259.35 | 6.15 | 0.0 | 1839 | 0 | 207.74 | 1588.28 |
| Aggregated | Passed ✅ | 220.0 | 259.35 | 6.15 | 0.0 | 1839 | 0 | 207.74 | 1588.28 |