## What's Changed
- fix: use per-token costs for claude via vertex_ai by @spdustin in #4337
- [Feat] Admin UI - Show Cache hit stats by @ishaan-jaff in #4340
- fix - LiteLLM proxy /moderations endpoint returns 500 error when model is not specified by @ishaan-jaff in #4342 (see the example after this list)
- [Fix + Test] - Spend tags not getting stored on 1.40.9 by @ishaan-jaff in #4345
- Print context window fallbacks on startup to help verify configuration by @lolsborn in #4350
## New Contributors
**Full Changelog**: v1.40.21...v1.40.22
## Docker Run LiteLLM Proxy
```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.40.22
```
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed ✅ | 120.0 | 147.06905652027004 | 6.431081863451109 | 0.0 | 1924 | 0 | 100.04098199999589 | 1834.3141159999732 |
Aggregated | Passed ✅ | 120.0 | 147.06905652027004 | 6.431081863451109 | 0.0 | 1924 | 0 | 100.04098199999589 | 1834.3141159999732 |