## What's Changed
- feat(vertex): Use correct provider for response_schema support check (#5815) by @krrishdholakia in #5829
- [Feat-Router] Allow setting which environment a model can be used in by @ishaan-jaff in #5892
- [Feat] Improve OTEL tracking - require all Redis cache reads to be logged to OTEL by @ishaan-jaff in #5881
- [Proxy-Docs] Add docs for service accounts by @ishaan-jaff in #5900
- [Feat] Add Fireworks AI Llama 3.2 models + cost tracking by @ishaan-jaff in #5905
- Add gemini-1.5-pro-002 and gemini-1.5-flash-002 by @ushuz in #5879
- [Proxy Perf Improvement] Use dual cache for getting key and team objects by @ishaan-jaff in #5903
Full Changelog: v1.48.1...v1.48.2
## Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.2
```
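Once the container is up, you can sanity-check it with an OpenAI-compatible chat completion request. A minimal curl sketch, assuming you have configured one of the newly added models (e.g. `gemini-1.5-pro-002` from #5879) on your proxy under that name, and with `sk-1234` standing in for a valid proxy key:

```shell
# Send a test chat completion through the proxy on port 4000.
# "gemini-1.5-pro-002" and "sk-1234" are placeholders - substitute a model
# configured on your proxy and a valid virtual key.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gemini-1.5-pro-002",
    "messages": [{"role": "user", "content": "Hello from the proxy!"}]
  }'
```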
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 140.0 | 165.75 | 6.35 | 0.0 | 1900 | 0 | 118.05 | 1503.61 |
| Aggregated | Passed ✅ | 140.0 | 165.75 | 6.35 | 0.0 | 1900 | 0 | 118.05 | 1503.61 |