What's Changed
- Bedrock Embeddings refactor + model support by @krrishdholakia in #5462
- Fix response_format={'type': 'json_object'} not working for Azure models by @simonsanvil in #5468
- LiteLLM minor fixes + improvements (31/08/2024) by @krrishdholakia in #5464
- (gemini): Fix Cloudflare AI Gateway typo. by @Manouchehri in #5429
- [PRICING] Add pricing for ft:gpt-3.5-turbo-* by @kiriloman in #5471
- Azure Service Principal with Secret authentication workflow. (#5131) by @krrishdholakia in #5437
- LiteLLM Minor Fixes + Improvements by @krrishdholakia in #5474
- [Feat] Add AI21 /chat API by @ishaan-jaff in #5478
- [Feat] Track Usage for /streamGenerateContent endpoint by @ishaan-jaff in #5480
- [Feat-Proxy] track imagen /predict in LiteLLM spend logs by @ishaan-jaff in #5481
- [Feat] track embedding /predict in spend logs by @ishaan-jaff in #5482
- feat(router.py): Support Loadbalancing batch azure api endpoints by @krrishdholakia in #5469
- fix(router.py): fix inherited type by @krrishdholakia in #5485
Full Changelog: v1.44.14...v1.44.15
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.44.15
```
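Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A minimal smoke test, assuming a model named `gpt-3.5-turbo` is configured on the proxy and `sk-1234` stands in for your proxy key (both are placeholders):

```shell
# Send a test chat completion through the proxy
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello, proxy!"}]
  }'
```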
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 150.0 | 172.03 | 6.37 | 0.0 | 1907 | 0 | 115.27 | 2482.18 |
| Aggregated | Passed ✅ | 150.0 | 172.03 | 6.37 | 0.0 | 1907 | 0 | 115.27 | 2482.18 |