What's Changed
- (feat) add azure o1 models to model cost map by @ishaan-jaff in #6075
- (feat) add cost tracking for OpenAI prompt caching by @ishaan-jaff in #6055 (see the sketch after this list)
- (docs) add links / sections for router settings, general settings on proxy config.yaml by @ishaan-jaff in #6078
- (feat) add azure openai cost tracking for prompt caching by @ishaan-jaff in #6077
- openrouter/openai's litellm_provider should be openrouter, not openai by @GTonehour in #6079
- (code clean up) use a folder for gcs bucket logging + add readme in folder by @ishaan-jaff in #6080
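For the prompt caching cost tracking entries above (#6055 for OpenAI, #6077 for Azure OpenAI), a minimal caller-side sketch, assuming the standard litellm.completion and litellm.completion_cost interfaces and a model that supports prompt caching; the cached-token accounting happens inside LiteLLM's cost calculation, so the calling code needs nothing extra. The long repeated system prompt is only illustrative filler to make caching likely.

```python
# Minimal sketch: observe cached prompt tokens and the tracked cost.
# Assumes OPENAI_API_KEY is set and the model supports prompt caching.
import litellm

long_system_prompt = "You are a helpful assistant. " * 200  # illustrative filler

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": long_system_prompt},
        {"role": "user", "content": "Summarize the instructions above in one line."},
    ],
)

# Cached prompt tokens (if any) are reported by the provider in the usage block.
details = getattr(response.usage, "prompt_tokens_details", None)
cached = getattr(details, "cached_tokens", 0) if details else 0
print(f"cached prompt tokens: {cached}")

# completion_cost() reads the model cost map and accounts for cached tokens.
cost = litellm.completion_cost(completion_response=response)
print(f"estimated cost (USD): {cost:.6f}")
```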
New Contributors
- @GTonehour made their first contribution in #6079
Full Changelog: v1.48.15...v1.48.16
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.48.16
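Once the container is running, the proxy serves an OpenAI-compatible API on port 4000. A quick smoke test with the OpenAI Python client, assuming a model named gpt-4o-mini has been configured on the proxy and sk-1234 stands in for a valid proxy key (both are placeholders):

```python
# Point the standard OpenAI client at the local LiteLLM proxy started above.
from openai import OpenAI

client = OpenAI(
    api_key="sk-1234",                 # placeholder: proxy virtual key or master key
    base_url="http://localhost:4000",  # port published by `docker run -p 4000:4000`
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: any model configured on the proxy
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```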
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 150.0 | 178.42 | 6.29 | 0.0 | 1884 | 0 | 124.05 | 1785.52 |
| Aggregated | Passed ✅ | 150.0 | 178.42 | 6.29 | 0.0 | 1884 | 0 | 124.05 | 1785.52 |