## What's Changed
- build(model_prices_and_context_window.json): add gpt-4.1 pricing by @krrishdholakia in #9990
- [Fixes/QA] For gpt-4.1 costs by @ishaan-jaff in #9991 (see the cost-lookup sketch after this list)
- Fix cost for Phi-4-multimodal output tokens by @emerzon in #9880
- chore(docs): update ordering of logging & observability docs by @marcklingen in #9994
- Updated cohere v2 passthrough by @krrishdholakia in #9997
- [Feat] Add support for `cache_control_injection_points` for Anthropic API, Bedrock API by @ishaan-jaff in #9996 (see the usage sketch after this list)
- [UI] Allow setting prompt `cache_control_injection_points` by @ishaan-jaff in #10000
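
The new gpt-4.1 entries land in `model_prices_and_context_window.json`, so litellm's cost helpers pick them up automatically. A minimal sketch, assuming an installed litellm version that includes #9990 and #9991 (the token counts here are illustrative):

```python
# Look up gpt-4.1 per-token costs from litellm's bundled pricing file.
# Assumes a litellm build that already ships the gpt-4.1 pricing entry.
import litellm

prompt_cost, completion_cost = litellm.cost_per_token(
    model="gpt-4.1",
    prompt_tokens=1_000,
    completion_tokens=500,
)
print(f"prompt: ${prompt_cost:.6f}, completion: ${completion_cost:.6f}")
```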
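For the `cache_control_injection_points` feature (#9996, #10000), here is a hedged SDK sketch: the parameter shape below (a list of injection points selecting a message by role) is an assumption based on the PR titles, so verify the exact field names against the docs for your version.

```python
# Sketch: auto-inject Anthropic prompt-caching markers at configured points
# instead of hand-editing cache_control into message content blocks.
# Assumption: cache_control_injection_points accepts a list of dicts that
# select messages by role (or index); check the litellm docs to confirm.
import os
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder key

response = completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=[
        {"role": "system", "content": "Long, reusable system prompt..."},
        {"role": "user", "content": "First question about the document."},
    ],
    cache_control_injection_points=[
        {"location": "message", "role": "system"},
    ],
)
print(response.choices[0].message.content)
```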
Full Changelog: v1.66.0-nightly...v1.66.1-nightly
## Docker Run LiteLLM Proxy
```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.66.1-nightly
```
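
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A quick smoke test, assuming a model named "gpt-4.1" has been configured and "sk-1234" stands in for your proxy key:

```python
# Smoke test against the proxy started above. Assumptions: it listens on
# localhost:4000, a model named "gpt-4.1" exists in the DB/config, and
# "sk-1234" is a placeholder for your actual proxy key.
import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://localhost:4000",
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```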
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 220.0 | 243.74 | 6.27 | 0.0 | 1876 | 0 | 197.45 | 3855.60 |
| Aggregated | Passed ✅ | 220.0 | 243.74 | 6.27 | 0.0 | 1876 | 0 | 197.45 | 3855.60 |