What's Changed
- Nova Canvas complete image generation tasks (#9177) by @krrishdholakia in #9525
- [Feature]: Support for Fine-Tuned Vertex AI LLMs by @ishaan-jaff in #9542
- feat(prisma-migrations): add baseline db migration file by @krrishdholakia in #9565
- Add Daily User Spend Aggregate view - allows UI Usage tab to work at >1m rows by @krrishdholakia in #9538
- Support Gemini audio token cost tracking + fix openai audio input token cost tracking by @krrishdholakia in #9535
- [Reliability Fixes] - Gracefully handle exceptions when DB is having an outage by @ishaan-jaff in #9533
- [Reliability Fix] - Allow Pods to start up + pass /health/readiness when allow_requests_on_db_unavailable: True and DB is down by @ishaan-jaff in #9569
- Add OpenAI gpt-4o-transcribe support by @krrishdholakia in #9517
- Allow viewing keyinfo on request logs by @krrishdholakia in #9568
- Allow team admins to add/update/delete models on UI + show api base and model id on request logs by @krrishdholakia in #9572
- Litellm fix db testing by @krrishdholakia in #9593
- Litellm new UI build by @krrishdholakia in #9601
- Support max_completion_tokens on Mistral by @Cmancuso in #9589
- Revert "Support max_completion_tokens on Mistral" by @krrishdholakia in #9604
- fix(mistral_chat_transformation.py): add missing comma by @krrishdholakia in #9606
- Support discovering gemini, anthropic, xai models by calling their /v1/model endpoint by @krrishdholakia in #9530
- Connect UI to "LiteLLM_DailyUserSpend" spend table - enables usage tab to work at 1m+ spend logs by @krrishdholakia in #9603
- Update README.md by @krrishdholakia in #9616
- fix(proxy_server.py): get master key from environment, if not set in … by @krrishdholakia in #9617
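The reliability fixes above hinge on the allow_requests_on_db_unavailable flag. As a rough sketch (verify the exact key placement against the LiteLLM docs for your version), it lives under general_settings in the proxy config:

```yaml
# proxy config.yaml (sketch; key names assumed from the release notes)
general_settings:
  # Keep serving traffic and pass /health/readiness checks
  # even when the database is unreachable
  allow_requests_on_db_unavailable: true
```

With this set, pods can come up and report ready during a DB outage instead of failing their readiness probes.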
New Contributors
Full Changelog: v1.64.1-nightly...v1.65.0.rc
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.65.0.rc
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Failed ❌ | 540.0 | 624.37 | 5.47 | 0.0 | 1637 | 0 | 487.16 | 2776.38 |
| Aggregated | Failed ❌ | 540.0 | 624.37 | 5.47 | 0.0 | 1637 | 0 | 487.16 | 2776.38 |