What's Changed
- Fix `pkg_resources` warning by @msabramo in #3602
- Update pydantic code to fix warnings by @msabramo in #3600
- Add ability to customize slack report frequency by @msabramo in #3622
- Duplicate code by @rkataria1000 in #3594
- [Feature] Add cache to disk by @antonioloison in #3266 (see the sketch after this list)
- Logfire Integration by @elisalimli in #3444
- Ignore 0 failures and 0s latency in daily slack reports by @taralika in #3599
- feat - reset spend per team, api_key [Only Master Key] by @ishaan-jaff in #3626
- docs - use discord alerting by @ishaan-jaff in #3634
- Revert "Logfire Integration" by @krrishdholakia in #3637
- [Feat] Proxy - cancel tasks when fast api request is cancelled by @ishaan-jaff in #3640
- [Feat] Proxy + router - don't cooldown on 4XX errors that are not 429, 408, 401 by @ishaan-jaff in #3651
- cloned gpt-4o models into openrouter/openai in costs&context.json by @paul-gauthier in #3647
- [Fix] - Alerting on `/completions` - don't raise hanging request alert for /completions by @ishaan-jaff in #3653
- Fix Proxy Server - only show API base and model in server log exceptions, not on client side by @ishaan-jaff in #3655
- [Fix] Revert #3600 by @ishaan-jaff in #3664
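
PR #3266 above adds a disk-backed cache to the litellm SDK. Below is a minimal sketch of enabling it, assuming the `Cache(type="disk")` interface introduced by that PR; the model name is a placeholder and exact parameter names may differ from your installed version:

```python
# Sketch: enable the disk cache added in #3266 (interface assumed from that PR).
import litellm
from litellm import completion
from litellm.caching import Cache

# Route completion calls through an on-disk cache instead of in-memory only.
litellm.cache = Cache(type="disk")

# The second identical call should be served from the cache.
for _ in range(2):
    response = completion(
        model="gpt-3.5-turbo",  # placeholder model
        messages=[{"role": "user", "content": "Hello, world"}],
    )
    print(response.choices[0].message.content)
```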
New Contributors
- @rkataria1000 made their first contribution in #3594
- @antonioloison made their first contribution in #3266
- @taralika made their first contribution in #3599
Full Changelog: v1.37.9...v1.37.10
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.10
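
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A minimal sketch of calling it with the openai Python client; the API key and model name below are placeholders for whatever you have configured on your proxy:

```python
# Sketch: call the proxy started above via its OpenAI-compatible endpoint.
# "sk-1234" and "gpt-3.5-turbo" are placeholders for your proxy key and configured model.
import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://localhost:4000",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy"}],
)
print(response.choices[0].message.content)
```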
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat