🚨🚨 Minor updates were made to the DB schema in this release
🚨 Change to LiteLLM Helm: `run_gunicorn` is removed from the default Helm chart, in line with our production best practices: https://docs.litellm.ai/docs/proxy/prod
What's Changed
- [Feat] add failure callbacks from DB to proxy by @ishaan-jaff in #3775
- [Fix] - don't use `gunicorn` on litellm helm by @ishaan-jaff in #3783
- [Feat] LiteLLM Proxy: Enforce End-User TPM, RPM Limits by @ishaan-jaff in #3785 (see the sketch after this list)
- feat(schema.prisma): store model id + model group as part of spend logs allows precise model metrics by @krrishdholakia in #3789
- feat(proxy_server.py): enable admin to set tpm/rpm limits for end-users via UI by @krrishdholakia in #3787
- [Feat] Set Budgets for Users within a Team by @ishaan-jaff in #3790
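For the end-user rate limiting in #3785, the rough flow is: create a budget that carries TPM/RPM caps, then attach it to an end-user. Below is a minimal Python sketch of that flow; the `/budget/new` and `/end_user/new` paths, the field names, and the `sk-1234` key are assumptions for illustration, not confirmed signatures — check https://docs.litellm.ai/docs/proxy for the authoritative API.

```python
# Minimal sketch: enforce TPM/RPM limits for an end-user on a running
# LiteLLM proxy. Endpoint paths, JSON fields, and the master key are
# assumptions for illustration.
import requests

PROXY_BASE = "http://localhost:4000"           # assumed local proxy address
HEADERS = {"Authorization": "Bearer sk-1234"}  # assumed master key

# 1. Create a budget that caps tokens-per-minute and requests-per-minute.
requests.post(
    f"{PROXY_BASE}/budget/new",
    headers=HEADERS,
    json={"budget_id": "free-tier", "tpm_limit": 1000, "rpm_limit": 10},
).raise_for_status()

# 2. Attach the budget to an end-user; requests made on behalf of this
#    user are rejected once either limit is exceeded.
requests.post(
    f"{PROXY_BASE}/end_user/new",
    headers=HEADERS,
    json={"user_id": "customer-123", "budget_id": "free-tier"},
).raise_for_status()
```

The same limits can also be set from the Admin UI (#3787), so the API calls above are only needed if you manage end-users programmatically.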
Full Changelog: v1.37.20...v1.38.0
Docker Run LiteLLM Proxy
```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.38.0
```
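Once the container is up, you can smoke-test it with an OpenAI-style request. A minimal sketch, assuming a model named `gpt-3.5-turbo` is configured on the proxy and `sk-1234` is your master key (both placeholders):

```python
# Send one chat completion through the proxy started above.
import requests

resp = requests.post(
    "http://localhost:4000/chat/completions",
    headers={"Authorization": "Bearer sk-1234"},  # placeholder key
    json={
        "model": "gpt-3.5-turbo",  # placeholder model alias
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json())
```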
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat