BerriAI/litellm v1.37.13


What's Changed

  • [Fix] - router/proxy: show better client-side errors when no healthy deployments available by @ishaan-jaff in #3679
  • [Fix] Flush langfuse logs on proxy shutdown by @ishaan-jaff in #3681
  • Allow non-admins to use /engines/{model}/chat/completions by @msabramo in #3663
  • Fix datetime.datetime.utcnow DeprecationWarning by @msabramo in #3686
  • [Fix] - include model name in cool down alerts by @ishaan-jaff in #3690
  • feat(lago.py): Enable Usage-based billing with lago by @krrishdholakia in #3685
  • [UI] End User Spend - Fix Timezone diff bug by @ishaan-jaff in #3692
  • [Feat] token_counter endpoint by @ishaan-jaff in #3682
  • Timeout param: custom_llm_provider needs to be set before setting timeout by @edwinjosegeorge in #3645
  • [Fix] AI Studio (Gemini API) returns invalid 1 index instead of 0 when "stream": false by @ishaan-jaff in #3693
  • fix(proxy_server.py): check + get end-user obj even for master key calls by @krrishdholakia in #3575
  • [Feat] Support Anthropic tools-2024-05-16 - Set Custom Anthropic Custom Headers by @ishaan-jaff in #3694
  • [Feat] Admin UI - show model prices as Per 1M tokens by @ishaan-jaff in #3696
  • Add commented set_verbose line to proxy_config by @msabramo in #3699
  • [Fix] Polish Models Page - set max width per column, fix bug with selecting models by @ishaan-jaff in #3698
  • [UI] Fix Round Team Spend, and Show Key Alias on Top API Keys by @ishaan-jaff in #3700
  • [Fix] allow users to opt into specific alert types + Introduce spend_report alert type by @ishaan-jaff in #3702
  • fix(replicate.py): move replicate calls to being async by @krrishdholakia in #3704
  • [FEAT] add cost tracking for Fine Tuned OpenAI ft:davinci-002 and ft:babbage-002 by @ishaan-jaff in #3705
  • Exclude custom headers from response if the value is None or empty string by @paneru-rajan in #3701
  • Fix(router.py): Kill a bug that forced Azure OpenAI to have an API ke… by @Manouchehri in #3706
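The `datetime.datetime.utcnow` fix above addresses Python 3.12's deprecation of the naive-UTC helpers. A minimal sketch of the replacement pattern (illustrative, not LiteLLM's actual code):

```python
from datetime import datetime, timezone

# Deprecated since Python 3.12 and returns a *naive* datetime (tzinfo is None):
#   now = datetime.utcnow()

# Preferred replacement: a timezone-aware datetime pinned to UTC.
now = datetime.now(timezone.utc)

assert now.tzinfo is timezone.utc  # aware, so comparisons and offsets are unambiguous
```

Aware datetimes avoid subtle bugs when mixing timestamps from different sources, which is why CPython is steering code away from `utcnow()`.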
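The Admin UI change that shows model prices "Per 1M tokens" is a simple unit conversion from the per-token costs LiteLLM tracks. A sketch with a made-up per-token price (real values come from LiteLLM's model cost map):

```python
# Hypothetical per-token price in USD; not an actual model's price.
input_cost_per_token = 0.00000025

# Displaying per 1M tokens, as the Admin UI now does:
cost_per_million = input_cost_per_token * 1_000_000
print(f"${cost_per_million:.2f} / 1M tokens")  # $0.25 / 1M tokens
```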

https://docs.litellm.ai/docs/providers/anthropic#forcing-anthropic-tool-use
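The forced tool-use docs linked above build on Anthropic's `tool_choice` parameter. A sketch of an OpenAI-style request shape of the kind LiteLLM accepts; the model name and `get_weather` tool are illustrative assumptions, not taken from this release:

```python
# Illustrative payload only; see the linked docs for the exact shape.
request = {
    "model": "claude-3-opus-20240229",
    "messages": [{"role": "user", "content": "What's the weather in Boston?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                "description": "Get current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Forcing the model to call this specific tool:
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}

print(request["tool_choice"]["function"]["name"])  # get_weather
```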

New Contributors

Full Changelog: v1.37.12...v1.37.13

Docker Run LiteLLM Proxy

```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.37.13
```

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
