What's Changed
- [FIX] Proxy - Set different locations per Vertex AI deployment on the LiteLLM proxy by @ishaan-jaff in #2234
- fix(proxy_server.py): introduces a beta endpoint for admins to view global spend by @krrishdholakia in #2236
- [FEAT] Track which models support function calling by @ishaan-jaff in #2241
- [FIX] Race condition with custom callbacks where async streaming was triggered twice by @ishaan-jaff in #2240
- [WIP] Allow the proxy admin to grant others access to view global spend by @krrishdholakia in #2231
- 👉 Support for Mistral AI tool calling is live now: https://docs.litellm.ai/docs/providers/mistral
- Check whether a model supports function calling and parallel function calling: https://docs.litellm.ai/docs/completion/function_call
Full Changelog: v1.27.14...v1.27.15