## What's Changed
- fix: handle missing finish_reason in streaming responses for LiteLLM compatibility by @fbettag in #367
- Add support for native tool calls to ChatVertexAI by @raulchedrese in #359
- Adds should_continue? optional function to mode step by @CaiqueMitsuoka in #361
- Add OpenAI Deep Research integration by @fbettag in #336
- Add `parallel_tool_calls` option to `ChatOpenAI` model by @martosaur in #371
- Add optional AWS session token handling in BedrockHelpers by @quangngd in #372
- fix: handle LiteLLM responses with null b64_json in OpenAIImage by @fbettag in #368
- Add Orq AI chat by @arjan in #377
- Add req_config to ChatModels.ChatOpenAI by @koszta in #376
- fix(ChatGoogleAI): Handle cumulative token usage by @mweidner037 in #373
- fix(ChatGoogleAI): Prevent error from thinking content parts by @mweidner037 in #374
- feat(ChatGoogleAI): Full thinking config by @mweidner037 in #375
- Support verbosity parameter for ChatOpenAI by @rohan-b99 in #379
- Add `retry_on_fallback?` to chat model definition and all models by @brainlid in #350
- Prep for v0.4.0-rc.3 by @brainlid in #380
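As a quick illustration of the new `ChatOpenAI` options above, here is a minimal sketch using the LangChain Elixir `ChatOpenAI.new!/1` map-based constructor. The field names `parallel_tool_calls`, `verbosity`, and `req_config` are taken from the PR titles (#371, #379, #376) and are shown as assumptions, not verified against the release:

```elixir
alias LangChain.ChatModels.ChatOpenAI

# Illustrative only: option names below are inferred from the PR titles.
chat =
  ChatOpenAI.new!(%{
    model: "gpt-4o",
    # Disable parallel tool calls (added in #371)
    parallel_tool_calls: false,
    # Verbosity parameter (added in #379)
    verbosity: "low",
    # Req request options passed through to the HTTP client (added in #376)
    req_config: [receive_timeout: 60_000]
  })
```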
## New Contributors
- @martosaur made their first contribution in #371
- @quangngd made their first contribution in #372
- @arjan made their first contribution in #377
- @koszta made their first contribution in #376
- @rohan-b99 made their first contribution in #379
**Full Changelog**: v0.4.0-rc.2...v0.4.0-rc.3