What's Changed since v0.3.3
- Add OpenAI and Claude thinking support - v0.4.0-rc.0 by @brainlid in #297
- vertex ai file url support by @ahsandar in #296
- Update docs for Vertex AI by @ahsandar in #304
- Fix ContentPart migration by @mathieuripert in #309
- Fix tests for content_part_for_api/2 of ChatOpenAI in v0.4.0-rc0 by @nallwhy in #300
- Fix `tool_calls` nil messages by @udoschneider in #314
- feat: Add structured output support to ChatMistralAI by @mathieuripert in #312
- feat: add configurable tokenizer to text splitters by @mathieuripert in #310
- simple formatting issue by @Bodhert in #307
- Update Message.new_system spec to accurately accept [ContentPart.t()]… by @rtorresware in #315
- Fix: Add token usage to ChatGoogleAI message metadata by @mathieuripert in #316
- feat: include raw API responses in LLM error objects for better debug… by @TwistingTwists in #317
- expanded docs and test coverage for prompt caching by @brainlid in #325
- Fix AWS Bedrock stream decoder ordering issue by @stevehodgkiss in #327
- significant updates for v0.4.0-rc.1 by @brainlid in #328
- filter out empty lists in message responses by @brainlid in #333
- fix: Require gettext ~> 0.26 by @mweidner037 in #332
- Add `retry: :transient` to Req for Anthropic models in stream mode by @jonator in #329
- fixed issue with poorly matching list in case by @brainlid in #334
- feat: Add organization ID as a parameter by @hjemmel in #337
- Add missing verbose_api field to ChatOllamaAI for streaming compatibility by @gur-xyz in #341
- Added usage data to the VertexAI Message response. by @raulchedrese in #335
- feat: add run mode: step by @CaiqueMitsuoka in #343
- feat: add support for multiple tools in run_until_tool_used by @fortmarek in #345
- Fix ChatOllamaAI stop sequences: change from string to array type by @gur-xyz in #342
- expanded logging for ChatAnthropic API errors by @brainlid in #349
- Prevent crash when ToolResult with string in ChatGoogleAI.for_api/1 by @nallwhy in #352
- Bedrock OpenAI-compatible API compatibility fix by @stevehodgkiss in #356
- added xAI Grok chat model support by @alexfilatov in #338
- Support thinking to ChatGoogleAI by @nallwhy in #354
- Add req_config to ChatMode.ChatGoogleAI by @nallwhy in #357
- Clean up treating MessageDelta in ChatModels.ChatGoogleAI by @nallwhy in #353
- Expose full response headers through a new on_llm_response_headers callback by @brainlid in #358
- only include "user" with OpenAI request when a value is provided by @brainlid in #364
- Handle no content parts responses in ChatGoogleAI by @nallwhy in #365
- Adds support for gpt-image-1 in LangChain.Images.OpenAIImage by @Ven109 in #360
- Prep for release v0.4.0-rc.2 by @brainlid in #366
- fix: handle missing finish_reason in streaming responses for LiteLLM compatibility by @fbettag in #367
- Add support for native tool calls to ChatVertexAI by @raulchedrese in #359
- Adds should_continue? optional function to mode step by @CaiqueMitsuoka in #361
- Add OpenAI Deep Research integration by @fbettag in #336
- Add `parallel_tool_calls` option to `ChatOpenAI` model by @martosaur in #371
- Add optional AWS session token handling in BedrockHelpers by @quangngd in #372
- fix: handle LiteLLM responses with null b64_json in OpenAIImage by @fbettag in #368
- Add Orq AI chat by @arjan in #377
- Add req_config to ChatModels.ChatOpenAI by @koszta in #376
- fix(ChatGoogleAI): Handle cumulative token usage by @mweidner037 in #373
- fix(ChatGoogleAI): Prevent error from thinking content parts by @mweidner037 in #374
- feat(ChatGoogleAI): Full thinking config by @mweidner037 in #375
- Support verbosity parameter for ChatOpenAI by @rohan-b99 in #379
- add retry_on_fallback? to chat model definition and all models by @brainlid in #350
- Prep for v0.4.0-rc.3 by @brainlid in #380
- Use moduledoc instead of doc for LLMChain documentation by @xxdavid in #384
- Support OTP 28 in CI by @kianmeng in #382
- OpenAI responses by @vasspilka in #381
- Add AGENTS.md and CLAUDE.md file support by @brainlid in #385
- Suppress the compiler warning messages for ChatBumblebee by @brainlid in #386
- fix: Support for json-schema in OpenAI responses API by @vasspilka in #387
- Prepare for v0.4.0 release by @brainlid in #388
New Contributors
- @ahsandar made their first contribution in #296
- @mathieuripert made their first contribution in #309
- @udoschneider made their first contribution in #314
- @Bodhert made their first contribution in #307
- @rtorresware made their first contribution in #315
- @TwistingTwists made their first contribution in #317
- @mweidner037 made their first contribution in #332
- @jonator made their first contribution in #329
- @hjemmel made their first contribution in #337
- @gur-xyz made their first contribution in #341
- @CaiqueMitsuoka made their first contribution in #343
- @fortmarek made their first contribution in #345
- @alexfilatov made their first contribution in #338
- @Ven109 made their first contribution in #360
- @martosaur made their first contribution in #371
- @quangngd made their first contribution in #372
- @arjan made their first contribution in #377
- @koszta made their first contribution in #376
- @rohan-b99 made their first contribution in #379
- @xxdavid made their first contribution in #384
Full Changelog: v0.3.3...v0.4.0