github deepset-ai/haystack v2.27.0-rc1

pre-release7 hours ago

⭐️ Highlights

🔌 Automatic List Joining in Pipeline

When a component expects a list as input, pipelines now automatically join multiple inputs into that list (no extra components needed), even if they come in different but compatible types. This enables patterns like combining a plain query string with a list of ChatMessage objects into a single list[ChatMessage] input.

Supported conversations:

Source Types Target Type Behavior
T + T list[T] Combines multiple inputs into a list of the same type.
T + list[T] list[T] Merges single items and lists into a single list.
str + ChatMessage list[str] Converts all inputs to str and combines into a list.
str + ChatMessage list[ChatMessage] Converts all inputs to ChatMessage and combines into a list.

Learn more about how to simplify list joins in pipelines in 📖 Smart Pipeline Connections: Implicit List Joining

🗄️ Better Developer Experience for DocumentStores

The metadata inspection and filtering utilities (count_documents_by_filter, count_unique_metadata_by_filter, get_metadata_field_min_max, etc.) are now available in the InMemoryDocumentStore, aligning it with other document stores.

You can prototype locally in memory and easily debug, filter, and inspect the data in the document store during development, then reuse the same logic in production. See all available methods in InMemoryDocumentStore API reference.

🚀 New Features

  • Added new operations to the InMemoryDocumentStore: count_documents_by_filter, count_unique_metadata_by_filter, get_metadata_fields_info, get_metadata_field_min_max, get_metadata_field_unique_values

  • AzureOpenAIChatGenerator now exposes a SUPPORTED_MODELS class variable listing supported model IDs, for example gpt-5-mini and gpt-4o. To view all supported models go to the API reference or run:

    from haystack.components.generators.chat import AzureOpenAIChatGenerator
    print(AzureOpenAIChatGenerator.SUPPORTED_MODELS)

    We will add this for other model providers in their respective ChatGenerator components step by step.

  • Added partial support for the image-text-to-text task in HuggingFaceLocalChatGenerator.

    This allows the use of multimodal models like Qwen 3.5 or Ministral with text-only inputs. Complete multimodal support via Hugging Face Transformers might be addressed in the future.

  • Added async filter helpers to the InMemoryDocumentStore: update_by_filter_async(), count_documents_by_filter_async(), and count_unique_metadata_by_filter_async().

⚡️ Enhancement Notes

  • Add async variants of metadata methods to InMemoryDocumentStore: get_metadata_fields_info_async(), get_metadata_field_min_max_async(), and get_metadata_field_unique_values_async(). These rely on the store's thread-pool executor, consistent with the existing async method pattern.
  • Add _to_trace_dict method to ImageContent and FileContent dataclasses. When tracing is enabled, the large base64-encoded binary fields (base64_image and base64_data) are replaced with placeholder strings (e.g. "Base64 string (N characters)"), consistent with the behavior of ByteStream._to_trace_dict.
  • Pipelines now support auto-variadic connections with type conversion. When multiple senders are connected to a single list-typed input socket, the senders no longer need to all produce the exact same type since compatible conversions are applied per edge. Supported scenarios include T + T -> list[T], T + list[T] -> list[T], str + ChatMessage -> list[str], str + ChatMessage -> list[ChatMessage], and all other str <-> ChatMessage conversion variants. This enables pipeline patterns like joining a plain query string with a list of ChatMessage objects into a single list[ChatMessage] input without any extra components.

🔒 Security Notes

  • Fixed an issue in ChatPromptBuilder where specially crafted template variables could be interpreted as structured content (e.g., images, tool calls) instead of plain text.

    Template variables are now automatically sanitized during rendering, ensuring they are always treated as plain text.

🐛 Bug Fixes

  • Fix malformed log format string in DocumentCleaner. The warning for documents with None content used %{document_id} instead of {document_id}, preventing proper interpolation of the document ID.
  • Fix ToolInvoker._merge_tool_outputs silently appending None to list-typed state when a tool's outputs_to_state source key is absent from the tool result. This is a common scenario with PipelineTool wrapping a pipeline that has conditional branches where not all outputs are always produced even if defined in outputs_to_state. The mapping is now skipped entirely when the source key is not present in the result dict.
  • Fixed an off-by-one error in InMemoryDocumentStore.write_documents that caused the BM25 average document length to be systematically underestimated.
  • Resolve $defs/$ref in tool parameter schemas before sending them to the HuggingFace API. The HuggingFace API does not support JSON Schema $defs references, which are generated by Pydantic when tool parameters contain dataclass types. This fix inlines all $ref pointers and removes the $defs section from tool schemas in HuggingFaceAPIChatGenerator.
  • The default bm25_tokenization_regex in InMemoryDocumentStore now uses r"(?u)\b\w+\b", including single-character words (e.g., "a", "C") in BM25 scoring. Previously, the regex r"(?u)\b\w\w+\b" excluded these tokens. This change may slightly alter retrieval results. To restore the old behavior, explicitly pass the previous regex when initializing the document store.

💙 Big thank you to everyone who contributed to this release!

@aayushbaluni, @anakin87, @bilgeyucel, @bogdankostic, @Br1an67, @ComeOnOliver, @davidsbatista, @jnMetaCode, @julian-risch, @Krishnachaitanyakc, @maxdswain, @pandego, @RMartinWhozfoxy, @satishkc7, @sjrl, @srini047, @SyedShahmeerAli12, @v-tan, @xr843

Don't miss a new haystack release

NewReleases is sending notifications on new releases.