github deepset-ai/haystack v2.10.0-rc1

Pre-release · 17 hours ago

Release Notes

v2.10.0-rc1

Highlights

We are introducing the `AsyncPipeline`: it supports running pipelines asynchronously, schedules components concurrently whenever possible, and leads to major speed improvements for any pipeline that can run workloads in parallel.

Major refactoring of Pipeline.run() to fix multiple bugs. We moved from a mostly graph-based execution logic to a dynamic, dataflow-driven one. While most pipelines should remain unaffected, we recommend carefully checking your pipeline executions to ensure their output hasn't changed.

Upgrade Notes

  • The DOCXToDocument converter now returns a Document object with DOCX metadata stored in the meta field as a dictionary under the key docx. Previously, the metadata was represented as a DOCXMetadata dataclass. This change does not impact reading from or writing to a Document Store.
  • Removed the deprecated NLTKDocumentSplitter; its functionality is now supported by the DocumentSplitter.
  • The deprecated FUNCTION role has been removed from the ChatRole enum. Use TOOL instead. The deprecated class method ChatMessage.from_function has been removed. Use ChatMessage.from_tool instead.
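
    A minimal migration sketch, assuming ChatMessage.from_tool takes the tool result string and the originating ToolCall from haystack.dataclasses (check the API reference for the exact signature):

    ```python
    from haystack.dataclasses import ChatMessage, ToolCall

    # The tool call produced by the model (tool_name and arguments are illustrative values)
    tool_call = ToolCall(tool_name="search", arguments={"q": "Who was Nikola Tesla?"})

    # Before 2.10: ChatMessage.from_function(result_text, name="search")
    # Now: pass the tool result together with the originating tool call
    message = ChatMessage.from_tool(tool_result="Nikola Tesla was an inventor.", origin=tool_call)
    ```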

New Features

  • Added a new component ListJoiner which joins lists of values from different components into a single list.
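
    A minimal standalone sketch; the variadic input/output socket name ("values") is an assumption, check the component docs for the exact name:

    ```python
    from haystack.components.joiners import ListJoiner

    joiner = ListJoiner()
    # Two upstream components would each send a list; the joiner flattens them into one
    result = joiner.run(values=[["doc-1", "doc-2"], ["doc-3"]])
    print(result["values"])  # expected: ["doc-1", "doc-2", "doc-3"]
    ```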

  • Introduced the OpenAPIConnector component, enabling direct invocation of REST endpoints as specified in an OpenAPI specification. This component is designed for direct REST endpoint invocation without LLM-generated payloads; users need to pass the run parameters explicitly.

    Example:

    ```python
    from haystack.utils import Secret
    from haystack.components.connectors.openapi import OpenAPIConnector

    connector = OpenAPIConnector(
        openapi_spec="https://bit.ly/serperdev_openapi",
        credentials=Secret.from_env_var("SERPERDEV_API_KEY"),
    )
    response = connector.run(
        operation_id="search",
        parameters={"q": "Who was Nikola Tesla?"}
    )
    ```

  • Added a new component LLMMetadataExtractor, which can be used in an indexing pipeline to extract metadata from documents based on a user-provided prompt and return the documents with the LLM output stored in their metadata field.

  • Add support for Tools in the Azure OpenAI Chat Generator.
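
    A hedged sketch of passing a Tool at initialization; it assumes AzureOpenAIChatGenerator accepts a tools parameter like the other chat generators and that the Azure endpoint and API key are configured via environment variables:

    ```python
    from haystack.components.generators.chat import AzureOpenAIChatGenerator
    from haystack.dataclasses import ChatMessage
    from haystack.tools import Tool

    def get_weather(city: str) -> str:
        # Placeholder implementation used only for illustration
        return f"Sunny in {city}"

    weather_tool = Tool(
        name="get_weather",
        description="Get the current weather for a city",
        parameters={"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]},
        function=get_weather,
    )

    llm = AzureOpenAIChatGenerator(azure_deployment="gpt-4o-mini", tools=[weather_tool])
    result = llm.run(messages=[ChatMessage.from_user("What's the weather in Berlin?")])
    print(result["replies"][0].tool_calls)
    ```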

  • Introduced CSVDocumentCleaner component for cleaning CSV documents.

    • Removes empty rows and columns, while preserving specified ignored rows and columns.
    • Customizable number of rows and columns to ignore during processing.
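
    A minimal sketch; the import path and the ignore_rows/ignore_columns parameter names are assumptions based on the notes in this release:

    ```python
    from haystack import Document
    from haystack.components.preprocessors import CSVDocumentCleaner

    csv_content = ",,\nname,age,\nAlice,30,\nBob,25,\n"

    cleaner = CSVDocumentCleaner(ignore_rows=0, ignore_columns=0)
    result = cleaner.run(documents=[Document(content=csv_content)])
    print(result["documents"][0].content)  # empty leading row and trailing column removed
    ```
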
  • Introducing CSVDocumentSplitter: The CSVDocumentSplitter splits CSV documents into structured sub-tables by recursively splitting on runs of empty rows and columns larger than a specified threshold. This is particularly useful when converting Excel files, which often have multiple tables within one sheet.
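
    A hedged sketch; the import path and the row_split_threshold parameter name are assumptions, check the component docs:

    ```python
    from haystack import Document
    from haystack.components.preprocessors import CSVDocumentSplitter

    # Two sub-tables separated by a run of empty rows
    csv_content = "a,b\n1,2\n,\n,\nc,d\n3,4\n"

    splitter = CSVDocumentSplitter(row_split_threshold=2)
    result = splitter.run(documents=[Document(content=csv_content)])
    print(len(result["documents"]))  # expected: 2 sub-table documents
    ```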

  • Drawing pipelines, i.e. calls to draw() or show(), can now be done using a custom Mermaid server and additional parameters. This allows for more flexibility in how pipelines are rendered. See Mermaid.ink's [documentation](https://github.com/jihchi/mermaid.ink) for more information on how to set up a custom server.
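
    A hedged sketch of rendering a pipeline through a self-hosted Mermaid server; the server_url and params argument names are assumptions based on this note, and the server URL is hypothetical:

    ```python
    from haystack import Pipeline

    pipeline = Pipeline()
    # ... add and connect components ...

    pipeline.draw(
        path="pipeline.png",
        server_url="https://mermaid.internal.example.com",  # hypothetical self-hosted mermaid.ink
        params={"format": "png", "theme": "neutral"},
    )
    ```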

  • Added a new AsyncPipeline implementation that allows pipelines to be executed from async code, supporting concurrent scheduling of pipeline components for faster processing.
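
    A minimal sketch of running a two-component pipeline from async code, assuming AsyncPipeline is importable from the top-level haystack package and exposes a run_async coroutine mirroring Pipeline.run:

    ```python
    import asyncio

    from haystack import AsyncPipeline
    from haystack.components.builders import PromptBuilder
    from haystack.components.generators import OpenAIGenerator

    pipeline = AsyncPipeline()
    pipeline.add_component("prompt_builder", PromptBuilder(template="Answer briefly: {{query}}"))
    pipeline.add_component("llm", OpenAIGenerator())
    pipeline.connect("prompt_builder", "llm")

    async def main():
        # Independent branches would be scheduled concurrently; this pipeline has a single branch
        result = await pipeline.run_async({"prompt_builder": {"query": "What is Haystack?"}})
        print(result["llm"]["replies"][0])

    asyncio.run(main())
    ```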

  • Added tool support to HuggingFaceLocalChatGenerator.

Enhancement Notes

  • Enhanced SentenceTransformersDocumentEmbedder and SentenceTransformersTextEmbedder to accept an additional parameter, which is passed directly to the underlying SentenceTransformer.encode method for greater flexibility in embedding customization (see the sketch after this list).
  • Added completion_start_time metadata to track time-to-first-token (TTFT) in streaming responses from Hugging Face API and OpenAI (Azure).
  • Enhancements to Date Filtering in MetadataRouter
    • Improved date parsing in filter utilities by introducing _parse_date, which first attempts datetime.fromisoformat(value) for backward compatibility and then falls back to dateutil.parser.parse() for broader ISO 8601 support.
    • Resolved a common issue where comparing naive and timezone-aware datetimes resulted in TypeError. Added _ensure_both_dates_naive_or_aware, which ensures both datetimes are either naive or aware. If one is missing a timezone, it is assigned the timezone of the other for consistency.
  • When Pipeline.from_dict receives an invalid type (e.g. empty string), an informative PipelineError is now raised.
  • Add jsonschema library as a core dependency. It is used in Tool and JsonSchemaValidator.
  • Added support for passing a streaming callback as a run parameter to the Hugging Face chat generators.
  • For the CSVDocumentCleaner, added the remove_empty_rows and remove_empty_columns parameters to control whether empty rows and columns are removed. Also added keep_id to optionally keep the original document ID.
  • Enhanced OpenAPIServiceConnector to support and be compatible with the new ChatMessage format.
  • Updated the Document's metadata after initializing the Document in DocumentSplitter, as requested in issue #8741.
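
For the SentenceTransformers embedders change above, a hedged sketch follows; the parameter is assumed to be named encode_kwargs and to be forwarded verbatim to SentenceTransformer.encode:

```python
from haystack.components.embedders import SentenceTransformersTextEmbedder

embedder = SentenceTransformersTextEmbedder(
    model="sentence-transformers/all-MiniLM-L6-v2",
    encode_kwargs={"precision": "int8"},  # forwarded to SentenceTransformer.encode
)
embedder.warm_up()
embedding = embedder.run(text="Haystack 2.10 release notes")["embedding"]
```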

Deprecation Notes

  • The ExtractedTableAnswer dataclass and the dataframe field in the Document dataclass are deprecated and will be removed in Haystack 2.11.0. Check out the GitHub discussion for motivation and details: #8688

Bug Fixes

  • Fixed a bug that caused the pyright type checker to fail for all component objects.
  • Mermaid graphs of Haystack pipelines are now compressed to reduce the size of the base64-encoded payload and avoid HTTP 400 errors when the graph is too large.
  • The DOCXToDocument component now skips comment blocks in DOCX files that previously caused errors.
  • Callable deserialization now works for all fully qualified import paths.
  • Fixed error messages for Document Classifier components that suggested using nonexistent components for text classification.
  • Fixed JSONConverter to properly skip converting JSON files that are not utf-8 encoded.
  • The Pipeline.run() refactoring fixes several execution issues:
    • acyclic pipelines with multiple lazy variadic components not running all components
    • cyclic pipelines not passing intermediate outputs to components outside the cycle
    • cyclic pipelines with two or more optional or greedy variadic edges showing unexpected execution behavior
    • cyclic pipelines with two cycles sharing an edge raising errors
  • Updated the PDFMinerToDocument convert function to insert double newlines between container_text elements so that passages can later be split by DocumentSplitter.
  • In the Hugging Face API embedders, the InferenceClient.feature_extraction method is now used instead of InferenceClient.post to compute embeddings. This ensures a more robust and future-proof implementation.
  • Improved OpenAIChatGenerator streaming response tool call processing: The logic now scans all chunks to correctly identify the first chunk with tool calls, ensuring accurate payload construction and preventing errors when tool call data isn’t confined to the initial chunk.
