github deepset-ai/haystack v2.2.0-rc2

Release Notes

v2.2.0-rc1

Highlights

The Multiplexer component proved to be hard to explain and to understand. After reviewing its use cases, the documentation was rewritten and the component was renamed to BranchJoiner to better reflect its functionality.

Added the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables to the OpenAI components.
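
A minimal sketch of both configuration paths, assuming an OPENAI_API_KEY is available in the environment and that the `__init__` parameter names mirror the environment variables:

```python
import os

from haystack.components.generators import OpenAIGenerator

# Option 1: configure via environment variables, read when the component is created
os.environ["OPENAI_TIMEOUT"] = "30"        # seconds
os.environ["OPENAI_MAX_RETRIES"] = "2"
generator = OpenAIGenerator()

# Option 2: pass the values explicitly at __init__ instead of using the env vars
generator = OpenAIGenerator(timeout=30.0, max_retries=2)
```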

⬆️ Upgrade Notes

  • BranchJoiner has the same interface as Multiplexer. To upgrade your code, rename any occurrence of Multiplexer to BranchJoiner and adjust the imports accordingly, as shown in the sketch below.
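
A quick sketch of the rename, assuming the pre-2.2 import path haystack.components.others for Multiplexer:

```python
# Before (Haystack < 2.2):
# from haystack.components.others import Multiplexer
# joiner = Multiplexer(type_=str)

# After: same interface, new name and import path
from haystack.components.joiners import BranchJoiner

joiner = BranchJoiner(type_=str)
```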

🚀 New Features

  • Added BranchJoiner, which will eventually replace Multiplexer.
  • AzureOpenAIGenerator and AzureOpenAIChatGenerator can now be configured by passing a timeout for the underlying AzureOpenAI client.
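
A minimal sketch, with placeholder endpoint and deployment values and the API key expected in the AZURE_OPENAI_API_KEY environment variable:

```python
from haystack.components.generators import AzureOpenAIGenerator

# timeout (in seconds) is forwarded to the underlying AzureOpenAI client
generator = AzureOpenAIGenerator(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    azure_deployment="<your-deployment>",
    timeout=30.0,
)
```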

⚡️ Enhancement Notes

  • ChatPromptBuilder now supports changing its template at runtime. This allows you to define a default template and then change it based on your needs at runtime (see the sketch after this list).
  • If an LLM-based evaluator (e.g., Faithfulness or ContextRelevance) is initialised with raise_on_failure=False and a call to an LLM fails or an LLM outputs invalid JSON, the score of the sample is set to NaN instead of raising an exception. The user is notified with a warning indicating the number of requests that failed.
  • Adds inference mode to the model call of ExtractiveReader. This prevents gradients from being calculated during inference in PyTorch.
  • DocumentCleaner now has an optional keep_id attribute; if set to True, the document IDs are kept unchanged after cleanup.
  • DocumentSplitter now has an optional split_threshold parameter. Use it when you would rather not split inputs that are only slightly longer than the allowed split_length. If a chunk produced during splitting is smaller than split_threshold, it is concatenated with the previous chunk. This avoids chunks that are too small to be meaningful (see the sketch after this list).
  • Re-implemented InMemoryDocumentStore BM25 search with incremental indexing, avoiding re-creating the entire inverted index for every new query. This change also removes the dependency on haystack_bm25. Please refer to PR #7549 for the full context.
  • Improved MIME type management by directly setting MIME types on ByteStreams, enhancing the overall handling and routing of different file types. This update makes MIME type data more consistently accessible and simplifies the process of working with various document formats.
  • PromptBuilder now supports changing its template at runtime (e.g. for prompt engineering). This allows you to define a default template and then change it based on your needs at runtime (see the sketch after this list).
  • You can now set the timeout and max_retries parameters on OpenAI components by setting the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables or by passing them at __init__.
  • The DocumentJoiner component's run method now accepts a top_k parameter, allowing users to specify the maximum number of documents to return at query time (see the sketch after this list). This fixes issue #7702.
  • Enforce JSON mode on OpenAI LLM-based evaluators so that they always return valid JSON output. This ensures that the output is always in a consistent format, regardless of the input.
  • Make warm_up() usage consistent across the codebase.
  • Create a class hierarchy for pipeline classes and move the run logic into the child class. This is preparation work for introducing multiple run strategies.
  • Make SerperDevWebSearch more robust when snippet is not present in the API response.
  • Make SparseEmbedding a dataclass; this makes it easier to use the class with Pydantic.
  • `HTMLToDocument`: change the HTML conversion backend from boilerpy3 to trafilatura, which is more robust and better maintained.
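
A minimal sketch of the runtime template override mentioned in the ChatPromptBuilder and PromptBuilder notes above (the templates and variables are illustrative):

```python
from haystack.components.builders import ChatPromptBuilder, PromptBuilder
from haystack.dataclasses import ChatMessage

# Default templates defined at init time
prompt_builder = PromptBuilder(template="Answer the question: {{ question }}")
chat_builder = ChatPromptBuilder(template=[ChatMessage.from_user("Answer the question: {{ question }}")])

# Override the template for a single call at runtime
result = prompt_builder.run(
    template="Answer the question in one sentence: {{ question }}",
    question="What is Haystack?",
)
chat_result = chat_builder.run(
    template=[ChatMessage.from_user("Answer the question in one sentence: {{ question }}")],
    question="What is Haystack?",
)
```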
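
A sketch of the split_threshold behaviour described in the DocumentSplitter note above, with illustrative values:

```python
from haystack import Document
from haystack.components.preprocessors import DocumentSplitter

# Chunks shorter than split_threshold (counted in the split_by unit) are merged into the previous chunk
splitter = DocumentSplitter(split_by="word", split_length=200, split_overlap=0, split_threshold=20)
result = splitter.run(documents=[Document(content="A long text to be split into chunks ...")])
chunks = result["documents"]
```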
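
And a sketch of the new top_k parameter on DocumentJoiner.run (documents and top_k values are illustrative):

```python
from haystack import Document
from haystack.components.joiners import DocumentJoiner

joiner = DocumentJoiner()
# When calling run directly, the variadic "documents" input is a list of document lists;
# top_k caps how many documents are returned
result = joiner.run(
    documents=[[Document(content="doc A"), Document(content="doc B")], [Document(content="doc C")]],
    top_k=2,
)
```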

⚠️ Deprecation Notes

  • Multiplexer is now deprecated.
  • DynamicChatPromptBuilder has been deprecated as ChatPromptBuilder fully covers its functionality. Use ChatPromptBuilder instead.
  • DynamicPromptBuilder has been deprecated as PromptBuilder fully covers its functionality. Use PromptBuilder instead.
  • The following parameters of HTMLToDocument are ignored and will be removed in Haystack 2.4.0: extractor_type and try_others.

🐛 Bug Fixes

  • FaithfulnessEvaluator and ContextRelevanceEvaluator now return 0 instead of NaN when applied to an empty context or empty statements.
  • Fixed the Azure generator components; they were missing the @component decorator.
  • Updates the from_dict method of SentenceTransformersTextEmbedder, SentenceTransformersDocumentEmbedder, NamedEntityExtractor, SentenceTransformersDiversityRanker and LocalWhisperTranscriber to allow None as a valid value for device when deserializing from a YAML file. This allows a deserialized pipeline to automatically determine which device to use via the ComponentDevice.resolve_device logic.
  • Fix the broken serialization of HuggingFaceAPITextEmbedder, HuggingFaceAPIDocumentEmbedder, HuggingFaceAPIGenerator, and HuggingFaceAPIChatGenerator.
  • Fix NamedEntityExtractor crashing in Python 3.12 if constructed using a string backend argument.
  • Fixed the PdfMinerToDocument converter's outputs to be properly wired up to 'documents'.
  • Add to_dict method to DocumentRecallEvaluator to allow proper serialization of the component.
  • Improves/fixes type serialization of PEP 585 types (e.g. list[Document]) and their nested versions. This improvement enables better serialization of generics and nested types and improves/fixes matching of list[X] and List[X] types in component connections after serialization.
  • Fixed (de)serialization of NamedEntityExtractor. Includes updated tests verifying these fixes when NamedEntityExtractor is used in pipelines.
  • The include_outputs_from parameter in Pipeline.run now correctly returns outputs of components with multiple outputs.
  • Return an empty list of answers when ExtractiveReader receives an empty list of documents instead of raising an exception.
