github deepset-ai/haystack v1.17.0


⭐ Highlights

🗣️ Introducing ConversationalAgent

Great news! We're introducing the ConversationalAgent, an Agent specifically designed for chat applications! With its memory integration, the new ConversationalAgent enables human-like conversations with large language models (LLMs). If you're worried about your model's token limit, you can condense the chat history with ConversationSummaryMemory before it is injected into the prompt.

To get started, just initialize ConversationalAgent with a PromptNode and start chatting.

from haystack.nodes import PromptNode
from haystack.agents.memory import ConversationSummaryMemory
from haystack.agents.conversational import ConversationalAgent

# Any PromptNode works; here we use OpenAI's gpt-3.5-turbo
prompt_node = PromptNode("gpt-3.5-turbo", api_key="<your OpenAI API key>", max_length=256)

# Condense the chat history so it fits within the model's token limit
summary_memory = ConversationSummaryMemory(prompt_node=prompt_node)

conversational_agent = ConversationalAgent(prompt_node=prompt_node, memory=summary_memory)
conversational_agent.run("What are the must-see attractions in Berlin?")

To try it out, check out the new ConversationalAgent Tutorial, see the full example, or visit our documentation!

🎉 Now using transformers 4.29.1

With this release, Haystack depends on transformers 4.29.1, the latest version of the library.
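
If you want to double-check which version your environment resolved to, a quick sanity check is:

import transformers

print(transformers.__version__)  # expected to print 4.29.1 with Haystack v1.17.0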

🧠 More LLMs

Haystack now supports Cohere's command and Anthropic's claude models!
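
Both can be used through PromptNode. Here is a minimal sketch; the model identifiers ("command", "claude-v1") and the API key placeholders are assumptions you may need to adjust for your account:

from haystack.nodes import PromptNode

# Cohere's command model (model name and key are illustrative)
cohere_pn = PromptNode("command", api_key="<your Cohere API key>", max_length=256)
print(cohere_pn("Name three use cases for semantic search."))

# Anthropic's claude model (model name and key are illustrative)
claude_pn = PromptNode("claude-v1", api_key="<your Anthropic API key>", max_length=256)
print(claude_pn("Name three use cases for semantic search."))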

🤖 New error reporting strategy around 3rd-party dependencies

One of the challenges with a multi-purpose NLP framework like Haystack is finding the sweet spot: a turn-key solution that covers multiple NLP use cases without descending into dependency hell. With the generative AI features we recently shipped, we received several requests to avoid pulling in unrelated dependencies when, say, all you need is a PromptNode.
We heard your feedback and lowered the number of packages a simple pip install farm-haystack pulls in (and we'll keep reducing it)! To keep the user experience as smooth as possible, we use the generalimport library to defer dependency errors from "import time" to "actual usage time", so you no longer have to ask yourself "Why do I need this database client to run PromptNode?"
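
Conceptually, the deferral works roughly like the sketch below. This is a simplified illustration of the idea, not Haystack's or generalimport's actual code: importing a missing optional dependency yields a placeholder object, and the ImportError is only raised once that placeholder is actually used.

# Simplified sketch of deferred optional imports (illustrative only)
class _MissingDependency:
    def __init__(self, name, error):
        self._name, self._error = name, error

    def __getattr__(self, attr):
        # Fail only when the missing package is actually used
        raise ImportError(
            f"Optional dependency '{self._name}' is required for this feature."
        ) from self._error

def optional_import(name):
    try:
        return __import__(name)
    except ImportError as exc:
        return _MissingDependency(name, exc)

# "import time" succeeds even if the package is absent...
weaviate = optional_import("weaviate")
# ...the error only surfaces at "actual usage time", e.g. weaviate.Client("http://localhost:8080")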

⚠️ MilvusDocumentStore Deprecated in Haystack

With Haystack 1.17, we have moved the MilvusDocumentStore out of the core haystack project, and we will maintain it in the haystack-extras repo. To continue using Milvus, check out the instructions on how to install the package separately in its readme.

What's Changed

⚠️ Breaking Changes

  • refactor: Update schema objects to handle Dataframes in to_{dict,json} and from_{dict,json} by @sjrl in #4747
  • chore: remove deprecated MilvusDocumentStore by @masci in #4951
  • chore: remove BaseKnowledgeGraph by @masci in #4953
  • chore: remove deprecated node PDFToTextOCRConverter by @masci in #4982

DocumentStores

  • fix: Add support for _split_overlap meta to Pinecone and dict metadata in general to Weaviate by @bogdankostic in #4805
  • fix: str issues in squad_to_dpr by @PhilipMay in #4826
  • feat: introduce generalimport by @ZanSara in #4662
  • feat: Support authentication using AuthBearerToken and AuthClientCredentials in Weaviate by @hsm207 in #4028

Pipeline

  • fix: loads local HF Models in PromptNode pipeline by @saitejamalyala in #4670
  • fix: README latest and main installation by @dfokina in #4741
  • fix: SentenceTransformersRanker's predict_batch returns wrong number of documents by @vblagoje in #4756
  • feat: add Google API to search engine providers by @Pouyanpi in #4722
  • bug: fix filtering in MemoryDocumentStore (v2) by @ZanSara in #4768
  • refactor: Extract ToolsManager, add it to Agent by composition by @vblagoje in #4794
  • chore: move custom linter to a separate package by @masci in #4790
  • refactor!: Deprecate name param in PromptTemplate and introduce template_name instead by @bogdankostic in #4810
  • chore: revert Deprecate name param in PromptTemplate and introduce prompt_name instead by @bogdankostic in #4834
  • chore: remove optional imports in v2 by @ZanSara in #4855
  • test: Update unit tests for schema by @sjrl in #4835
  • feat: allow filtering documents on all fields (v2) by @ZanSara in #4773
  • feat: Add Anthropic invocation layer by @silvanocerza in #4818
  • fix: improve Document comparison (v2) by @ZanSara in #4860
  • feat: Add Cohere PromptNode invocation layer by @vblagoje in #4827
  • fix: Support for gpt-4-32k by @dnetguru in #4825
  • fix: Document v2 JSON serialization by @ZanSara in #4863
  • fix: Dynamic max_answers for SquadProcessor (fixes IndexError when max_answers is less than the number of answers in the dataset) by @benheckmann in #4817
  • feat: Add agent memory by @vblagoje in #4829
  • fix: Make sure summary memory is cumulative by @vblagoje in #4932
  • feat: Add conversational agent by @vblagoje in #4931
  • build: Remove mmh3 dependency by @julian-risch in #4896
  • feat: Add max_tokens to BaseGenerator params by @vblagoje in #4168
  • fix: change parameter name to request_with_retry by @ZanSara in #4950
  • fix: Adjust tool pattern to support multi-line inputs by @vblagoje in #4801
  • feat: enable passing generation_kwargs to the PromptNode in pipeline.run() by @faaany in #4832
  • fix: Remove streaming LLM tracking; they are all streaming now by @vblagoje in #4944
  • feat: HFInferenceEndpointInvocationLayer streaming support by @vblagoje in #4819
  • fix: Fix request_with_retry kwargs by @silvanocerza in #4980

Documentation

  • docs: Small fix to PromptTemplate API docs by @sjrl in #4870
  • docstrings update in web.py by @dfokina in #4921

Other Changes

New Contributors

Full Changelog: v1.16.1...v1.17.0-rc1
