run-llama/llama_index v0.9.12


New Features

  • Added a reuse_client option to the OpenAI/Azure LLMs to help with async timeouts; set it to False if you are hitting them (#9301, see the sketch after this list)
  • Added support for vLLM as an LLM backend (#9257)
  • Added support for Python 3.12 (#9304)
  • Added support for the claude-2.1 model name (#9275)
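
A minimal sketch of the new LLM options, assuming the OpenAI, Anthropic, and Vllm classes exported from llama_index.llms in the 0.9.x line; the model names below are illustrative, not defaults:

```python
from llama_index.llms import Anthropic, OpenAI, Vllm

# Disable HTTP client reuse if you hit async timeouts with OpenAI/Azure (#9301).
openai_llm = OpenAI(model="gpt-3.5-turbo", reuse_client=False)

# claude-2.1 is now an accepted Anthropic model name (#9275).
claude_llm = Anthropic(model="claude-2.1")

# Serve a local model through vLLM (#9257); the model name is illustrative.
vllm_llm = Vllm(model="facebook/opt-125m")

print(openai_llm.complete("Hello, world"))
```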

Bug Fixes / Nits

  • Fixed the embedding format for Bedrock Cohere embeddings (#9265)
  • Use delete_kwargs for filtering in the Weaviate vector store (#9300)
  • Fixed automatic Qdrant client construction (#9267, see the sketch after this list)
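
A short sketch of the Qdrant fix, under the assumption that QdrantVectorStore can now build its own client from connection parameters rather than requiring a pre-built client object; the url and collection_name values are placeholders:

```python
from llama_index.vector_stores import QdrantVectorStore

# With #9267, the store constructs the Qdrant client internally from
# the connection parameters passed here (assumed constructor arguments).
vector_store = QdrantVectorStore(
    collection_name="my_collection",
    url="http://localhost:6333",
)
```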
