HKUDS/LightRAG v1.4.7


Important Notes

  • The doc-id based chunk filtering feature has been removed from PostgreSQL vector storage.
  • The prompt template has been updated, invalidating all LLM caches.
  • The default value of the FORCE_LLM_SUMMARY_ON_MERGE environment variable has been changed from 4 to 8. This significantly reduces the number of LLM calls during the document indexing phase, shortening overall document processing time.
  • Added support for multiple rerank providers (Cohere AI, Jina AI, Aliyun Dashscope). If rerank was previously enabled, a new env var must be set to enable it again:
RERANK_BINDING=cohere
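Selecting a provider usually also requires a model name and an API key. A minimal sketch of a Cohere setup follows; every variable except RERANK_BINDING is an assumption based on env.example and should be checked against the file shipped with your version:

### Rerank provider sketch (names other than RERANK_BINDING are assumptions)
RERANK_BINDING=cohere
# RERANK_MODEL=rerank-v3.5
# RERANK_BINDING_API_KEY=your_cohere_api_key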
  • Introduced a new environment variable, LLM_TIMEOUT, to specifically control the Large Language Model (LLM) timeout. The existing TIMEOUT variable now exclusively manages the Gunicorn worker timeout. The default LLM timeout is set to 180 seconds. If you previously relied on the TIMEOUT variable for LLM timeout configuration, please update your settings to use LLM_TIMEOUT instead:
LLM_TIMEOUT=180
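The split between the two timeouts can be pictured in plain Python. The sketch below uses the official openai SDK's timeout parameter and is illustrative of the idea, not LightRAG's actual wiring:

import os
from openai import OpenAI

# LLM requests now use their own timeout (LLM_TIMEOUT, default 180 s);
# TIMEOUT only governs the Gunicorn worker and no longer affects the LLM client.
llm_timeout = float(os.getenv("LLM_TIMEOUT", "180"))
client = OpenAI(timeout=llm_timeout)  # per-request timeout in seconds for this client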
  • Added comprehensive environment variable settings for the OpenAI and Ollama Large Language Model (LLM) bindings.
    The generic TEMPERATURE environment variable for LLM temperature control has been deprecated. LLM temperature is now configured with binding-specific environment variables:
# Temperature setting for OpenAI binding
OPENAI_LLM_TEMPERATURE=0.8

# Temperature setting for Ollama binding
OLLAMA_LLM_TEMPERATURE=1.0

To mitigate endless output loops and prevent greedy decoding for Qwen3, set the temperature parameter to a value between 0.8 and 1.0. To disable the model's "Thinking" mode, please refer to the following configuration:

### Qwen3-specific parameters when deployed via vLLM
# OPENAI_LLM_EXTRA_BODY='{"chat_template_kwargs": {"enable_thinking": false}}'

### OpenRouter Specific Parameters
# OPENAI_LLM_EXTRA_BODY='{"reasoning": {"enabled": false}}'
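For context, JSON placed in OPENAI_LLM_EXTRA_BODY ends up as extra fields of the chat-completion request body, analogous to the extra_body parameter of the official openai Python SDK. A minimal sketch follows; the model name is a placeholder and the env-to-client plumbing is an assumption, not LightRAG's code:

import json
import os
from openai import OpenAI

# Whatever JSON is stored in OPENAI_LLM_EXTRA_BODY is merged into the request body
extra_body = json.loads(os.getenv("OPENAI_LLM_EXTRA_BODY", "{}"))

client = OpenAI()  # base_url / api_key come from the usual OPENAI_* env vars
response = client.chat.completions.create(
    model="qwen3-8b",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
    extra_body=extra_body,  # e.g. {"chat_template_kwargs": {"enable_thinking": False}}
)
print(response.choices[0].message.content)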

For a full list of supported options, use the following commands:

lightrag-server --llm-binding openai --help
lightrag-server --llm-binding ollama --help
lightrag-server --embedding-binding ollama --help
  • A full list of new env vars and changed default values:
# Timeout for LLM requests (seconds)
LLM_TIMEOUT=180

# Timeout for embedding requests (seconds)
EMBEDDING_TIMEOUT=30

### Number of summary segments or tokens to trigger LLM summary on entity/relation merge (at least 3 is recommended)
FORCE_LLM_SUMMARY_ON_MERGE=8

### Max description token size to trigger LLM summary
SUMMARY_MAX_TOKENS=1200

### Recommended LLM summary output length in tokens
SUMMARY_LENGTH_RECOMMENDED=600

### Maximum context size sent to LLM for description summary
SUMMARY_CONTEXT_SIZE=12000

### RERANK_BINDING type:  null, cohere, jina, aliyun
RERANK_BINDING=null

### Enable rerank by default in query params when RERANK_BINDING is not null
# RERANK_BY_DEFAULT=True

### chunk selection strategies
###     VECTOR: Pick KG chunks by vector similarity; the chunks delivered to the LLM align more closely with naive retrieval
###     WEIGHT: Pick KG chunks by entity and chunk weight; delivers chunks more purely related to the KG to the LLM
###     If reranking is enabled, the impact of the chunk selection strategy is diminished.
KG_CHUNK_PICK_METHOD=VECTOR

### Entity types that the LLM will attempt to recognize
ENTITY_TYPES=["person", "organization", "location", "event", "concept"]
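Most of these values are plain numbers or strings, while ENTITY_TYPES is a JSON array. A minimal sketch of reading them with the defaults listed above; the parsing shown is illustrative, not LightRAG's actual loader:

import json
import os

# Numeric settings fall back to the new defaults listed above
llm_timeout = int(os.getenv("LLM_TIMEOUT", "180"))
force_llm_summary_on_merge = int(os.getenv("FORCE_LLM_SUMMARY_ON_MERGE", "8"))
summary_max_tokens = int(os.getenv("SUMMARY_MAX_TOKENS", "1200"))

# The literal string "null" means rerank stays disabled
rerank_binding = os.getenv("RERANK_BINDING", "null")

# ENTITY_TYPES is a JSON array, so parse it rather than splitting on commas
entity_types = json.loads(
    os.getenv("ENTITY_TYPES", '["person", "organization", "location", "event", "concept"]')
)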

What's New

What's Fixed

  • Fix ollama stop option handling and enhance temperature configuration by @danielaskdd in #1909
  • Feat: Change embedding formats from float to base64 for efficiency by @danielaskdd in #1913
  • Refact: Optimized LLM Cache Hash Key Generation by Including All Query Parameters by @danielaskdd in #1915
  • Fix: Unify document chunks context format in only_need_context query by @danielaskdd in #1923
  • Fix: Update OpenAI embedding handling for both list and base64 embeddings by @danielaskdd in #1928
  • Fix: Initialize first_stage_tasks and entity_relation_task to prevent empty-task cancel errors by @danielaskdd in #1931
  • Fix: Resolve workspace isolation issues across multiple storage implementations by @danielaskdd in #1941
  • Fix: remove query params from cache key generation for keyword extraction by @danielaskdd in #1949
  • Refac: uniformly protected with the get_data_init_lock for all storage initializations by @danielaskdd in #1951
  • Fixes crash when processing files with UTF-8 encoding error by @danielaskdd in #1952
  • Fix Document Selection Issues After Pagination Implementation by @danielaskdd in #1966
  • Change the status from PROCESSING/FAILED to PENDING at the beginning of document processing pipeline by @danielaskdd in #1971
  • Refac: Increase file_path field length to 32768 and add schema migration for Milvus DB by @danielaskdd in #1975
  • Optimize keyword extraction prompt, and remove conversation history from keyword extraction by @danielaskdd in #1977
  • Fix(UI): Implement XLSX format upload support for web UI by @danielaskdd in #1982
  • Fix: resolved UTF-8 encoding error during document processing by @danielaskdd in #1983
  • Fix: Preserve Document List Pagination During Pipeline Status Changes by @danielaskdd in #1992
  • Update README-zh.md by @OnesoftQwQ in #1989
  • Fix: Added import of OpenAILLMOptions when using azure_openai by @thiborose in #1999
  • fix(webui): resolve document status grouping issue in DocumentManager by @danielaskdd in #2013
  • fix mismatch of 'error' and 'error_msg' in MongoDB by @LinkinPony in #2009
  • Fix UTF-8 Encoding Issues Causing Document Processing Failures by @danielaskdd in #2017
  • docs(config): fix typo in .env comments by @SandmeyerX in #2021
  • fix: adjust the EMBEDDING_BINDING_HOST for openai in the env.example by @pedrofs in #2026

New Contributors

Full Changelog: v1.4.6...v1.4.7
