What's New
- Fixed O(N) chunk scan in context pack search: `searchContextPacks` now uses a direct index lookup instead of re-scanning the entire chunks array with string comparison after each search result.
- Fixed multi-query reranker only using the first query: When search was called with multiple queries, the Voyage reranker was scoring candidates against the first query only. All queries are now joined before reranking.
- Parallelized Ollama embedding batches: Ollama batches now fire concurrently via `withTaskGroup`, since Ollama runs locally with no rate limits. Cloud providers (Voyage, OpenAI-compatible) keep the sequential loop. Large KB indexing with Ollama should be noticeably faster.
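The index-lookup fix above can be sketched roughly as follows. All type and property names here (`Chunk`, `ContextPack`, `indexByID`) are illustrative, not the project's actual API; the point is replacing a per-result linear scan with a dictionary built once at init:

```swift
// Illustrative sketch only; names do not come from the project source.
struct Chunk {
    let id: String
    let text: String
}

struct ContextPack {
    let chunks: [Chunk]
    // Built once: chunk ID -> position in `chunks`.
    private let indexByID: [String: Int]

    init(chunks: [Chunk]) {
        self.chunks = chunks
        self.indexByID = Dictionary(
            uniqueKeysWithValues: chunks.enumerated().map { ($0.element.id, $0.offset) }
        )
    }

    // O(1) per search result, instead of an O(N) scan
    // comparing ID strings across the whole array.
    func chunk(withID id: String) -> Chunk? {
        guard let i = indexByID[id] else { return nil }
        return chunks[i]
    }
}
```

With N chunks and R search results, this turns an O(N·R) string-comparison loop into O(N) index construction plus O(R) lookups.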
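A minimal sketch of the batch parallelization, assuming an `embedBatch` closure standing in for the real Ollama embedding call (which the release notes do not show). Each batch is submitted as a task in a task group, and results are written back by batch index so ordering is preserved even though tasks finish in any order:

```swift
// Illustrative sketch; `embedAllBatches` and `embedBatch` are hypothetical names.
func embedAllBatches(
    _ batches: [[String]],
    embedBatch: @escaping @Sendable ([String]) async -> [[Double]]
) async -> [[[Double]]] {
    await withTaskGroup(of: (Int, [[Double]]).self) { group in
        // Fire all batches concurrently; fine for a local server
        // with no rate limits.
        for (i, batch) in batches.enumerated() {
            group.addTask { (i, await embedBatch(batch)) }
        }
        var results = Array(repeating: [[Double]](), count: batches.count)
        for await (i, vectors) in group {
            results[i] = vectors  // index keeps output aligned with input order
        }
        return results
    }
}
```

A rate-limited cloud provider would instead keep the original sequential `for` loop, awaiting each batch before sending the next.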
Contributors
Thanks to @genaardo for all three fixes in this release (#328)!