BREAKING: AI-based summarization and translation now require the full API path instead of just the endpoint base URL.
For example, for OpenAI-compatible services use https://api.openai.com/v1/chat/completions; for Ollama, use http://localhost:11434/api/generate.
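As a rough illustration of what the new setting expects, the sketch below sends a summarization request to the full chat-completions path. It assumes an OpenAI-compatible API; the function name, model, and payload shape are illustrative and not the app's actual code. The same idea applies to Ollama with http://localhost:11434/api/generate as the endpoint.

```typescript
// Minimal sketch: the configured endpoint must be the full API path,
// e.g. https://api.openai.com/v1/chat/completions, not just https://api.openai.com.
// The function name, model, and payload shape here are illustrative assumptions.
const endpoint = "https://api.openai.com/v1/chat/completions";

async function summarize(text: string, apiKey: string): Promise<string> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: `Summarize this article:\n\n${text}` }],
    }),
  });
  if (!res.ok) throw new Error(`AI request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```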
Added
- Added support for Ollama and other local LLMs for AI-based translation and summarization. (#251)
- Added support for limits and quotas on AI services to control usage and cost. (#252)
- Added hover-to-mark-as-read for articles in the article list. (#250)
- Added support for the DeepLX translation service. (#247)
Changed
- Improved the AI settings UI/UX.
- Refactored docs and workflows to improve maintainability and clarity.
- AI translation and summarization results are now cached to avoid redundant requests and improve performance.
- Recent articles are now cached to improve loading speed.
- When an AI request fails, the app now automatically falls back to local summarization/translation (see the sketch below).
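A minimal sketch of that fallback behavior, assuming hypothetical aiSummarize/localSummarize helpers (these are not the app's actual function names):

```typescript
// Illustrative only: the helper names and types are assumptions, not the app's real code.
type Summarizer = (text: string) => Promise<string>;

async function summarizeWithFallback(
  text: string,
  aiSummarize: Summarizer,    // remote AI service; may throw on network/quota errors
  localSummarize: Summarizer, // local, non-AI summarizer used as the fallback
): Promise<string> {
  try {
    return await aiSummarize(text);
  } catch (err) {
    console.warn("AI summarization failed, falling back to local:", err);
    return await localSummarize(text);
  }
}
```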
Fixed
- Fixed the issue where some OPML files could not be imported or exported correctly. (#249)
- Fixed the issue where proxy settings were not applied correctly for feed fetching. (#256)
- Fixed the issue where the app printed excessive debug logs in production builds.
- Fixed the issue where the network connection test failed when some test endpoints were unreachable. (#256)
- Fixed the issue where summarization failures broke article content rendering. (#242)
- Fixed the issue where article content fetching was blocked by feed refreshes.
- Fixed dark mode styling issues on Linux.