# 🚀 KnowNote v1.0.5
This release introduces local LLM support with Ollama, a complete provider system architecture overhaul, and various UI improvements.
## ✨ Major Features

### 🤖 Ollama Local LLM Provider
- Run AI models completely offline with Ollama integration
- Support for both chat and embedding capabilities
- OpenAI-compatible API implementation for seamless integration
- Default endpoint: `http://localhost:11434/v1`
- Flexible configuration with custom server address support
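Because the endpoint speaks the OpenAI chat-completions protocol, a request can be built exactly as for any OpenAI-compatible server. A minimal TypeScript sketch, assuming Node 18+ (`buildChatRequest` and `chat` are illustrative helpers, not KnowNote's actual API):

```typescript
// Sketch of an OpenAI-compatible chat request aimed at a local Ollama
// server. The helper names here are hypothetical, for illustration only.
const OLLAMA_BASE_URL = "http://localhost:11434/v1";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Pure function: assembles the URL and JSON body for a chat completion.
function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    url: `${OLLAMA_BASE_URL}/chat/completions`,
    body: { model, messages, stream: false },
  };
}

// Ollama ignores the API key, but OpenAI-style clients require one,
// so any placeholder value works.
async function chat(model: string, prompt: string): Promise<string> {
  const { url, body } = buildChatRequest(model, [{ role: "user", content: prompt }]);
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: "Bearer ollama" },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```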
### 🏗️ Provider System Architecture Overhaul
- Capability-based design: Modular chat, embedding, rerank, and image generation capabilities
- Config-driven architecture: Simplified provider registration and management
- Unified OpenAI-compatible implementation: Reduced code duplication
- Registry pattern with `ProviderDescriptor` and `ProviderRegistry`
- Cleaner separation between configuration and runtime logic
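The registry pattern above can be sketched as follows. Only the type names `ProviderDescriptor` and `ProviderRegistry` come from this release; the fields and methods shown are assumptions for illustration:

```typescript
// Hypothetical sketch of a config-driven provider registry.
type Capability = "chat" | "embedding" | "rerank" | "image";

interface ProviderDescriptor {
  id: string;                // assumed field: stable provider key
  displayName: string;       // assumed field: label shown in the UI
  defaultBaseUrl: string;    // assumed field: OpenAI-compatible endpoint
  capabilities: Capability[];
}

class ProviderRegistry {
  private providers = new Map<string, ProviderDescriptor>();

  register(desc: ProviderDescriptor): void {
    this.providers.set(desc.id, desc);
  }

  get(id: string): ProviderDescriptor | undefined {
    return this.providers.get(id);
  }

  // Find every registered provider that supports a given capability.
  withCapability(cap: Capability): ProviderDescriptor[] {
    return [...this.providers.values()].filter((p) => p.capabilities.includes(cap));
  }
}

// Registering a provider becomes a pure configuration step.
const registry = new ProviderRegistry();
registry.register({
  id: "ollama",
  displayName: "Ollama",
  defaultBaseUrl: "http://localhost:11434/v1",
  capabilities: ["chat", "embedding"],
});
```

The win of this design is that adding a provider means adding a descriptor entry rather than a new class.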
## 🔧 Improvements

### Provider Management
- ✅ Fixed model name truncation: version tags are now properly preserved
  - Examples: `qwen3:0.6b`, `gpt-4o-mini:2024-07-18`, `llama3.2:latest`
- ✅ Unified provider URL configuration naming across all providers
- ✅ Enhanced custom provider support with Base URL configuration
- ✅ Improved error handling and validation
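The truncation fix amounts to splitting a combined id only at its first separator. A minimal sketch, assuming a `provider:model` id format (which is an assumption; KnowNote's actual id scheme may differ):

```typescript
// Hypothetical sketch of the fix: splitting "ollama:qwen3:0.6b" naively
// on every ":" loses the version tag ("0.6b"). Splitting only at the
// FIRST colon keeps the tag attached to the model name.
function parseModelId(combined: string): { provider: string; model: string } {
  const sep = combined.indexOf(":");
  if (sep === -1) return { provider: "", model: combined };
  return {
    provider: combined.slice(0, sep),
    model: combined.slice(sep + 1), // preserves "qwen3:0.6b", "llama3.2:latest", ...
  };
}
```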
### UI/UX Enhancements
- 🎨 Custom scrollbar styling for dropdown menus
- 🔧 Improved provider settings panel layout
- 📋 Better model selection interface
## 🐛 Bug Fixes
- Fixed model ID parsing to support colon-separated version tags
- Improved provider configuration persistence
- Enhanced error messages for better debugging
## 📦 Technical Details

### New Architecture Components
```
src/main/providers/
├── capabilities/        # Modular capability interfaces
│   ├── BaseProvider.ts
│   ├── ChatCapability.ts
│   ├── EmbeddingCapability.ts
│   ├── RerankCapability.ts
│   └── ImageGenerationCapability.ts
├── handlers/            # Protocol-specific implementations
│   ├── OpenAIChatHandler.ts
│   └── OpenAIEmbeddingHandler.ts
├── registry/            # Provider registration system
│   ├── ProviderDescriptor.ts
│   ├── ProviderRegistry.ts
│   └── builtinProviders.ts
└── base/
    └── OpenAICompatibleProvider.ts
```
### Code Changes
- 30 files changed: +1,422 insertions, -723 deletions
- Removed individual provider classes (DeepSeekProvider, KimiProvider, etc.)
- Centralized logic in `OpenAICompatibleProvider`
- Created a reusable handler pattern for API communication
## 🎯 Supported Providers
| Provider | Chat | Embedding | Notes |
|---|---|---|---|
| Ollama | ✅ | ✅ | NEW - Local LLM runner |
| OpenAI | ✅ | ✅ | GPT-4, GPT-3.5, etc. |
| DeepSeek | ✅ | ✅ | DeepSeek-V3, DeepSeek-Chat |
| Qwen | ✅ | ✅ | Qwen-Max, Qwen-Plus |
| Kimi | ✅ | ❌ | Moonshot AI |
| SiliconFlow | ✅ | ✅ | Model aggregation platform |
## 📚 Getting Started with Ollama

### Installation
```shell
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# Windows: download the installer from https://ollama.com
```

### Usage
```shell
# Start the Ollama service
ollama serve

# Download models
ollama pull qwen2.5:7b        # Chat model
ollama pull nomic-embed-text  # Embedding model
```

### Configuration in KnowNote
- Open Settings → Providers
- Select Ollama
- API Key: enter any value (e.g., `ollama`)
- Base URL: keep the default `http://localhost:11434/v1`
- Click "Fetch Models" and select your downloaded models
- Enable the provider
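The resulting provider entry might persist as something like the following. This is a hypothetical shape shown only to illustrate the settings from the steps above; KnowNote's actual settings schema may differ:

```json
{
  "providers": {
    "ollama": {
      "enabled": true,
      "apiKey": "ollama",
      "baseUrl": "http://localhost:11434/v1",
      "models": {
        "chat": "qwen2.5:7b",
        "embedding": "nomic-embed-text"
      }
    }
  }
}
```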
## 🔄 Migration Notes

### For Existing Users
- No breaking changes for existing provider configurations
- Model selections will be automatically preserved
- The architecture changes are internal and backward-compatible
### For Developers

- Old provider classes (`OpenAIProvider`, `DeepSeekProvider`, etc.) have been removed
- All providers now use `OpenAICompatibleProvider` as the base
- Custom providers should implement capability interfaces
- See `src/main/providers/registry/builtinProviders.ts` for registration examples
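Implementing a capability interface for a custom provider might look like the sketch below. Only the `ChatCapability` name appears in this release's file tree; the interface shape and the `EchoProvider` class are assumptions for illustration:

```typescript
// Hypothetical shape of a chat capability interface.
interface ChatCapability {
  chat(model: string, prompt: string): Promise<string>;
}

// A stand-in provider that "replies" locally, useful for wiring tests.
// A real custom provider would instead delegate to an HTTP backend,
// typically via the OpenAI-compatible base described above.
class EchoProvider implements ChatCapability {
  async chat(model: string, prompt: string): Promise<string> {
    return `[${model}] ${prompt}`;
  }
}
```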
## 🙏 Acknowledgments
Thanks to the Ollama team for creating an excellent local LLM runtime!
## 📝 Full Changelog
Commits included in this release:
- `fdf9062` feat: add Ollama local LLM provider support (#7)
- `9f26183` style: add custom scrollbar styling to dropdown menus (#6)
- `9e48198` refactor: unify provider URL configuration naming
- `6edf9d3` refactor: restructure provider system with capability separation and config-driven architecture
Full Changelog: v1.0.4...v1.0.5