What's Changed
- fix: raise LLM request timeout to 300s to unblock slow completions (#434) — URLRequest's default 60s idle timeout was killing notes generation and other LLM calls when using cold local models (Ollama/MLX) or reasoning models with long first-token latency. Both streaming and non-streaming paths in OpenRouterClient now use a 300s timeout.
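The fix described above amounts to overriding the request-level timeout. A minimal Swift sketch, assuming a hypothetical `makeLLMRequest` helper and an illustrative local-model endpoint (the actual OpenRouterClient code is not shown here):

```swift
import Foundation

// URLRequest's timeoutInterval defaults to 60s, which is an idle timeout:
// it fires if no data arrives for that long. Cold local models (Ollama/MLX)
// and reasoning models can take longer than 60s before the first token,
// so the request-level timeout is raised to 300s.
// `makeLLMRequest` and the URL below are illustrative, not the real API.
func makeLLMRequest(url: URL) -> URLRequest {
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.timeoutInterval = 300  // was the 60s URLRequest default
    return request
}

let request = makeLLMRequest(
    url: URL(string: "http://localhost:11434/v1/chat/completions")!
)
```

The same 300s value applies to both streaming and non-streaming paths, since first-token latency (not total response time) is what was tripping the default.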
Contributors
Thanks to @BJonny for identifying and fixing the timeout issue.