v3.5.9
- Fixed the remaining Ollama localhost bypasses in selector flows.
- Deterministic speed probes now use the shared Ollama client instead of a hardcoded `http://localhost:11434` endpoint.
- AI evaluator chat requests now use the same resolved Ollama base URL as the rest of the CLI.
- Added selector-specific regression coverage for the Windows-style `localhost` failure with a successful `127.0.0.1` fallback.
- The separate Windows backend wording question (`Best backend: cpu` with `Runtime assist: Vulkan`) remains tracked in #71.
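For context, here is a minimal sketch of what a shared base-URL resolver with a `localhost` → `127.0.0.1` fallback can look like. The function names, the probe endpoint choice, and the `OLLAMA_HOST` handling are illustrative assumptions for this sketch, not llm-checker's actual implementation:

```ts
// Illustrative sketch only: resolve an Ollama base URL once, falling back from
// localhost to 127.0.0.1 when the former is unreachable (e.g. Windows hosts
// where "localhost" resolves to ::1 while the daemon listens on IPv4 only).
// Requires Node 18+ for the global fetch and AbortSignal.timeout.

async function isReachable(baseUrl: string, timeoutMs = 1500): Promise<boolean> {
  try {
    // /api/tags is a lightweight Ollama endpoint that lists local models;
    // a 2xx response is treated as "the daemon is answering on this URL".
    const res = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(timeoutMs),
    });
    return res.ok;
  } catch {
    return false;
  }
}

export async function resolveOllamaBaseUrl(): Promise<string> {
  // An explicitly configured host wins outright; no fallback probing.
  const explicit = process.env.OLLAMA_HOST;
  if (explicit) return explicit.replace(/\/$/, "");

  const candidates = ["http://localhost:11434", "http://127.0.0.1:11434"];
  for (const url of candidates) {
    if (await isReachable(url)) return url;
  }
  // Nothing answered; return the conventional default and let callers surface the error.
  return candidates[0];
}
```

The point of routing the speed probes and the evaluator chat requests through a single resolver like this is that the fallback behavior only has to be fixed (and regression-tested) in one place.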
npm: `llm-checker@3.5.9`