GitHub: Pavelevich/llm-checker v3.5.9


v3.5.9

  • Fixed the remaining Ollama localhost bypasses in selector flows.
  • Deterministic speed probes now use the shared Ollama client instead of a hardcoded http://localhost:11434 endpoint.
  • AI evaluator chat requests now use the same resolved Ollama base URL path as the rest of the CLI.
  • Added selector-specific regression coverage for the Windows-style case where localhost fails but the 127.0.0.1 fallback succeeds.
  • The separate Windows backend wording question ("Best backend: cpu" with "Runtime assist: Vulkan") remains tracked in #71.
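
The fallback behavior covered by the new regression tests can be sketched roughly as follows. This is an illustrative sketch only, not llm-checker's actual code: the function name, the injected `probe` callback, and the candidate list are assumptions; the `OLLAMA_HOST` environment variable and default port 11434 are standard Ollama conventions.

```javascript
// Hypothetical sketch of resolving a shared Ollama base URL with a
// localhost -> 127.0.0.1 fallback, as described in the release notes.
// `probe` is an injected async reachability check (e.g. a request to /api/tags).
async function resolveOllamaBaseUrl(probe, candidates = [
  process.env.OLLAMA_HOST || "http://localhost:11434",
  "http://127.0.0.1:11434",
]) {
  for (const base of candidates) {
    try {
      // First reachable endpoint wins; every consumer (speed probes,
      // AI evaluator chat, selectors) should then reuse this one base URL.
      if (await probe(base)) return base;
    } catch {
      // e.g. Windows resolving `localhost` to an unreachable ::1 -- try next
    }
  }
  throw new Error("No reachable Ollama endpoint among: " + candidates.join(", "));
}
```

Injecting the probe keeps the resolution logic testable without a live Ollama server, which is presumably what the selector-specific regression tests rely on.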

npm:

  • llm-checker@3.5.9
