Windows Ollama localhost fallback + Vulkan assist visibility
- retry Ollama availability across localhost loopback candidates and persist the working base URL
- filter fake Windows remote display adapters from fallback GPU inventory
- surface Vulkan runtime assist metadata for integrated Windows GPU paths
- improve hw-detect output for integrated/shared-memory acceleration paths
- add regression tests for loopback fallback and Windows GPU reporting
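The loopback fallback in the first bullet can be sketched as follows. This is a minimal illustration, not llm-checker's actual implementation: the candidate list, the `/api/version` probe endpoint (a real Ollama endpoint), the timeout value, and all function names are assumptions.

```javascript
// Hypothetical sketch: try each loopback form of the Ollama base URL
// and return the first one that answers, so callers can persist it.
const CANDIDATES = [
  'http://localhost:11434',
  'http://127.0.0.1:11434',
  'http://[::1]:11434',
];

async function resolveOllamaBaseUrl(probe, candidates = CANDIDATES) {
  for (const base of candidates) {
    try {
      if (await probe(base)) return base; // first responsive candidate wins
    } catch {
      // unreachable candidate: fall through to the next loopback form
    }
  }
  return null; // Ollama not reachable on any loopback variant
}

// Default probe: hit Ollama's version endpoint with a short timeout
// (assumed values; requires Node 18+ for global fetch).
async function httpProbe(base) {
  const res = await fetch(`${base}/api/version`, {
    signal: AbortSignal.timeout(1500),
  });
  return res.ok;
}
```

Injecting the probe keeps the candidate-iteration logic testable without a live Ollama server, which is how a regression test like the one in the last bullet could exercise it.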
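The adapter filtering in the second bullet could look roughly like this. The name patterns below are illustrative assumptions (common phantom devices Windows reports, such as RDP virtual adapters and the Basic Display/Render drivers), not llm-checker's exact exclusion list.

```javascript
// Hypothetical sketch: drop phantom Windows display adapters
// (remote-desktop / software-only devices) from a fallback GPU
// inventory so they are never treated as acceleration candidates.
const PHANTOM_ADAPTER_PATTERNS = [
  /remote display/i,          // e.g. "Microsoft Remote Display Adapter"
  /\brdp\b/i,                 // RDP virtual devices
  /basic (display|render)/i,  // Microsoft Basic Display/Render Driver
];

function filterPhantomAdapters(gpus) {
  // Keep only adapters whose model matches none of the phantom patterns.
  return gpus.filter(
    (gpu) => !PHANTOM_ADAPTER_PATTERNS.some((re) => re.test(gpu.model || ''))
  );
}
```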
Published npm package: llm-checker@3.5.8