github nesquena/hermes-webui v0.50.274
v0.50.274 — LM Studio onboarding fully fixed (#1499 #1500)



Reporters: @chwps and @AdoneyGalvan via #1420 → triaged into #1499 (3 sub-bugs) and #1500.

Three LM Studio onboarding bugs that piled on top of each other in practice, fixed together because fixing only one still left the UX broken. PR #1501.

What's fixed

1. Onboarding wizard probes <base_url>/models before persisting (#1499, first sub-bug)

Pre-fix, the wizard finished in 239ms with zero outbound HTTP, silently persisted unreachable URLs, and left users with empty model dropdowns. Now:

  • New POST /api/onboarding/probe endpoint validates the configured base URL
  • 5s timeout, 256 KB body cap, stdlib-only (urllib + socket)
  • 8 stable error codes (invalid_url, dns, connect_refused, timeout, http_4xx, http_5xx, parse, unreachable), each with a localized hint — e.g. the connect_refused message tells Docker users to try the host IP instead of localhost
  • Refuses HTTP redirects (SSRF defense-in-depth)
  • Probe-discovered models populate the wizard's model dropdown
  • Frontend wires the probe both debounced (400ms on baseUrl input) and blocking (Continue refuses to advance until the probe succeeds for requires_base_url=True providers)

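The probe's shape can be pictured as a small stdlib-only function. This is an illustrative sketch, not the actual endpoint implementation: `probe_models`, `_NoRedirect`, and the response-dict layout are assumed names; the timeout, body cap, error codes, and redirect refusal mirror the bullets above.

```python
import json
import socket
import urllib.error
import urllib.parse
import urllib.request

PROBE_TIMEOUT_S = 5
BODY_CAP_BYTES = 256 * 1024  # 256 KB body cap

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    """Refuse all redirects so the probe can't be bounced to an internal host."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # urllib then raises the original 3xx as HTTPError

def probe_models(base_url: str) -> dict:
    """Return {"ok": True, "models": [...]} or {"ok": False, "error": <code>}."""
    parsed = urllib.parse.urlparse(base_url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return {"ok": False, "error": "invalid_url"}
    url = base_url.rstrip("/") + "/models"
    opener = urllib.request.build_opener(_NoRedirect())
    try:
        with opener.open(url, timeout=PROBE_TIMEOUT_S) as resp:
            body = resp.read(BODY_CAP_BYTES)
    except urllib.error.HTTPError as e:
        if 300 <= e.code < 400:
            return {"ok": False, "error": "unreachable"}  # redirect refused
        return {"ok": False, "error": "http_4xx" if e.code < 500 else "http_5xx"}
    except urllib.error.URLError as e:
        reason = e.reason
        if isinstance(reason, socket.gaierror):
            return {"ok": False, "error": "dns"}
        if isinstance(reason, ConnectionRefusedError):
            return {"ok": False, "error": "connect_refused"}
        if isinstance(reason, (socket.timeout, TimeoutError)):
            return {"ok": False, "error": "timeout"}
        return {"ok": False, "error": "unreachable"}
    except (socket.timeout, TimeoutError):
        return {"ok": False, "error": "timeout"}
    try:
        data = json.loads(body)
        models = [m["id"] for m in data.get("data", [])]
    except (ValueError, KeyError, TypeError):
        return {"ok": False, "error": "parse"}
    return {"ok": True, "models": models}
```

The returned model list is what would feed the wizard's dropdown; stable string codes (rather than raw exceptions) are what make per-code localized hints possible.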
2. Keyless setup is a first-class state for self-hosted providers (#1499, third sub-bug)

Pre-fix, the wizard rejected an empty api_key for lmstudio / ollama / custom, forcing keyless users to type random gibberish into a password field. Now:

  • New key_optional: True flag on those three providers
  • Empty api_key accepted, no .env placeholder written, provider_ready=True based on base_url alone
  • Cloud providers (openrouter, anthropic, openai, gemini, deepseek, …) remain key-required
  • Frontend renders the field as "API key (optional)" with placeholder "Leave blank for keyless servers" and an italic muted help paragraph: "Most LM Studio / Ollama / vLLM installs run keyless — leave this blank if your server doesn't require authentication. Use the Test connection button to verify."
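The readiness logic reduces to a small predicate keyed on the new flag. A minimal sketch under assumed names (`ProviderSpec`, `provider_ready`, and the registry are illustrative; `key_optional` and `requires_base_url` are the flags named above):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderSpec:
    name: str
    requires_base_url: bool = False
    key_optional: bool = False

# Illustrative registry; provider names mirror those in the release notes.
PROVIDERS = {
    "lmstudio": ProviderSpec("lmstudio", requires_base_url=True, key_optional=True),
    "ollama":   ProviderSpec("ollama",   requires_base_url=True, key_optional=True),
    "custom":   ProviderSpec("custom",   requires_base_url=True, key_optional=True),
    "openai":   ProviderSpec("openai"),  # cloud provider: key still required
}

def provider_ready(spec: ProviderSpec, api_key: str, base_url: str) -> bool:
    """Keyless providers are ready on base_url alone; cloud needs a key."""
    if spec.key_optional:
        return bool(base_url)
    return bool(api_key)
```

The cloud-provider branch is the regression guard: flipping `key_optional` on for a cloud provider would be the only way to weaken their key requirement.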

3. WebUI env var aligned with the agent CLI's canonical LM_API_KEY (#1500)

Pre-fix, the WebUI wrote LMSTUDIO_API_KEY to .env, but the agent CLI runtime (hermes_cli/auth.py:182, api_key_env_vars=("LM_API_KEY",)) reads LM_API_KEY. Auth-enabled LM Studio users saw Settings report has_key=True while the agent runtime returned 401 on chat. Now:

  • Onboarding writes the canonical LM_API_KEY
  • Legacy LMSTUDIO_API_KEY preserved as a read-only fallback in _PROVIDER_ENV_VAR_ALIASES so existing users don't see Settings flip to "no key" on upgrade
  • The alias mechanism is general — future env-var renames get the same gentle-migration path
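The alias mechanism amounts to a canonical-first, alias-fallback lookup. In this sketch, `detect_key` and `_PROVIDER_ENV_VARS` are hypothetical names; `_PROVIDER_ENV_VAR_ALIASES` is the table named in the notes:

```python
import os
from typing import Optional

# Canonical env var per provider; writes target only these names.
_PROVIDER_ENV_VARS = {"lmstudio": "LM_API_KEY"}

# Read-only legacy fallbacks, so pre-rename .env files keep working.
_PROVIDER_ENV_VAR_ALIASES = {"lmstudio": ("LMSTUDIO_API_KEY",)}

def detect_key(provider: str, env: Optional[dict] = None) -> Optional[str]:
    """Read the canonical variable first, then any legacy aliases."""
    env = dict(os.environ) if env is None else env
    canonical = _PROVIDER_ENV_VARS[provider]
    if env.get(canonical):
        return env[canonical]
    for alias in _PROVIDER_ENV_VAR_ALIASES.get(provider, ()):
        if env.get(alias):
            return env[alias]
    return None
```

Because aliases are read-only, upgraded users keep has_key=True in Settings, while every new write converges on the canonical name; a future rename just adds one more tuple entry.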

⚠️ Migration note for existing LM Studio users on auth-enabled servers

If you're an existing user with LMSTUDIO_API_KEY set in ~/.hermes/.env AND your LM Studio server requires authentication:

  • Settings → Providers will continue to report has_key=True after upgrade (via the legacy alias). ✅
  • But chat will keep failing with 401 the same way it did before this PR — because the agent runtime has always read LM_API_KEY, never LMSTUDIO_API_KEY.

Fix: rename the variable in ~/.hermes/.env from:

LMSTUDIO_API_KEY=your-key-here

to:

LM_API_KEY=your-key-here

This is a one-time edit. Re-running onboarding would also do this for you (it now writes the canonical name).

If your LM Studio runs without authentication, you don't need to do anything — the agent's no-auth fallback (LMSTUDIO_NOAUTH_PLACEHOLDER) handles that case automatically.

Test posture

3879 → 3918 tests passing (+39 new):

  • 17 probe error-code tests (mutation-verified mock servers for each code)
  • 16 keyless-onboarding tests (schema flag + empty-key acceptance + cloud-provider regression-defense)
  • 5 env-var alignment tests (canonical/alias declaration, write-only-canonical, legacy-still-detected)
  • 1 redirect-refusal test (mutation-verified)
  • 5 #1420 tests updated for the canonical-name rename

Follow-ups

  • #1502 — Sunset path tracking for the LMSTUDIO_API_KEY legacy alias (target review ~Nov 2026)
  • #1503 — UX papercut: API-key input can lose focus mid-typing if probe completes during a typing pause (Opus pre-release advisor flagged; non-blocking; three concrete fix-shape options listed in the issue)

Independent review

nesquena APPROVED with 4 non-blocking observations:

  1. Redirect-disable on probe path (SSRF defense-in-depth) — fixed in-release as commit ba6f344, per the policy that reviewer-flagged fixes land in-release rather than as follow-ups
  2. Test count drift in PR description — cosmetic
  3. Legacy alias sunset path — filed as #1502
  4. Local-network gate code duplicated between two routes — deferred to a future helper extraction

Pre-release Opus advisor

Verdict: ship-ready, no MUST-FIX. All 5 risk areas in the brief checked out (cross-tool consistency, migration cliff, SSRF posture, key_optional for custom, i18n coverage). One additional UX observation flagged and deferred as #1503.

Full Changelog: v0.50.273...v0.50.274
