# v1.5.0-rc.22

## [1.5.0-rc.22] — 2026-05-15

### Added
- All 16 non-English locales now have full key parity with the English source (commits `5e463631`, `012dcb83`). Two complementary passes bring every translation up to date. The first pass (`5e463631`) filled gaps in the ten locales that were already mostly translated (de, es, fr, it, nl, pl, pt-BR, tr, zh-CN, zh-TW) — each was missing `notificationOutboxView.json` entirely and had drifted behind recent string extractions in `listViews.json`, `containerComponents.json`, `containersView.json`, and `dashboardView.json` (new keys: `digestLabel`, `blockedTag` variants, `manualUpdateOnly` variants, `narrowViewportSuffix`, `autoHiddenBadgeTooltip`, queued-update toast variants, `recentUpdates.widgetAria`). A JSON-breaking typo in `de/dashboardView.json` (straight quote instead of closing curly quote) was also corrected. The second pass (`012dcb83`) gave the six stub locales (ar, ja, ko, ru, uk, vi) — which had been scaffolded with English placeholders since rc.20 — a full translation pass across all 13 namespace files plus the new `notificationOutboxView.json`. Brand names, acronyms, and interpolation placeholders are preserved verbatim; DevOps terminology follows each language's established conventions.
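The key-parity invariant described above can be sketched as a recursive key-set comparison. This is illustrative only; `flattenKeys` and `missingKeys` are hypothetical names, not the project's actual tooling:

```typescript
// Hypothetical sketch: every nested key path present in an English namespace
// file must also exist in each locale's copy of that file.
type Json = string | number | boolean | null | Json[] | { [k: string]: Json };

// Flatten {"recentUpdates": {"widgetAria": "x"}} into ["recentUpdates.widgetAria"]
// so key sets can be compared directly.
function flattenKeys(node: Json, prefix = ""): string[] {
  if (node === null || typeof node !== "object" || Array.isArray(node)) {
    return prefix ? [prefix] : [];
  }
  return Object.entries(node).flatMap(([k, v]) =>
    flattenKeys(v, prefix ? `${prefix}.${k}` : k),
  );
}

// Keys present in the English source but missing from a translation.
function missingKeys(english: Json, translated: Json): string[] {
  const have = new Set(flattenKeys(translated));
  return flattenKeys(english).filter((k) => !have.has(k));
}
```

A locale with full parity yields an empty `missingKeys` result for every namespace file.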
### Changed
- Playwright E2E tests moved to a dedicated workflow file (`e2e-playwright.yml`) (commit `f0989301`). OSSF Scorecard's CI-Tests check scores from the github-actions Check Suite conclusion, not individual check-run conclusions. Because every job in `ci-verify.yml` rolled into a single suite, one failing Playwright assertion would flip the entire suite to failure and cause Scorecard to mark merged PRs as untested — even when all other jobs were green (manifesting as code-scanning alert #43, CI-Tests score 9/10). Each workflow file gets its own Check Suite per commit; isolating Playwright into `e2e-playwright.yml` means a Playwright failure no longer drags the `ci-verify` suite down for Scorecard's purposes. Branch protection continues to gate on the "🎭 E2E: Playwright" status check (matched by job name, not workflow file), and `release-cut.yml` now polls both workflows on the target SHA so releases still require Playwright success.
### Fixed
- #368 — OIDC custom-dispatcher paths (`cafile` / `DD_AUTH_OIDC_*_INSECURE=true`) no longer fail with an opaque `TypeError: fetch failed` on Node 24. Node 24 ships built-in undici 7.21.0 (v1 dispatcher interface) while the app's userland `undici@8` (bumped in rc.20) exposes an `Agent` with the v2 dispatcher interface. The OIDC custom fetch was constructing the v2 `Agent` from userland undici and passing it as `dispatcher` to Node's global `fetch`, which is bound to the built-in undici 7. The v2 `Agent`'s handlers don't satisfy the v1 contract, so the request silently fails — the surface symptom reported by a user upgrading rc.19 → rc.21 against self-signed Authentik with `DD_AUTH_OIDC_AUTHENTIK_INSECURE=true`. The undici project's `Dispatcher1Wrapper` bridge (nodejs/undici#4827) covers this mismatch on Node 22 but is absent on Node 24. The fix imports `fetch` from `undici` and uses it whenever a custom dispatcher is required (cafile or insecure path) so both halves share the same dispatcher version. The non-insecure code path is unchanged — openid-client continues to use its default fetch when no custom dispatcher is needed. A strict-tsc type error introduced in the same fix (undici's nominal `RequestInfo`/`Response` types differ from the `lib.dom` types that `openid-client`'s `CustomFetch` is typed against) was resolved by casting through `unknown` at the boundary using `Parameters<typeof undiciFetch>` and `ReturnType<openidClientLibrary.CustomFetch>`; there is no runtime behavior change.
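The pairing rule behind the fix is that the fetch implementation and the dispatcher it receives must come from the same undici version. A minimal sketch of that shape, with the fetch implementation injected so it is testable standalone (in the real fix both halves would come from the userland package, e.g. `import { Agent, fetch as undiciFetch } from "undici"`; `FetchLike` and `bindDispatcher` are illustrative names, not the project's API):

```typescript
// Simplified stand-in for a fetch signature; real undici fetch is richer.
type FetchLike = (
  input: string | URL,
  init?: Record<string, unknown>,
) => Promise<unknown>;

function bindDispatcher(fetchImpl: FetchLike, dispatcher: object): FetchLike {
  // Every request goes through the matching-version fetch with the custom
  // dispatcher attached; the global (built-in undici) fetch is never involved.
  return (input, init) => fetchImpl(input, { ...init, dispatcher });
}
```

The mismatch bug is exactly the case where `fetchImpl` is the global fetch (built-in undici 7) while `dispatcher` is a v2 `Agent` from `undici@8`.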
- OIDC warn logs now surface the full `error.cause` chain, making TLS and DNS failures actionable (commit `720d99a3`). undici's fetch surfaces failures as a generic `TypeError: fetch failed`; the actionable diagnostic (`ENOTFOUND`, `ECONNREFUSED`, `UNABLE_TO_VERIFY_LEAF_SIGNATURE`, etc.) lives on `error.cause`, sometimes nested. The previous error sanitizer logged only the top-level message, so issue #368 reached us with only `"Unable to initialize OIDC session (fetch failed)"` — no indication whether DNS, TLS, or routing was at fault. A new `getErrorChainMessage` helper walks `error.cause` up to depth 5, joining parts with `←` and appending `[code]` when a `code` property is present; a `WeakSet` guards against cyclic cause chains. `sanitizeOidcErrorMessage` now uses it so all OIDC warn logs include the cause chain (still passed through the existing URL and token redaction). This is a forward-only diagnostic improvement with no runtime behavior change for healthy OIDC paths.
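A helper with the behavior described above could look like the following sketch (consistent with the description; the project's actual implementation may differ in detail):

```typescript
// Walk error.cause up to maxDepth, joining messages with "←" and appending
// "[code]" when a code property is present; a WeakSet guards cyclic chains.
function getErrorChainMessage(err: unknown, maxDepth = 5): string {
  const seen = new WeakSet<object>();
  const parts: string[] = [];
  let current: unknown = err;
  for (let depth = 0; depth < maxDepth && current instanceof Error; depth++) {
    if (seen.has(current)) break; // cyclic cause chain: stop walking
    seen.add(current);
    const code = (current as { code?: unknown }).code;
    parts.push(
      typeof code === "string" ? `${current.message} [${code}]` : current.message,
    );
    current = (current as { cause?: unknown }).cause;
  }
  return parts.join(" ← ");
}
```

For a DNS failure this turns the bare `fetch failed` into something like `fetch failed ← getaddrinfo ENOTFOUND idp.internal [ENOTFOUND]`.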
- #362 — SSE reconnect exponential backoff no longer collapses to a flat 1 s loop when the agent is struggling. `AgentClient.startSse()` previously called `this.reconnectAttempts = 0` the instant the axios response headers arrived — before the stream had proven it could stay open. A crash-looping agent, a reverse proxy with a short upstream idle timeout, or any situation where the SSE stream returned HTTP 200 and then ended almost immediately would cycle as: connect → 200 → `reconnectAttempts = 0` → stream ends → `scheduleReconnect()` (delay = 1 000 ms, attempts → 1) → 1 s later connect → 200 → `reconnectAttempts = 0` again — and so on forever. The user who filed #362 saw `SSE stream ended. Reconnecting...` in their controller logs every ~1.00 s indefinitely, with no escalation. The backoff now only resets after the stream has stayed open for `SSE_STABLE_CONNECTION_MS` (30 s). A `setTimeout` is armed when the response arrives and cancelled by `scheduleReconnect()` if the stream ends or errors before the window expires; streams that end early therefore keep their accumulated `reconnectAttempts` and the delay continues to double up to the 60 s cap as intended.
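The reset-after-stability mechanic can be sketched as follows. `SSE_STABLE_CONNECTION_MS` and `reconnectAttempts` come from the changelog entry; `ReconnectBackoff`, `nextDelay`, and the base/cap constants are illustrative stand-ins for how the real `AgentClient` wires this into its stream handlers:

```typescript
const SSE_STABLE_CONNECTION_MS = 30_000; // stream must survive this long to reset
const SSE_BASE_BACKOFF_MS = 1_000;       // assumed from the 1 s starting delay
const SSE_MAX_BACKOFF_MS = 60_000;       // assumed from the 60 s doubling cap

class ReconnectBackoff {
  reconnectAttempts = 0;
  private stableTimer?: ReturnType<typeof setTimeout>;

  // Called when response headers arrive: arm the stability window instead of
  // resetting the attempt counter immediately.
  onStreamOpen(): void {
    this.stableTimer = setTimeout(() => {
      this.reconnectAttempts = 0; // stream proved it can stay open
    }, SSE_STABLE_CONNECTION_MS);
  }

  // Called when the stream ends or errors: cancel the pending reset, then
  // return the next delay, doubling per attempt up to the cap.
  nextDelay(): number {
    if (this.stableTimer !== undefined) {
      clearTimeout(this.stableTimer);
      this.stableTimer = undefined;
    }
    const delay = Math.min(
      SSE_BASE_BACKOFF_MS * 2 ** this.reconnectAttempts,
      SSE_MAX_BACKOFF_MS,
    );
    this.reconnectAttempts += 1;
    return delay;
  }
}
```

A stream that dies inside the 30 s window now leaves `reconnectAttempts` intact, so successive failures yield 1 s, 2 s, 4 s, … up to 60 s instead of the flat 1 s loop.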