github cloudflare/workers-sdk miniflare@4.20260420.0

Minor Changes

  • #13326 4a9ba90 Thanks @mattzcarey! - Add Artifacts binding support to wrangler

    You can now configure Artifacts bindings in your wrangler configuration:

    // wrangler.jsonc
    {
      "artifacts": [{ "binding": "MY_ARTIFACTS", "namespace": "default" }]
    }

    Type generation produces the correct Artifacts type reference from the workerd type definitions:

    interface Env {
      MY_ARTIFACTS: Artifacts;
    }
  • #12600 50bf819 Thanks @penalosa! - Use workerd's debug port to power cross-process service bindings, Durable Objects, and tail workers via the dev registry. This enables Durable Object RPC via the dev registry, and is an overall stability improvement.

Patch Changes

  • #13515 b35617b Thanks @petebacondarwin! - fix: close all open handles on dispose to prevent process hangs

    Several resources were not being properly cleaned up during Miniflare.dispose(), which could leave the Node.js event loop alive and cause processes (particularly tests using node --test) to hang instead of exiting cleanly:

    • The internal undici Pool used to dispatch fetch requests to the workerd runtime was not closed. Lingering TCP sockets from this pool could keep the event loop alive indefinitely.
    • WebSocketServer instances for live reload and WebSocket proxying were never closed, leaving connected clients' sockets open.
    • The InspectorProxy was not closing its runtime WebSocket connection, relying on process death to break the connection.
    • HyperdriveProxyController.dispose() had a missing return in a .map() callback, causing Promise.allSettled to resolve immediately without waiting for net.Server instances to close.
    • ProxyClientBridge was not clearing its finalization batch setTimeout during disposal.
    • InspectorProxyController.dispose() was not calling server.closeAllConnections() before server.close(), so active HTTP keep-alive or WebSocket connections could prevent the close callback from firing.
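The missing-return bug in HyperdriveProxyController.dispose() is easy to reproduce in miniature. Below is an illustrative sketch (function name and shape are assumptions, not miniflare's actual code) of the fixed pattern: each .map() callback must return the close Promise, otherwise Promise.allSettled receives an array of undefined and resolves before any net.Server has actually closed.

```typescript
import net from "node:net";

// Close every server and wait for each close callback to fire.
// The arrow body is an expression, so the Promise IS returned;
// wrapping it in `{ ... }` without `return` was the original bug.
export async function closeAll(servers: net.Server[]): Promise<void> {
  await Promise.allSettled(
    servers.map(
      (server) =>
        new Promise<void>((resolve, reject) =>
          server.close((err) => (err ? reject(err) : resolve()))
        )
    )
  );
}
```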
  • #13557 8ca78bb Thanks @dependabot! - Update dependencies of "miniflare", "wrangler"

    The following dependency versions have been updated:

    workerd: 1.20260415.1 → 1.20260416.2
  • #13579 b6e1351 Thanks @dependabot! - Update dependencies of "miniflare", "wrangler"

    The following dependency versions have been updated:

    workerd: 1.20260416.2 → 1.20260417.1
  • #13604 d8314c6 Thanks @petebacondarwin! - Update dependencies of "miniflare", "wrangler"

    The following dependency versions have been updated:

    workerd: 1.20260417.1 → 1.20260420.1
  • #12913 7f50300 Thanks @Sigmabrogz! - fix(miniflare): use 127.0.0.1 for internal loopback when localhost is configured

    When localhost is configured as the host, Node.js may bind to [::1] (IPv6) while workerd resolves localhost to 127.0.0.1 (IPv4) first. This mismatch causes "connection refused" errors and 100% CPU spins.

    This fix ensures the internal loopback communication between Node.js and workerd always uses 127.0.0.1 when localhost is configured, while preserving the user-facing URL as localhost.
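A minimal sketch of the host-selection rule described above (the function name and return shape are assumptions, not miniflare's actual API): internal traffic dials 127.0.0.1 explicitly, so a Node.js server bound to [::1] and a workerd client resolving localhost to 127.0.0.1 can never disagree, while the URL shown to the user still says localhost.

```typescript
// Pick the address used for internal loopback traffic separately from
// the address shown to the user in logs and printed URLs.
export function loopbackAddresses(host: string, port: number) {
  const dialHost = host === "localhost" ? "127.0.0.1" : host;
  return {
    userFacingUrl: `http://${host}:${port}`, // still reads "localhost"
    dialUrl: `http://${dialHost}:${port}`,   // always IPv4, never [::1]
  };
}
```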

  • #13470 4fda685 Thanks @penalosa! - fix: prevent remote binding sessions from expiring during long-running dev sessions

    Preview tokens for remote bindings expire after one hour. Previously, the first request after expiry would fail before a refresh was triggered. This change proactively refreshes the token at 50 minutes so no request ever sees an expired session.

    The reactive recovery path is also improved: "error code: 1031" responses (returned by bindings such as Workers AI when their session times out) now correctly trigger a refresh, where previously only "Invalid Workers Preview configuration" HTML responses did.

    Auth credentials are now resolved lazily when a remote proxy session starts rather than at bundle-complete time. This means that if your OAuth access token has been refreshed since wrangler dev started, the new token is used rather than the one captured at startup.
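The proactive half of this fix can be sketched as a simple timer (the helper name is hypothetical; only the one-hour expiry and the 50-minute refresh point come from the changelog). Refreshing before expiry means no request ever observes a dead session; the reactive path remains as a fallback if the proactive refresh itself fails.

```typescript
// Schedule a token refresh ahead of its expiry. Default matches the
// changelog: refresh at 50 minutes for a 60-minute token lifetime.
export function scheduleProactiveRefresh(
  refresh: () => Promise<void>,
  afterMs: number = 50 * 60 * 1000
): NodeJS.Timeout {
  const timer = setTimeout(() => {
    void refresh().catch(() => {
      // Swallow errors here: the reactive recovery path (e.g. on an
      // "error code: 1031" response) still triggers a refresh later.
    });
  }, afterMs);
  timer.unref(); // don't keep the dev process alive just for this timer
  return timer;
}
```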

  • #13586 be5e6a0 Thanks @petebacondarwin! - Fix resource leaks during config updates

    Two follow-up fixes to the dispose cleanup in #13515:

    • Only close and recreate the dev-registry dispatcher when its port actually changes, matching the existing runtimeDispatcher behavior. Previously, every config update unconditionally tore down and rebuilt the connection pool, which could cause brief request failures if a registry push was in-flight.
    • Dispose old InspectorProxy instances before replacing them during updateConnection(). Previously, stale proxies were silently discarded, leaking their runtime WebSocket connections and 10-second keepalive interval timers.
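The "only recreate on port change" behavior can be sketched as a small holder class (names are illustrative, not miniflare's actual code): an update with the same port reuses the live resource, while a port change first disposes the old one rather than silently discarding it.

```typescript
// Holds one closeable resource keyed by the port it was created for.
export class DispatcherSlot<T extends { close(): Promise<void> }> {
  #port?: number;
  #current?: T;

  // Reuse the existing resource when the port is unchanged; otherwise
  // close the stale one before creating a replacement (no leaks).
  async update(port: number, create: (port: number) => T): Promise<T> {
    if (this.#current !== undefined && port === this.#port) {
      return this.#current;
    }
    await this.#current?.close();
    this.#port = port;
    this.#current = create(port);
    return this.#current;
  }

  async dispose(): Promise<void> {
    await this.#current?.close();
    this.#current = undefined;
  }
}
```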
  • #13577 e456952 Thanks @connyay! - Return EmailSendResult from the send_email binding's send() in local mode

    The binding's send() used to resolve to undefined. It now returns { messageId: string }, the same shape as the public SendEmail type in production. Workers that read the return value (for logging, or to pass the id downstream) no longer get undefined under miniflare.

    Both branches synthesize an id in the shape production returns: <{36 alphanumeric chars}@{sender domain}>, angle brackets included. The EmailMessage path uses the envelope from; the MessageBuilder path uses the builder's from. Production synthesizes its own id rather than echoing anything submitted, so miniflare does the same.
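The id shape can be sketched as follows. Only the shape <{36 alphanumeric}@{sender domain}> comes from the changelog; the helper name and the generation method are assumptions for illustration.

```typescript
import { randomBytes } from "node:crypto";

// Synthesize a message id in the production shape:
// 36 alphanumeric characters, "@", the sender's domain, angle brackets.
export function synthesizeMessageId(from: string): string {
  const domain = from.slice(from.lastIndexOf("@") + 1);
  let id = "";
  while (id.length < 36) {
    // base64 then strip its non-alphanumeric characters (+ / =)
    id += randomBytes(32).toString("base64").replace(/[^A-Za-z0-9]/g, "");
  }
  return `<${id.slice(0, 36)}@${domain}>`;
}
```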

  • #13516 4eb1da9 Thanks @jonnyparris! - Rename "Browser Rendering" to "Browser Run" in all user-facing strings, error messages, and CLI output.

  • #13557 8ca78bb Thanks @dependabot! - Rename Flags type to Flagship to match the upstream rename in @cloudflare/workers-types

    The Flags type was renamed to Flagship in @cloudflare/workers-types. This updates the import and the return type of getFlagshipBinding accordingly.

  • #11849 266c418 Thanks @43081j! - Removed unused devDependencies from miniflare package.
