github nats-io/nats-server v2.11.12
Release v2.11.12


Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version

Dependencies

  • github.com/nats-io/nkeys v0.4.12 (#7578)
  • github.com/antithesishq/antithesis-sdk-go v0.5.0-default-no-op (#7604)
  • github.com/klauspost/compress v1.18.3 (#7736)
  • golang.org/x/crypto v0.47.0 (#7736)
  • golang.org/x/sys v0.40.0 (#7736)
  • github.com/google/go-tpm v0.9.8 (#7696)
  • github.com/nats-io/nats.go v1.48.0 (#7696)

Added

General

  • Added WebSocket-specific ping interval configuration with ping_interval in the websocket block (#7614)
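
A minimal sketch of where the new option lives, assuming it follows the same duration syntax as the server-wide ping_interval; the port and interval values here are illustrative, not taken from the release:

```
websocket {
  port: 8080
  # WebSocket-specific ping interval, overriding the
  # server-wide ping_interval for WebSocket connections
  ping_interval: "30s"
}
```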

Monitoring

  • Added tls_cert_not_after to the varz monitoring endpoint for showing when TLS certificates are due to expire (#7709)
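
Illustratively, the new field would appear in the varz monitoring response roughly as sketched below; the surrounding fields and the timestamp format are assumptions for illustration, only the tls_cert_not_after field name comes from the changelog:

```
{
  "server_name": "my-server",
  "tls_required": true,
  "tls_cert_not_after": "2026-03-01T00:00:00Z"
}
```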

Improved

JetStream

  • The scan for the last sourced message sequence when setting up a subject-filtered source is now considerably faster (#7553)
  • Consumer interest checks on interest-based streams are now significantly faster when there are large gaps in interest (#7656)
  • Creating consumer file stores no longer contends on the stream lock, improving consumer create performance on heavily loaded streams (#7700)
  • Recalculating num pending with updated filter subjects no longer gathers and sorts the subject filter list twice (#7772)
  • Switching to interest-based retention will now remove no-interest messages from the head of the stream (#7766)

MQTT

  • Retained messages now work correctly even when sourced from a different account with a subject transform applied (#7636)

Fixed

General

  • WebSocket connections will now correctly limit the buffer size during decompression (#7625, thanks to Pavel Kokout at Aisle Research)
  • The config parser now correctly detects and errors on self-referencing environment variables (#7737)
  • Internal functions for handling headers no longer corrupt message bodies when headers are appended (#7752)
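
As an illustration of the self-reference case the config parser now rejects, a variable defined in terms of itself previously resolved unpredictably; the variable name below is hypothetical:

```
# TOKEN references itself: the parser now detects this
# and errors out instead of resolving it unexpectedly
TOKEN: $TOKEN

authorization {
  token: $TOKEN
}
```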

JetStream

  • A protocol error caused by an invalid transform of acknowledgement reply subjects when originating from a gateway connection has been fixed (#7579)
  • The meta layer will now only respond to peer remove requests after quorum has been reached (#7581)
  • Invalid subject filters containing a non-terminating full wildcard no longer produce unexpected matches (#7585)
  • A data race when creating a stream in clustered mode has been fixed (#7586)
  • A panic when processing snapshots with missing nodes or assignments has been fixed (#7588)
  • When purging whole message blocks, the subject tracking and scheduled messages are now updated correctly (#7593)
  • The filestore will no longer unexpectedly lose writes when AsyncFlush is enabled after a process pause (#7594)
  • The filestore now processes message removal on disk before updating accounting, improving error handling (#7595, #7601)
  • Raft will no longer allow peer-removing the one remaining peer (#7610)
  • A data race has been fixed in the stream health check (#7619)
  • Tombstones are now correctly written for recovering the sequences after compacting or purging an almost-empty stream to seq 2 (#7627)
  • Combining skip sequences and compactions will no longer overwrite the block at the wrong offset, correcting a corrupt record state error (#7627)
  • Compactions that reclaim over half of the available space now use an atomic write to avoid losing messages if killed (#7627)
  • Filestore compaction should no longer result in no idx present cache errors (#7634)
  • Filestore compaction now correctly adjusts the high and low sequences for a message block, as well as cleaning up the deletion map accordingly (#7634)
  • Potential stream desyncs that could happen during stream snapshotting have been fixed (#7655)
  • Raft will no longer allow multiple membership changes to take place concurrently (#7565, #7609)
  • Raft will no longer count responses from peer-removed nodes towards quorum (#7589)
  • Raft quorum counting has been refactored so the implicit leader ack is now only counted if the leader is still part of the membership (#7600)
  • Raft now writes the peer state immediately when handling a peer-remove to ensure the removed peers cannot unexpectedly reappear after a restart (#7602)
  • Add peer operations to Raft can no longer result in disjoint majorities (#7632)
  • Raft groups should no longer readmit a previously removed peer if a heartbeat occurs between the peer removal and the leadership transfer (#7649)
  • Raft single node elections now transition into leader state correctly (#7642)
  • R1 streams will no longer incorrectly drift last sequence when exceeding limits (#7658)
  • Deleted streams are no longer wrongfully revived if stalled on an upper-layer catchup (#7668)
  • A panic that could happen when receiving a shutdown signal while JetStream is still starting up has been fixed (#7683)
  • JetStream usage stats now correctly reflect purged whole blocks when optimising large purges (#7685)
  • Recovering JetStream encryption keys now happens independently of the stream index recovery, fixing some cases where the key could be reset unexpectedly if the index is rebuilt (#7678)
  • Non-replicated file-based consumers now detect corrupted state on disk and are deleted automatically (#7691)
  • Raft no longer allows a repeat vote for the same term after a stepdown or leadership transfer (#7698)
  • Replicated consumers are no longer incorrectly deleted if they become leader just as JetStream is about to shut down (#7699)
  • Fixed an issue where a single truncated block could prevent storing new messages in the filestore (#7704)
  • Fixed a concurrent map iteration/write panic that could occur on WorkQueue streams during partitioning (#7708)
  • Fixed a deadlock that could occur on shutdown when adding streams (#7710)
  • A data race on mirror consumers has been fixed (#7716)
  • JetStream no longer leaks subscriptions in a cluster when a stream import/export is set up that overlaps the $JS.> namespace (#7720)
  • The filestore will no longer waste CPU time rebuilding subject state for WALs (#7721)
  • Configuring cluster_traffic in config mode has been fixed (#7723)
  • Subject intersection no longer misses certain subjects with specific patterns of overlapping filters, which could affect consumers, num pending calculations, etc. (#7728, #7741, #7744, #7745)
  • Multi-filtered next message lookups in the filestore can now skip blocks when faster to do so (#7750)
  • The binary search for start times now handles deleted messages correctly (#7751)
  • Consumer updates will now only recalculate num pending when the filter subjects are changed (#7753)
  • Consumers on replicated interest or workqueue streams should no longer lose interest or cause desyncs after having their filter subjects updated (#7773)
  • Interest-based streams will no longer start more check interest state goroutines when there are existing running ones (#7769)

MQTT

  • The maximum payload size is now correctly enforced for MQTT clients (#7555, thanks to @yixianOu)
  • Fixed a panic that could occur when reloading config if the user did not have permission to access retained messages (#7596)
  • Fixed account mapping for JetStream API requests when traversing non-JetStream-enabled servers (#7598)
  • QoS0 messages are now mapped correctly across account imports/exports with subject mappings (#7605)
  • Loading retained messages no longer fails after restarting due to last sequence checks (#7616)
  • A bug which could corrupt retained messages in clustered deployments has been fixed (#7622)
  • Permissions for $MQTT. subscriptions are now granted implicitly, with the exception of deny ACLs, which can still restrict them (#7637)
  • A bug where QoS2 messages could not be retrieved after a server restart has been fixed (#7643)
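
The deny-ACL exception noted above can be sketched as follows; the user credentials and the exact internal subject pattern are illustrative assumptions:

```
authorization {
  users [
    {
      user: mqtt_user, password: s3cret
      permissions {
        # $MQTT. subscriptions are otherwise allowed implicitly,
        # but an explicit deny ACL still restricts them
        subscribe {
          deny: "$MQTT.sub.>"
        }
      }
    }
  ]
}
```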

Complete Changes

v2.11.11...v2.11.12
