nats-io/nats-server
Release v2.10.26

Changelog

Refer to the 2.10 Upgrade Guide for backwards compatibility notes with 2.9.x.

Go Version

Dependencies

  • github.com/nats-io/nats.go v1.39.1 (#6574)
  • golang.org/x/crypto v0.34.0 (#6574)
  • golang.org/x/sys v0.30.0 (#6487)
  • golang.org/x/time v0.10.0 (#6487)
  • github.com/nats-io/nkeys v0.4.10 (#6494)
  • github.com/klauspost/compress v1.18.0 (#6565)

Added

General

  • New server option no_fast_producer_stall allows disabling the stall gates, preferring instead to drop messages to slow consumers that would otherwise have caused a stall (#6500)
  • New server option first_info_timeout to control how long a leafnode connection should wait for the initial connection info, useful for high-latency links (#5424); a configuration sketch covering both new options follows this list
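The sketch below shows one way the two new options might be exercised from Go by parsing a configuration file with the embedded server API. The option names come from this release; the chosen values, the per-remote placement of first_info_timeout, and the use of github.com/nats-io/nats-server/v2/server are assumptions for illustration, not something these notes specify.

```go
// Illustrative sketch only: option values, the per-remote placement of
// first_info_timeout, and the embedded-server usage are assumptions.
package main

import (
	"log"
	"os"
	"time"

	"github.com/nats-io/nats-server/v2/server"
)

func main() {
	conf := `
# Drop messages to slow consumers rather than stalling fast producers.
no_fast_producer_stall: true

leafnodes {
    remotes [
        # Wait longer for the initial INFO on a high-latency link.
        # (Placing this per remote is an assumption.)
        { url: "nats-leaf://hub.example.com:7422", first_info_timeout: "10s" }
    ]
}
`
	f, err := os.CreateTemp("", "nats-*.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(conf); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Parse the file into server options and start an embedded server.
	opts, err := server.ProcessConfigFile(f.Name())
	if err != nil {
		log.Fatalf("config did not parse: %v", err)
	}
	ns, err := server.NewServer(opts)
	if err != nil {
		log.Fatal(err)
	}
	go ns.Start()
	if !ns.ReadyForConnections(10 * time.Second) {
		log.Fatal("server not ready for connections")
	}
	defer ns.Shutdown()
	log.Println("server running with no_fast_producer_stall enabled")
}
```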

Monitoring

  • The gatewayz monitoring endpoint can now return subscription information (#6525); see the example query below
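A quick way to poke at this from Go is sketched below. It assumes the monitoring port is enabled on localhost:8222 and that gatewayz accepts a subs query parameter analogous to routez; the parameter name is an assumption, not something stated in these notes.

```go
// Illustrative only: assumes monitoring on localhost:8222 and a routez-style
// "subs" query parameter for gatewayz (parameter name is an assumption).
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:8222/gatewayz?subs=1")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	// Print the raw JSON; with subscription reporting enabled the per-gateway
	// entries should include subscription information.
	fmt.Println(string(body))
}
```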

Improved

General

  • The configured write deadline is now applied to only the current batch of write vectors (with a maximum of 64MB), making it easier to configure and reason about (#6471)
  • Publishing through a service import to an account with no interest will now generate a "no responders" error instead of silently dropping the message (#6532); see the client sketch after this list
  • Adjust the stall gate for producers to be less penalizing (#6568, #6579)
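A minimal client-side sketch of how the service-import change surfaces to a requester is below. The subject stands in for a hypothetical service import with no active responder; the error check uses the standard nats.go sentinel nats.ErrNoResponders.

```go
// Illustrative sketch: "billing.lookup" is a hypothetical subject imported
// from another account as a service import with no active responder.
package main

import (
	"errors"
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	_, err = nc.Request("billing.lookup", []byte("acct-42"), 2*time.Second)
	switch {
	case errors.Is(err, nats.ErrNoResponders):
		// Previously the publish could be dropped silently and the request
		// would only time out; the server now reports no responders.
		fmt.Println("no responders for the imported service")
	case err != nil:
		log.Fatal(err)
	default:
		fmt.Println("got a reply")
	}
}
```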

JetStream

  • Consumer signaling from streams has been optimized, taking consumer filters into account, significantly reducing CPU usage and overheads when there are a large number of consumers with sparse or non-overlapping interest (#6499)
  • Calculating num pending with multiple filters, enforcing per-subject limits, and loading per-subject info now use a faster subject tree lookup with fewer allocations (#6458)
  • Optimized calculation of num pending and related state by handling literal subjects with a faster path (#6446)
  • Optimizations for loading the next message with multiple filters by avoiding linear scans in message blocks in some cases, particularly where there are lots of deletes or a small number of subjects (#6448)
  • Avoid unnecessary system time calls when ranging a large number of interior deletes, reducing CPU time (#6450)
  • Removed unnecessary locking around finding out if Raft groups are leaderless, reducing contention (#6438)
  • Improved the error message when trying to change the consumer type (#6408)
  • Improved the error messages returned by healthz to be more descriptive about why the healthcheck failed (#6416)
  • The limit on the number of disk I/O operations that JetStream can perform concurrently has been raised (#6449)
  • Reduced the number of allocations needed for handling client info headers around the JetStream API and service imports/exports (#6453)
  • Calculating the starting sequence for a source consumer has been optimized for streams where there are many interior deletes (#6461)
  • Messages used for cluster replication are now correctly accounted for in the statistics of the origin account (#6474)
  • Reduce the amount of time taken for cluster nodes to start campaigning in some cases (#6511)
  • Reduce memory allocations when writing new messages to the filestore write-through cache (#6576)

Monitoring

  • The routez endpoint now reports pending_bytes (#6476); see the example below
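The new field can be read straight off the endpoint. The sketch below assumes monitoring is enabled on localhost:8222 and decodes the JSON generically, relying only on the pending_bytes field name from this release; the remote_id key is taken from the existing routez output.

```go
// Illustrative only: assumes the monitoring endpoint is on localhost:8222.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:8222/routez")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Decode only what we need; each route entry should now carry pending_bytes.
	var routez struct {
		Routes []map[string]any `json:"routes"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&routez); err != nil {
		log.Fatal(err)
	}
	for _, r := range routez.Routes {
		fmt.Printf("route %v pending_bytes=%v\n", r["remote_id"], r["pending_bytes"])
	}
}
```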

Fixed

General

  • The max_closed_clients option is now parsed correctly from the server configuration file (#6497)

JetStream

  • A bug in the subject state tracking that could result in consumers skipping messages on interest or WQ streams has been fixed (#6526)
  • A data race between the stream config and looking up streams has been fixed (#6424) Thanks to @evankanderson!
  • Fixed an issue where Raft proposals were incorrectly dropped after a peer remove operation, which could result in a stream desync (#6456)
  • Stream disk reservations will no longer be counted multiple times after stream reset errors have occurred (#6457)
  • Fixed an issue where a stream could desync if the server exited during a catchup (#6459)
  • Fixed a deadlock that could occur when cleaning up large numbers of consumers that have reached their inactivity threshold (#6460)
  • A bug which could result in stuck consumers after a leader change has been fixed (#6469)
  • Fixed an issue where it was not possible to update a stream or consumer if up against the max streams or max consumers limit (#6477)
  • The preferred stream leader will no longer respond if it has not completed setting up the Raft node yet, fixing some API timeouts on stream info and other API calls shortly after the stream is created (#6480)
  • Auth callouts can now correctly authenticate the username and password or authorization token from a leafnode connection (#6492)
  • Stream ingest from an imported subject will now continue to work correctly after an update to imports/exports via a JWT update (#6498)
  • Parallel stream creation requests for the same stream will no longer incorrectly return a limits error when max streams is configured (#6502)
  • Consumers created or recreated while a cluster node was down are now handled correctly after a snapshot when the node comes back online (#6507)
  • Invalidate entries in the pending append entry cache correctly, reducing the chance of an incorrect apply (#6513)
  • When compacting or truncating streams or logs, correctly clean up the delete map, fixing potential memory leaks and the potential for index.db to not be recovered correctly after a restart (#6515)
  • Retry removals from acks if they have been missed due to the consumer ack floor being ahead of the stream applies, correcting a potential stream drift across replicas (#6519)
  • When recovering from block files, do not put deleted messages below the first sequence into the delete map (#6521)
  • Preserve max delivered messages with interest retention policy using the redelivered state, such that a new consumer will not unexpectedly remove the message (#6575)

Leafnodes

  • Do not incorrectly send duplicate messages when a queue group has members spread across different leafnodes connected through a gateway (#6517)

WebSockets

  • Fixed a couple of cases where memory may not be reclaimed from Flate compressors correctly after a WebSocket client disconnect or error scenario (#6451)

Tests

Complete Changes

v2.10.25...v2.10.26
