RabbitMQ 3.13.0
is a new feature release.
Highlights
This release includes several new features, optimizations, internal changes in preparation for RabbitMQ 4.x,
and an updated documentation website.
The user-facing areas that have seen the biggest improvements in this release are
- Khepri now can be evaluated as an alternative schema data store in RabbitMQ, replacing Mnesia.
- NB: Khepri is currently an experimental feature and should not yet be used for production.
- MQTTv5 support
- Support for server-side stream filtering
- A new common message container format used internally, based on the AMQP 1.0 message container format
- Improved classic non-mirrored queue performance with message sizes larger than
4 KiB (or a different customized CQ index embedding threshold)
- Classic queue storage implementation version 2 (CQv2) is now highly recommended for all new deployments.
CQv2 meaningfully improves the performance of non-mirrored classic queues for most workloads.
See Compatibility Notes below to learn about breaking or potentially breaking changes in this release.
Release Artifacts
RabbitMQ releases are distributed via GitHub.
Debian and RPM packages are available via Cloudsmith mirrors.
Community Docker image, Chocolatey package, and the Homebrew formula
are other installation options. They are updated with a delay.
Erlang/OTP Compatibility Notes
This release requires Erlang 26.x.
Provisioning Latest Erlang Releases explains
what package repositories and tools can be used to provision latest patch versions of Erlang 26.x.
Upgrading to 3.13
Documentation guides on upgrades
See the Upgrading guide for documentation on upgrades and the RabbitMQ change log
for release notes of other releases.
Note that since 3.12.0 requires all feature flags to be enabled before upgrading,
there is no upgrade path from 3.11.24 (or a later patch release) straight to 3.13.0.
Required Feature Flags
This release does not graduate any feature flags.
However, all users are highly encouraged to enable all feature flags before upgrading to this release from
3.12.x.
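For example, all stable feature flags can be enabled with a single CLI command:
# enable all stable feature flags before upgrading to 3.13
rabbitmqctl enable_feature_flag all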
Mixed version cluster compatibility
RabbitMQ 3.13.0 nodes can run alongside 3.12.x nodes. 3.13.x-specific features can only be made available
when all nodes in the cluster upgrade to 3.13.0 or a later patch release in the new series.
While operating in mixed version mode, some aspects of the system may not behave as expected. The list of known behavior changes is covered below.
Once all nodes are upgraded to 3.13.0, these irregularities will go away.
Mixed version clusters are a mechanism that allows rolling upgrades and are not meant to be run for extended
periods of time (no more than a few hours).
Recommended Post-upgrade Procedures
Switch Classic Queues to CQv2
We recommend switching classic queues to CQv2 after all cluster nodes have been upgraded,
at first using policies, and then eventually using a setting in rabbitmq.conf.
Upgrading classic queues to CQv2 at boot time using the configuration file setting can be
potentially unsafe in environments where deprecated classic mirrored queues still exist.
For new clusters, adopting CQv2 from the start is highly recommended:
# CQv2 should be used by default for all new clusters
classic_queue.default_version = 2
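For existing clusters, the policy-based switch can look like this (a sketch: the policy name and pattern are placeholders; queue-version is the policy key that controls the classic queue storage version):
# apply CQv2 to all matching classic queues via a policy
rabbitmqctl set_policy cq-v2 "^" '{"queue-version": 2}' --apply-to queues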
Compatibility Notes
This release includes a few potentially breaking changes.
Minimum Supported Erlang Version
Starting with this release, RabbitMQ requires Erlang 26.x. Nodes will fail to start
on older Erlang releases.
Client Library Compatibility
Client libraries that were compatible with RabbitMQ 3.11.x
and 3.12.x
will be compatible with 3.13.0.
RabbitMQ Stream Protocol clients must be upgraded to their latest versions in order to support
the stream filtering feature introduced in this release.
Consistency Model and Schema Modification Visibility Guarantees of Khepri and Mnesia
Khepri has an important difference from Mnesia when it comes to schema modifications such as queue
or stream declarations, or binding declarations. These changes won't be noticeable with many workloads
but can affect some, in particular, certain integration tests.
Consider two scenarios, A and B.
Scenario A
There is only one client. The client performs the following steps:
- It declares a queue Q
- It binds Q to an exchange X
- It publishes a message M to the exchange X
- It expects the message to be routed to queue Q
- It consumes the message
In this scenario, there should be no observable difference in behavior. The client's expectations
will be met.
Scenario B
There are two clients, One and Two, connected to nodes R1 and R3, and using the same virtual host.
Node R2 has no client connections.
Client One performs the following steps:
- It declares a queue Q
- It binds Q to an exchange X
- It gets a queue declaration confirmation back
- It notifies Client Two, or Client Two implicitly finds out, that the steps above have completed (for example, in an integration test)
- Client Two publishes a message M to X
- Clients One and Two expect the message to be routed to Q
In this scenario, on step three Mnesia would return only when all cluster nodes have committed the update.
Khepri, however, will return as soon as a majority of nodes, including the node handling Client One's operations,
have confirmed it.
That majority may include nodes R1 and R2 but not node R3, meaning that message M published by Client Two,
connected to node R3 in the above example, is not guaranteed to be routed.
Once all schema changes propagate to node R3, Client Two's subsequent
publishes on node R3 will be guaranteed to be routed.
This is a trade-off of Raft-based systems, which assume that a write accepted by a majority of nodes
can be considered a success.
Workaround Strategies
To satisfy Client Two's expectations in scenario B, Khepri could perform consistent (involving a majority of replicas)
queries of bindings when routing messages but that would have a significant impact on throughput
of certain protocols (such as MQTT) and exchange/destination types (anything that resembles a topic exchange in AMQP 0-9-1).
Applications that rely on multiple connections that depend on a shared topology have
several coping strategies.
If an application uses two or more connections to different nodes, it can
declare its topology on boot and then inject a short pause (1-2 seconds) before proceeding with
other operations.
Applications that rely on dynamic topologies can switch to a "static" set of
exchanges and bindings.
Application components that do not need to use a shared topology can each configure
its own queues/streams/bindings.
Test suites that use multiple connections to different nodes can choose to use just one connection or
connect to the same node, or inject a pause, or await a certain condition that indicates that the topology
is in place.
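For example, a test suite could await the topology like this (a sketch: the queue name Q and the default virtual host are assumptions):
# poll until queue Q is visible before publishing via another connection
until rabbitmqctl list_queues name --quiet | grep -qx "Q"; do
  sleep 1
done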
Management Plugin and HTTP API
The GET /api/queues HTTP API endpoint has dropped several rarely used metrics, resulting in up to 25% traffic savings.
MQTT Plugin
The mqtt.subscription_ttl configuration setting (in milliseconds) was replaced with
mqtt.max_session_expiry_interval_seconds (in seconds).
A 3.13 RabbitMQ node will fail to boot if the old configuration setting is set.
For example, if you set mqtt.subscription_ttl = 3600000
(1 hour) prior to 3.13, replace that setting with mqtt.max_session_expiry_interval_seconds = 3600
(1 hour) in 3.13.
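In rabbitmq.conf terms:
# before 3.13 (milliseconds):
#   mqtt.subscription_ttl = 3600000
# 3.13 and later (seconds):
mqtt.max_session_expiry_interval_seconds = 3600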
rabbitmqctl node_health_check is Now a No-Op
rabbitmqctl node_health_check
has been deprecated for over three years
and is now a no-op (it does nothing).
See the Health Checks section in the monitoring guide
to find out what modern alternatives are available.
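For example, two of the focused health checks covered there:
# is the node running?
rabbitmq-diagnostics check_running
# are any resource alarms in effect on the node?
rabbitmq-diagnostics check_local_alarms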
openSUSE Leap Package is not Provided
An openSUSE Leap package will not be provided with this release of RabbitMQ.
This release requires Erlang 26 and there is an Erlang 26 package available from Erlang Factory
but the package depends on glibc
2.34, and all currently available openSUSE Leap releases
(up to 15.5) ship with 2.31 at most.
Team RabbitMQ would like to continue building an openSUSE Leap package when a Leap 15.5-compatible Erlang 26
package becomes publicly available.
Getting Help
Any questions about this release, upgrades or RabbitMQ in general are welcome in GitHub Discussions or
on our community Discord.
Changes Worth Mentioning
Release notes are kept under rabbitmq-server/release-notes.
The Website
This release, 3.13.0, introduces a major update to the RabbitMQ website.
The new site includes:
- Versioning: over time, it will be able to provide documentation for the few most recent release series, not just the latest one
- A reworked table of contents and navigation
- Search
The update will go live around the 3.13.0 GA release date.
Core Server
Enhancements
-
Khepri now can be used as an alternative schema data store
in RabbitMQ, by enabling a feature flag:
rabbitmqctl enable_feature_flag khepri_db
In practical terms this means that it will be possible to swap Mnesia for a Raft-based data store
that will predictably recover from network partitions and node failures, the same way quorum queues
and streams already do. At the same time, this means
that RabbitMQ clusters now must have a majority of nodes online at all times, or all client operations will be refused.
Like quorum queues and streams, Khepri uses RabbitMQ's Raft implementation under the hood. With Khepri enabled, all key modern features
of RabbitMQ will use the same fundamental approach to recovery from failures, relying on a library that passes a Jepsen test suite.
Team RabbitMQ intends to make Khepri the default schema database starting with RabbitMQ 4.0.
GitHub issue: #7206
-
Messages are now internally stored using a new common, heavily AMQP 1.0-influenced container format. This is a major step towards a protocol-agnostic core:
a common format that encapsulates a sum of data types used by the protocols RabbitMQ supports, plus annotations for routing, dead-lettering state,
and other purposes.
AMQP 1.0, AMQP 0-9-1, MQTT and STOMP have adopted or will adopt this internal representation in upcoming releases. The RabbitMQ Stream protocol already uses the AMQP 1.0 message container
structure internally.
This common internal format will allow for more correct and potentially more efficient multi-protocol support in RabbitMQ,
and will allow most cross-protocol translation rough edges to be smoothed out.
GitHub issue: #5077
-
Target quorum queue replica state is now continuously reconciled.
When the number of online replicas of a quorum queue goes below (or above) its target,
new replicas will be automatically placed if enough cluster nodes are available.
This is a more automatic version of how quorum queue replicas have originally been grown.
For automatic shrinking of queue replicas, the user must opt in.
Contributed by @SimonUnge (AWS).
GitHub issue: #8218
-
Revisited peer discovery implementation that further reduces the probability of two or more
sets of nodes forming separate clusters when all cluster nodes are created at the same time and boot in parallel.
GitHub issue: #9797
-
Classic queue storage v2 (CQv2) has matured and is now recommended for all users.
We recommend switching classic queues to CQv2 after all cluster nodes have been upgraded to 3.13.0,
at first using policies, and then eventually using a setting in rabbitmq.conf.
Upgrading classic queues to CQv2 at boot time using the configuration file setting can be
potentially unsafe in environments where deprecated classic mirrored queues still exist.
For new clusters, adopt CQv2 from the start by setting classic_queue.default_version in rabbitmq.conf:
# only set this value for new clusters or after all nodes have been upgraded to 3.13
classic_queue.default_version = 2
-
Non-mirrored classic queues: storage optimizations for larger (greater than 4 KiB) messages.
-
When a non-mirrored classic queue is declared, its placement node
is now selected with less interaction with cluster peers, speeding up the process
when some nodes have recently gone down.
GitHub issue: #10102
-
A subsystem for marking features as deprecated.
GitHub issue: #7390
-
Plugins now can register custom queue types, making it possible to ship a queue type implementation as a plugin.
Contributed by @luos (Erlang Solutions).
-
Classic queue storage version now can be set via operator policies.
Contributed by @SimonUnge (AWS).
GitHub issue: #9541
-
channel_max_per_node
allows limiting how many channels a node would allow clients to open, in total.
This limit is easier to reason about than the per-node connection limit multiplied by
channel_max (a per-connection limit for channels).
Contributed by @SimonUnge (AWS).
GitHub issue: #10351
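A rabbitmq.conf sketch (the value is illustrative):
# cap the total number of channels across all client connections on this node
channel_max_per_node = 5000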
-
disk_free_limit.absolute and vm_memory_high_watermark.absolute now support more information units:
Mi, Gi, TB, Ti, PB, Pi.
In addition, some of the existing keys were renamed:
- K now means "kilobyte" and not "kibibyte"
- M now means "megabyte" and not "mebibyte"
- G now means "gigabyte" and not "gibibyte"
There is no consensus on how these single-letter suffixes should be interpreted (as their power-of-2 or power-of-10 variants),
so RabbitMQ has adopted the widely used convention established by Kubernetes.
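A brief rabbitmq.conf illustration (the values are arbitrary):
# power-of-10 (decimal) unit
disk_free_limit.absolute = 2GB
# power-of-2 (binary) unit
vm_memory_high_watermark.absolute = 4Gi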
-
Improved efficiency of definition imports in environments with a lot of virtual hosts.
Contributed by @ayandad.
GitHub issue: #10320
-
When a consumer delivery timeout hits, a more informative message is now logged.
Contributed by @rluvaton.
GitHub issue: #10446
Bug Fixes
This release includes all bug fixes shipped in the 3.12.x
series.
-
Feature flag discovery on a newly added node could discover an incomplete inventory of feature flags.
GitHub issue: #8477
-
Feature flag discovery operations will now be retried multiple times in case of network failures.
GitHub issue: #8491
-
Feature flag state is now persisted in a safer way during node shutdown.
GitHub issue: #10279
-
Feature flag synchronization between nodes now avoids a potential race condition.
GitHub issue: #10027
-
The state of node maintenance status across the cluster is now replicated. It previously was accessible
to all nodes but not replicated.
GitHub issue: #9005
CLI Tools
Deprecations
-
rabbitmqctl rename_cluster_node
and rabbitmqctl update_cluster_nodes
are now no-ops.
They were not safe to use with quorum queues and streams, and are completely incompatible with Khepri.
GitHub issue: #10369
Enhancements
-
rabbitmq-diagnostics cluster_status
now responds significantly faster when some cluster
nodes are not reachable.
GitHub issue: #10101
-
rabbitmqctl list_deprecated_features
is a new command that lists the deprecated features
that are used on the target node.
Management Plugin
Enhancements
-
New API endpoint,
GET /api/stream/{vhost}/{name}/tracking
, can be used to track
publisher and consumer offsets in a stream.
GitHub issue: #9642
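For example (a sketch: the host, credentials, stream name events, and the default virtual host are assumptions):
# query publisher and consumer offset tracking for stream "events" in vhost "/"
curl -s -u guest:guest "http://localhost:15672/api/stream/%2F/events/tracking"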
-
Several rarely used queue metrics were removed to reduce inter-node data transfers
and CPU burn during API response generation. The effects will be particularly pronounced
for the GET /api/queues endpoint used without filtering or pagination, which can produce
enormously large responses.
A couple of relevant queue metrics or state fields were lifted to the top level.
This is a potentially breaking change.
Note that Prometheus is the recommended option for monitoring,
not the management plugin's HTTP API.
Bug Fixes
-
GET /api/nodes/{name}
failed with a 500 when called with a non-existent node name.
GitHub issue: #10330
Stream Plugin
Enhancements
-
Support for (consumer) stream filtering.
This allows consumers that are only interested in a subset of data in a stream to receive
less data. Note that false positives are possible, so this feature should be accompanied by
client library or application-level filtering.
GitHub issue: #8207
-
Stream connections now support JWT (OAuth 2) token renewal. The renewal is client-initiated
shortly before token expiry. Therefore, this feature requires stream protocol clients to be updated.
GitHub issue: #9187
-
Stream connections are now aware of JWT (OAuth 2) token expiry.
GitHub issue: #10292
Bug Fixes
-
Stream (replica) membership changes safety improvements.
GitHub issue: #10331
-
Stream protocol connections now comply with the connection limit of the target virtual host.
GitHub issue: #9946
MQTT Plugin
Enhancements
-
Support for MQTTv5 (with limitations).
-
MQTT clients that use QoS 0 now can reconnect more reliably when the node they were connected to fails.
GitHub issue: #10203
-
Negative message acknowledgements are now propagated to MQTTv5 clients.
GitHub issue: #9034
-
Potential incompatibility:
mqtt.subscription_ttl
configuration was replaced with
mqtt.max_session_expiry_interval_seconds
that targets MQTTv5.
GitHub issue: #8846
AMQP 1.0 Plugin
Bug Fixes
-
During AMQP 1.0 to AMQP 0-9-1 conversion, the Correlation ID message property is now stored as
x-correlation-id
(instead of x-correlation) for values longer than 255 bytes.
This is a potentially breaking change.
GitHub issue: #8680
-
AMQP 1.0 connections are now throttled when the node hits a resource alarm.
GitHub issue: #9953
OAuth 2 AuthN and AuthZ Backend Plugin
Enhancements
-
RabbitMQ nodes now allow for multiple OAuth 2 resources to be configured. Each resource can use
a different identity provider (for example, one can be powered by Keycloak and another by Azure Active Directory).
This allows for identity provider infrastructure changes (say, provider A is replaced with provider B over time)
that do not affect RabbitMQ's ability to authenticate clients and authorize the operations they attempt to perform.
GitHub issue: #10012
-
The plugin now performs discovery of certain properties for OpenID-compliant identity providers.
GitHub issue: #10012
Peer Discovery AWS Plugin
Enhancements
-
It is now possible to override how a node's hostname is extracted from AWS API responses during peer discovery.
This is done using
cluster_formation.aws.hostname_path
, a collection of keys that will be used to
traverse the response and extract nested values from it. The list is comma-separated.
The default value is a single-element list, privateDnsName.
Contributed by @illotum (AWS).
GitHub issue: #10097
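In rabbitmq.conf, this can look like the following (the nested path in the commented-out line is hypothetical and only illustrates the comma-separated traversal):
# default behavior, equivalent to leaving the key unset
cluster_formation.aws.hostname_path = privateDnsName
# hypothetical nested path, traversed key by key:
# cluster_formation.aws.hostname_path = networkInterfaceSet,1,privateDnsName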
Dependency Changes
Source Code Archives
To obtain source code of the entire distribution, please download the archive named rabbitmq-server-3.13.0.tar.xz
instead of the source tarball produced by GitHub.