This is release v1.11.0-rc.0 of Grafana Alloy.
Upgrading
Read the release notes for specific instructions on upgrading from older versions.
Notable changes:
Breaking changes
- Prometheus dependency had a major version upgrade from v2.55.1 to v3.4.2. (@thampiotr)
- The `.` pattern in regular expressions in PromQL now matches newline characters. With this change, a regular expression like `.*` matches strings that include `\n`. This applies to matchers in queries and relabel configs in Prometheus and Loki components.
- The `enable_http2` setting in `prometheus.remote_write` component's endpoints now defaults to `false`. Previously, in Prometheus v2, the remote write HTTP client defaulted to HTTP/2. To parallelize multiple remote write queues across multiple sockets, it is preferable not to default to HTTP/2. If you prefer to use HTTP/2 for remote write, you must now set `enable_http2` to `true` in your `prometheus.remote_write` endpoints configuration section (see the configuration sketch after this list).
- The experimental CLI flag `--feature.prometheus.metric-validation-scheme` has been deprecated and has no effect. You can configure the metric validation scheme individually for each `prometheus.scrape` component (a hedged sketch follows this list).
- Log message format has changed for some of the `prometheus.*` components as part of the upgrade to Prometheus v3.
- The values of the `le` label of classic histograms and the `quantile` label of summaries are now normalized upon ingestion. In previous Alloy versions, which used Prometheus v2, the value of these labels depended in some situations on the scrape protocol (protobuf vs. text format). This led to label values changing based on the scrape protocol. For example, a metric exposed as `my_classic_hist{le="1"}` would be ingested as `my_classic_hist{le="1"}` via the text format, but as `my_classic_hist{le="1.0"}` via protobuf. This changed the identity of the metric and caused problems when querying it. The current Alloy release, which uses Prometheus v3, always normalizes these label values to a float-like representation, so the above example always results in `my_classic_hist{le="1.0"}` being ingested into Prometheus, regardless of the scrape protocol. The effect of this change is that alerts, recording rules, and dashboards that directly reference label values as whole numbers, such as `le="1"`, will stop working. The recommended way to deal with this change is to fix references to integer `le` and `quantile` label values, but otherwise do nothing and accept that some queries spanning the transition time will produce inaccurate or unexpected results. See the upstream Prometheus v3 migration guide for more details.
- `prometheus.exporter.windows` dependency has been updated to v0.31.1. (@dehaansa)
  - There are various renamed metrics and two removed collectors (`cs`, `logon`).
- Add `otel_attrs_to_hec_metadata` configuration block to `otelcol.exporter.splunkhec` to match `otelcol.receiver.splunkhec`. (@cgetzen)
- [`otelcol.processor.batch`] Two arguments have different default values. (@ptodev)
  - `send_batch_size` is now set to 2000 by default. It used to be 8192.
  - `send_batch_max_size` is now set to 3000 by default. It used to be 0.
  - This helps prevent issues with ingestion of batches that are too large. A sketch after this list shows how to restore the previous values if needed.
- OpenTelemetry Collector dependencies upgraded from v0.128.0 to v0.134.0. (@ptodev)
  - The `otelcol.receiver.opencensus` component has been deprecated and will be removed in a future release, use `otelcol.receiver.otlp` instead.
  - [`otelcol.exporter.*`] The deprecated `blocking` argument in the `sending_queue` block has been removed. Use `block_on_overflow` instead (see the sketch after this list).
  - [`otelcol.receiver.kafka`, `otelcol.exporter.kafka`]: Removed the `broker_addr` argument from the `aws_msk` block. Also removed the `SASL/AWS_MSK_IAM` authentication mechanism.
  - [`otelcol.exporter.splunkhec`] The `batcher` block is deprecated and will be removed in a future release. Use the `queue` block instead.
  - [`otelcol.exporter.loadbalancing`] Use a linear probe to decrease variance caused by hash collisions, which was causing a non-uniform distribution of load balancing.
  - [`otelcol.connector.servicegraph`] The `database_name_attribute` argument has been removed.
  - [`otelcol.connector.spanmetrics`] Adds a default maximum number of exemplars within the metric export interval.
  - [`otelcol.processor.tail_sampling`] Add a new `block_on_overflow` config attribute.
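A `prometheus.remote_write` endpoint that should keep using HTTP/2 after the upgrade can opt back in explicitly. A minimal sketch; the URL is a placeholder:

```alloy
prometheus.remote_write "default" {
  endpoint {
    // Placeholder URL; HTTP/2 is no longer the default, so opt back in explicitly.
    url          = "https://prometheus.example.com/api/v1/write"
    enable_http2 = true
  }
}
```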
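The per-component replacement for the removed CLI flag might look like the following sketch. The argument name `metric_name_validation_scheme` mirrors the upstream Prometheus scrape option and is an assumption here; check the `prometheus.scrape` reference for the exact name and accepted values.

```alloy
prometheus.scrape "example" {
  targets    = [{"__address__" = "localhost:9100"}]
  forward_to = [prometheus.remote_write.default.receiver]

  // Assumed argument name, mirroring the Prometheus scrape config option.
  metric_name_validation_scheme = "legacy"
}
```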
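If the new batching defaults don't suit your pipeline, you can pin the previous values explicitly. A minimal sketch, assuming an `otelcol.exporter.otlp` component named `default` exists:

```alloy
otelcol.processor.batch "default" {
  // Restore the pre-1.11 defaults instead of the new 2000/3000 values.
  send_batch_size     = 8192
  send_batch_max_size = 0

  output {
    metrics = [otelcol.exporter.otlp.default.input]
    logs    = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}
```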
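Configurations that set the removed `blocking` argument can switch to `block_on_overflow`. A sketch using `otelcol.exporter.otlp` as an example; the endpoint is a placeholder, and the exact queue arguments may differ per exporter:

```alloy
otelcol.exporter.otlp "default" {
  client {
    endpoint = "otel-collector.example.com:4317"
  }

  sending_queue {
    enabled           = true
    // Replaces the removed `blocking` argument.
    block_on_overflow = true
  }
}
```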
Features
- Add the `otelcol.receiver.fluentforward` receiver to receive logs via the Fluent Forward Protocol (see the sketch after this list). (@rucciva)
- Add the `prometheus.enrich` component to enrich metrics using labels from `discovery.*` components. (@ArkovKonstantin)
- Add `node_filter` configuration block to the `loki.source.podlogs` component to enable node-based filtering for pod discovery. When enabled, only pods running on the specified node will be discovered and monitored, significantly reducing API server load and network traffic in DaemonSet deployments (see the sketch after this list). (@QuentinBisson)
- (Experimental) Additions to the experimental `database_observability.mysql` component:
  - `query_sample` collector now supports auto-enabling the necessary `setup_consumers` settings (@cristiangreco)
  - `query_sample` collector is now compatible with MySQL versions older than 8.0.28 (@cristiangreco)
  - include `server_id` label on log entries (@matthewnolf)
  - support receiving targets argument and relabel those to include `server_id` (@matthewnolf)
  - updated the config blocks and documentation (@cristiangreco)
- (Experimental) Additions to the experimental `database_observability.postgres` component:
  - add `query_tables` collector for postgres (@matthewnolf)
  - add `cloud_provider.aws` configuration that optionally allows supplying the ARN of the database under observation. The ARN is appended to metric samples as labels for easier filtering and grouping of resources.
  - add `query_sample` collector for postgres (@gaantunes)
  - add `schema_table` collector for postgres (@fridgepoet)
  - include `server_id` label on logs and metrics (@matthewnolf)
- Add `otelcol.receiver.googlecloudpubsub` community component to receive metrics, traces, and logs from a Google Cloud Pub/Sub subscription. (@eraac)
- (Experimental) Add a `honor_metadata` configuration argument to the `prometheus.scrape` component. When set to `true`, it propagates metric metadata to downstream components (see the sketch after this list).
- Add a flag to the `pyroscope.ebpf` Alloy configuration to set the off-CPU profiling threshold. (@luweglarz)
- Add `encoding.url_encode` and `encoding.url_decode` std lib functions (usage sketch after this list). (@kalleep)
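A minimal sketch of the new Fluent Forward receiver. The `endpoint` argument name and address are assumptions based on the upstream receiver's configuration, and the `otelcol.exporter.otlp.default` target is assumed to exist:

```alloy
otelcol.receiver.fluentforward "default" {
  // Assumed argument name; listens for Fluent Forward traffic.
  endpoint = "0.0.0.0:8006"

  output {
    logs = [otelcol.exporter.otlp.default.input]
  }
}
```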
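A sketch of node-based filtering in a DaemonSet deployment. The `node_filter` block comes from the release notes, but the argument names inside it (`enabled`, `node_name`) and the `NODE_NAME` environment variable are assumptions for illustration; check the component reference for the exact schema.

```alloy
loki.source.podlogs "daemonset" {
  forward_to = [loki.write.default.receiver]

  node_filter {
    // Assumed arguments: restrict discovery to the node this Alloy instance runs on.
    enabled   = true
    node_name = sys.env("NODE_NAME")
  }
}
```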
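A minimal sketch of the experimental metadata propagation; the scrape target and remote write component are placeholders:

```alloy
prometheus.scrape "example" {
  targets    = [{"__address__" = "localhost:9100"}]
  forward_to = [prometheus.remote_write.default.receiver]

  // Experimental: forward metric metadata to downstream components.
  honor_metadata = true
}
```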
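A sketch of the new std lib functions used inside an expression; the URL and tenant value are placeholders:

```alloy
prometheus.remote_write "default" {
  endpoint {
    // encoding.url_encode escapes characters that are not URL-safe.
    url = "https://push.example.com/api/v1/write?tenant=" + encoding.url_encode("team a/b")
  }
}
```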
For a full list of changes, please refer to the CHANGELOG!
Installation
Refer to our installation guide for how to install Grafana Alloy.