The OpenTelemetry Collector Contrib distribution contains everything in the opentelemetry-collector release; be sure to check the release notes there as well.
End User Changelog
🛑 Breaking changes 🛑
- exporter/elasticsearch: Remove `host.os.type` encoding in ECS mode (#46900)
  Use processor/elasticapmprocessor v0.36.2 or later for `host.os.type` enrichment.
- receiver/prometheus: Remove the deprecated `report_extra_scrape_metrics` receiver configuration option and obsolete extra scrape metric feature gates. (#44181)
  `report_extra_scrape_metrics` is no longer accepted in `prometheus` receiver configuration.
  Control extra scrape metrics through the `PromConfig.ScrapeConfigs.ExtraScrapeMetrics` setting instead.
🚩 Deprecations 🚩
- receiver/awsfirehose: Deprecate built-in unmarshalers (cwlogs, cwmetrics, otlp_v1) in favor of encoding extensions. (#45830)
  Use the aws_logs_encoding extension (format: cloudwatch) instead of cwlogs,
  and the awscloudwatchmetricstreams_encoding extension instead of cwmetrics (format: json)
  or otlp_v1 (format: opentelemetry1.0).
- receiver/file_log: Rename the `filelog` receiver to `file_log`, with deprecated alias `filelog` (#45339)
- receiver/kafka: Deprecate the built-in `azure_resource_logs` encoding in favour of the `azure` encoding extension. (#46267)
  The built-in `azure_resource_logs` encoding does not support all timestamp formats
  emitted by Azure services (e.g. US-format timestamps from Azure Functions).
  Users should migrate to the `azure` encoding extension,
  which provides full control over time formats and is actively maintained.
💡 Enhancements 💡
- cmd/opampsupervisor: Add configuration validation before applying remote config to prevent collector downtime (#41068)
  Validates collector configurations before applying them, preventing downtime from invalid remote configs.
  Disabled by default. Enable via `agent.validate_config: true`. May produce false positives when resources
  like ports are temporarily unavailable during validation.
- connector/datadog: Document that the datadog connector is not supported in AIX environments (#47010)
  Explicitly opt out of host metadata computation in datadog components to support the AIX compilation target.
- connector/signal_to_metrics: Add `keys_expression` support in `include_resource_attributes` and `attributes` for dynamic attribute key resolution at runtime (#46884)
  The `keys_expression` field allows specifying an OTTL value expression that resolves to a list
  of attribute keys at runtime. This enables dynamic resource attribute filtering based on runtime
  data such as client metadata. Exactly one of `key` or `keys_expression` must be set per entry.
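A sketch of how this might look in a connector config; the metric definition and the exact OTTL expression are assumptions, only the `key`/`keys_expression` exclusivity comes from the entry above:

```yaml
# signaltometrics connector (sketch): keys_expression resolves attribute
# keys at runtime via an OTTL value expression.
connectors:
  signaltometrics:
    spans:
      - name: request.count            # hypothetical metric definition
        description: Count of spans
        include_resource_attributes:
          # Exactly one of `key` or `keys_expression` may be set per entry.
          - key: service.name
          - keys_expression: resource.attributes["metadata.keys"]  # assumed expression
        sum:
          value: "1"
```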
- connector/signal_to_metrics: Reduce per-signal allocations in the hot path by replacing attribute map allocation with a pooled hash-based ID check, and caching filtered resource attributes per metric definition within each resource batch. (#47197)
- connector/signal_to_metrics: Pre-compute the prefixed collector key to avoid a string allocation on every signal processed. (#47183)
- exporter/datadog: Document that the datadog exporter is not supported in AIX environments (#47010)
  Explicitly opt out of host metadata computation in datadog components to support the AIX compilation target.
- exporter/elasticsearch: Add `histogram:raw` mapping hint to bypass midpoint approximation for histogram metrics (#47150)
- exporter/elasticsearch: Cache the metric attribute set per bulk session instead of recomputing it for every document (#47170)
  `syncBulkIndexerSession.Add()` was calling `getAttributesFromMetadataKeys` +
  `attribute.NewSet` + `metric.WithAttributeSet` on every document in the hot path. The attribute set is
  derived from the request context metadata, which is constant for the lifetime of a session, so it is
  now computed once in `StartSession` and reused across all `Add()` calls in that session.
- exporter/elasticsearch: Populate the `_doc_count` field in ECS mapping mode (#46936)
  `_doc_count` is a special metadata field in Elasticsearch used when a document represents pre-aggregated data (such as histograms or aggregate metrics).
  Previously, the elasticsearch exporter only populated this field for the otel mapping mode (native otel field structure). This change
  adds support for the ECS mapping mode (native ECS field structure) so that behavior is consistent across both mapping modes.
- exporter/elasticsearch: Encode `require_data_stream` in Elasticsearch bulk action metadata instead of the bulk request query string. (#46970)
  This preserves existing endpoint query parameters while moving `require_data_stream`
  to the per-document action line expected by newer bulk workflows. Benchmarks show
  a stable ~27 bytes/item NDJSON payload overhead before compression.
- exporter/elasticsearch: Improve performance of Elasticsearch exporter document serialisation (#47171)
- exporter/elasticsearch: Add a metric for docs retried because of request errors (#46215)
- exporter/kafka: Cache OTel metric attribute sets in the OnBrokerE2E hook to reduce per-export allocations (#47186)
  `OnBrokerE2E` previously rebuilt `attribute.NewSet` + `metric.WithAttributeSet` on every
  call. The set of distinct (nodeID, host, outcome) combinations is bounded by
  2 × number-of-brokers, so the computed `MeasurementOption` is now cached per key.
- exporter/pulsar: This component does not support aix/ppc64. (#47010)
  Make the exporter explicitly panic if used in aix/ppc64 environments.
- extension/datadog: Document that the datadog extension is not supported in AIX environments (#47010)
  Explicitly opt out of host metadata computation in datadog components to support the AIX compilation target.
- extension/db_storage: Make dbstorage work in AIX environments (#47010)
  sqlite support is offered via modernc, which doesn't support the AIX ppc64 compilation target.
  We carve out sqlite support in AIX environments so contrib can compile for this target.
- extension/health_check: Add component event attributes to serialized output. (#43606)
  When `http.status.include_attributes` is enabled in the healthcheckv2 extension (with `use_v2: true`),
  users will see additional attributes in the status output. These attributes provide more
  context about component states, including details like error messages and affected components.
  For example: `{ "healthy": false, "status": "error", "attributes": { "error_msg": "not enough permissions to read cpu data", "scrapers": ["cpu", "memory", "network"] } }`
- extension/healthcheckv2: Add component event attributes to serialized output. (#43606)
  When `http.status.include_attributes` is enabled in the healthcheckv2 extension (with `use_v2: true`),
  users will see additional attributes in the status output. These attributes provide more
  context about component states, including details like error messages and affected components.
  For example: `{ "healthy": false, "status": "error", "attributes": { "error_msg": "not enough permissions to read cpu data", "scrapers": ["cpu", "memory", "network"] } }`
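A sketch of enabling the attribute output described above; key names beyond `use_v2` and `http.status.include_attributes` (which the entries quote) are assumptions:

```yaml
# health_check extension with v2 behavior enabled (sketch).
extensions:
  health_check:
    use_v2: true
    http:
      endpoint: 0.0.0.0:13133   # assumed default-style endpoint
      status:
        enabled: true
        # New: include component event attributes (error messages,
        # affected scrapers, etc.) in the serialized status output.
        include_attributes: true
```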
- extension/sigv4auth: Add support for External IDs when assuming roles in cross-account authentication scenarios (#44930)
  Added an `external_id` field to the AssumeRole configuration, allowing users to specify
  an External ID when assuming roles for enhanced cross-account security.
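For illustration, a sketch of the new field in a sigv4auth config; the role ARN and External ID values are placeholders:

```yaml
# sigv4auth extension (sketch): cross-account AssumeRole with an External ID.
extensions:
  sigv4auth:
    region: us-east-1
    assume_role:
      arn: arn:aws:iam::123456789012:role/example-role  # placeholder ARN
      # New: External ID passed to STS AssumeRole for enhanced
      # cross-account security.
      external_id: my-external-id                        # placeholder value
```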
- internal/datadog: Do not compute host metadata in AIX environments (#47010)
  Explicitly opt out of host metadata computation in datadog components to support the AIX compilation target.
- pkg/stanza: Ensure the router operator does not split batches of entries (#42393)
- pkg/stanza: Parse all Windows Event XML fields into the log body, including RenderingInfo (with Culture, Channel, Provider, Task, Opcode, Keywords, Message), UserData, ProcessingErrorData, DebugData, and BinaryEventData. (#46943)
  Previously, RenderingInfo was only used to derive the top-level level/task/opcode/keywords/message
  fields. It is now also emitted as a top-level `rendering_info` key containing all fields including
  `culture`, `channel`, and `provider`. UserData (an alternative to EventData used by some providers)
  is now parsed into a `user_data` key. The rarer schema elements ProcessingErrorData, DebugData, and
  BinaryEventData are also captured when present.
- processor/resourcedetection: Added an IBM Cloud VPC resource detector to the Resource Detection Processor (#46874)
- processor/resourcedetection: Added an IBM Cloud Classic resource detector to the Resource Detection Processor (#46874)
- processor/tail_sampling: Add `sampling_strategy` config with `trace-complete` and `span-ingest` modes for tail sampling decision timing and evaluation behavior. (#46600)
- receiver/awslambda: Enrich context with AWS Lambda receiver metadata for S3 logs (#47046)
- receiver/azure_event_hub: Add support for Azure Event Hubs distributed processing. This allows the receiver to automatically coordinate partition ownership and checkpointing across multiple collector instances via Azure Blob Storage. (#46595)
- receiver/docker_stats: Add TLS configuration support for connecting to the Docker daemon over HTTPS with client and server certificates. (#33557)
  A new optional `tls` configuration block is available in the `docker_stats` receiver config (and the
  shared `internal/docker` package). When omitted, the connection remains insecure (plain HTTP or
  Unix socket), preserving existing behavior. When provided, it supports the standard
  `configtls.ClientConfig` fields: `ca_file`, `cert_file`, `key_file`, `insecure_skip_verify`,
  `min_version`, and `max_version`.
  A warning is now emitted when a plain `tcp://` or `http://` endpoint is used without TLS,
  reflecting Docker's deprecation of unauthenticated TCP connections since Docker v26.0
  (see https://docs.docker.com/engine/deprecated/#unauthenticated-tcp-connections).
- receiver/docker_stats: Add a `stream_stats` config option to maintain a persistent Docker stats stream per container instead of opening a new connection on every scrape cycle. (#46493)
  When `stream_stats: true` is set, each container maintains a persistent open Docker stats
  stream instead of opening and closing a new connection on every scrape cycle. The scraper
  reads from the cached latest value, which reduces connection overhead.
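A sketch combining the two new docker_stats options described above; the endpoint and certificate paths are placeholders, and the field names follow the `configtls.ClientConfig` fields the TLS entry lists:

```yaml
# docker_stats receiver (sketch): TLS to the daemon plus streaming stats.
receivers:
  docker_stats:
    endpoint: https://docker-host.example.com:2376  # placeholder host
    # New optional TLS block (standard configtls.ClientConfig fields):
    tls:
      ca_file: /etc/otel/docker-ca.pem
      cert_file: /etc/otel/docker-client-cert.pem
      key_file: /etc/otel/docker-client-key.pem
    # New: keep one persistent stats stream per container instead of
    # reopening a connection on every scrape cycle.
    stream_stats: true
```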
- receiver/expvar: Enable the re-aggregation feature for the expvar receiver (#45396)
- receiver/file_log: Add `max_log_size_behavior` config option to control oversized log entry behavior (#44371)
  The new `max_log_size_behavior` setting controls what happens when a log entry exceeds `max_log_size`:
  - `split` (default): Splits oversized log entries into multiple log entries. This is the existing behavior.
  - `truncate`: Truncates oversized log entries and drops the remainder, emitting only a single truncated log entry.
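For example, truncation instead of the default splitting might be configured like this (the include glob and size limit are illustrative assumptions):

```yaml
# filelog receiver (sketch): truncate oversized entries instead of splitting.
receivers:
  filelog:
    include: [/var/log/app/*.log]   # assumed path
    max_log_size: 1MiB              # assumed limit
    # New: `split` (default, existing behavior) or `truncate`.
    max_log_size_behavior: truncate
```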
- receiver/hostmetrics: Enable re-aggregation for the system scraper (#46624)
  Enabled the reaggregation feature gate for the system scraper.
- receiver/hostmetrics: Enable re-aggregation for the process scraper (#46623)
  Enabled the reaggregation feature gate for the process scraper and set all metric attributes (context_switch_type, direction, paging_fault_type, state) to requirement_level recommended.
- receiver/mongodb: Enable the re-aggregation feature for mongodb receiver metrics (#46366)
- receiver/mongodb: Add a `scheme` configuration option to support `mongodb+srv` connections (#36011)
  The new `scheme` field allows connecting to MongoDB clusters using
  SRV DNS records (the mongodb+srv protocol). Defaults to "mongodb" for
  backward compatibility.
- receiver/mysql: Add a `mysql.query_plan.hash` attribute to top query log records, enabling users to correlate top queries with their corresponding execution plans. (#46626)
- receiver/mysql: Added `mysql.session.status` and `mysql.session.id` attributes to query samples. `mysql.session.status` indicates the session status (`waiting`, `running`, or `other`) at the time of the sample; `mysql.session.id` provides the unique session identifier. Both attributes provide additional context for understanding query performance and behavior. (#135350)
- receiver/mysql: Add and tune obfuscation of sensitive properties in both V1 and V2 JSON query plans. (#46629, #46587)
  Configure and test obfuscation for V1 and V2 plans, including tests of queries retrieved from the performance schema that are truncated and cannot be obfuscated.
  The importance of obfuscation can be very context dependent; sensitive PII, banking, and authorization data may reside in the same database as less sensitive data, and it can be vital to ensure that what is expected to be obfuscated always is. Significant additional testing has been added around query plan obfuscation to enforce this and to give users a clear reference for what is and is not obfuscated.
- receiver/mysql: Propagate W3C TraceContext from MySQL session variables to query sample log records. When a MySQL session sets `@traceparent`, the receiver extracts the TraceID and SpanID and stamps them onto the corresponding `db.server.query_sample` log record, enabling correlation between application traces and query samples. (#46631)
  Only samples from sessions where `@traceparent` is set will have non-zero `traceId` and `spanId` fields on the log record.
- receiver/prometheus: Add support for reading instrumentation scope attributes from `otel_scope_<attribute-name>` labels while feature-gating deprecation of `otel_scope_info`. (#41502)
  Scope attributes are always extracted from `otel_scope_<attribute-name>` labels on metrics.
  The `receiver.prometheusreceiver.IgnoreScopeInfoMetric` feature gate (alpha, disabled by default)
  controls only whether the legacy `otel_scope_info` metric is ignored for scope attribute extraction.
  When the gate is disabled, both mechanisms coexist to support migration.
  See the specification change for motivation: open-telemetry/opentelemetry-specification#4505
- receiver/pulsar: This component does not support aix/ppc64. (#47010)
  Make the receiver explicitly panic if used in aix/ppc64 environments.
- receiver/skywalking: Add feature gate `translator.skywalking.useStableSemconv` to update semantic conventions from v1.18.0 to v1.38.0 (#44796)
  A feature gate `translator.skywalking.useStableSemconv` has been added to control the migration.
  The gate is disabled by default (Alpha stage), so existing behavior is preserved.
- receiver/sqlquery: Add clickhouse support to sqlquery (#47116)
- receiver/sqlquery: Add `row_condition` to metric configuration for filtering result rows by column value (#45862)
  Enables extracting individual metrics from pivot-style result sets where each row
  represents a different metric (e.g. pgbouncer's `SHOW LISTS` command). When
  `row_condition` is configured on a metric, only rows where the specified column
  equals the specified value are used; all other rows are silently skipped.
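A sketch of how this could look for the pgbouncer example mentioned above; the exact shape of the `row_condition` fields (column/value pair) and the surrounding query config are assumptions, only the filter-by-column-equality semantics come from the entry:

```yaml
# sqlquery receiver (sketch): one metric per row of a pivot-style result set.
receivers:
  sqlquery:
    driver: postgres
    datasource: "host=pgbouncer port=6432 user=otel dbname=pgbouncer"  # placeholder
    queries:
      - sql: "SHOW LISTS"
        metrics:
          - metric_name: pgbouncer.pools      # hypothetical metric
            value_column: items
            # Assumed field shape: only rows where column `list` equals
            # "pools" feed this metric; other rows are silently skipped.
            row_condition:
              column: list
              value: pools
```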
- receiver/sqlserver: Enable dynamic metric reaggregation in the SQL Server receiver. (#46379)
- receiver/yang_grpc: Support collecting any metric by browsing the whole metrics tree (#47054)
🧰 Bug fixes 🧰
- exporter/kafka: Fix the validation for `topic_from_metadata_key` to use partition keys. (#46994)
- exporter/kafka: Fix topic routing for multi-resource batches when `topic_from_attribute` is set without resource-level partitioning (#46872)
  Previously, when a batch contained multiple resources with different
  topic attribute values, all data was silently sent to the topic of the
  first resource. Each resource is now correctly routed to its own topic.
- exporter/splunk_hec: Fix timestamp precision in the Splunk HEC exporter to preserve microseconds instead of truncating to milliseconds. (#47175)
  Timestamps were rounded to milliseconds before sending to Splunk HEC. The rounding has been removed, giving microsecond precision in the HEC `time` field.
- extension/bearertokenauth: Redact the bearer token from authentication error messages to prevent credential exposure in logs. (#46200)
  Previously, when a client presented an invalid bearer token, the full token value was
  included in the error message returned by the Authenticate method. This error could be
  propagated to log output, exposing sensitive credentials. The error message now omits
  the token value entirely.
- internal/aws: Respect NO_PROXY/no_proxy environment variables when using env-based proxy configuration in awsutil (#46892)
  When no explicit `proxy_address` was configured, the HTTP client manually read HTTPS_PROXY
  and used `http.ProxyURL`, which ignores NO_PROXY. It now delegates to `http.ProxyFromEnvironment`,
  which correctly handles all proxy environment variables.
- processor/deltatorate: Append "/s" to the unit of output datapoints to reflect the per-second rate. (#46841)
- processor/filter: Fix validation of include and exclude severity configurations so they run independently of LogConditions. (#46883)
- receiver/datadog: Propagate the Datadog trace sampling priority to all spans translated from a trace chunk. (#45402)
- receiver/file_log: Fix data corruption after file compression (#46105)
  After a log file is compressed (e.g. test.log → test.log.gz), the receiver configured with `compression: auto` will now correctly decompress the content and continue reading from where the plaintext file left off.
- receiver/file_log: Fix a bug where the File Log receiver did not read the last line of gzip-compressed files. (#45572)
- receiver/hostmetrics: Align HugePages metric instrument types with the semantic conventions by emitting page_size, reserved, and surplus as non-monotonic sums instead of gauges. (#42650)
- receiver/hostmetrics: Handle nil PageFaultsStat in the process scraper to prevent a panic on zombie processes. (#47095)
- receiver/journald: Fix emitting of historical entries on startup (#46556)
  When `start_at` is "end" (the default), pass `--lines=0` to journalctl to suppress
  the 10 historical entries it emits by default in follow mode.
- receiver/k8s_events: Exclude DELETED watch events to prevent duplicate event ingestion. (#47035)
- receiver/mysql: Remove the deprecated `information_schema.processlist` JOIN from the query samples template; use `thread.processlist_host` instead. (#47041)
- receiver/oracledb: Fix the oracledb receiver aborting an entire scrape when a SQL query text fails to obfuscate (e.g. due to Oracle truncating a CLOB mid-string-literal). The affected entry is now skipped with a warning log and the rest of the scrape continues normally. (#47151)
- receiver/otelarrow: Remove assumed positions of OTel Arrow root payload types (#46878)
- receiver/otelarrow: Fix OTLP fallback handlers returning codes.Unknown instead of codes.Unavailable for pipeline errors, which caused upstream exporters to permanently drop data instead of retrying. (#46182)
- receiver/pprof: Fix the pprof receiver's file_scrapper appending resource profiles instead of merging them. (#46991)
- receiver/prometheus_remote_write: Count target_info samples in PRW response stats (#47108)
API Changelog
🛑 Breaking changes 🛑
- pkg/stanza: Change the signature of `adapter.NewFactory` to accept `xreceiver.FactoryOptions` (#45339)
  While the change is technically breaking, existing calls to `adapter.NewFactory` will continue to work.
💡 Enhancements 💡
- pkg/expohisto: Move the Go exponential histogram data structure into the collector-contrib repository (#46646)
- receiver/docker_stats: Add TLS configuration support for connecting to the Docker daemon over HTTPS with client and server certificates. (#33557)
  A new optional `tls` configuration block is available in the `docker_stats` receiver config (and the
  shared `internal/docker` package). When omitted, the connection remains insecure (plain HTTP or
  Unix socket), preserving existing behavior. When provided, it supports the standard
  `configtls.ClientConfig` fields: `ca_file`, `cert_file`, `key_file`, `insecure_skip_verify`,
  `min_version`, and `max_version`.
  A warning is now emitted when a plain `tcp://` or `http://` endpoint is used without TLS,
  reflecting Docker's deprecation of unauthenticated TCP connections since Docker v26.0
  (see https://docs.docker.com/engine/deprecated/#unauthenticated-tcp-connections).
- receiver/docker_stats: Add a `stream_stats` config option to maintain a persistent Docker stats stream per container instead of opening a new connection on every scrape cycle. (#46493)
  When `stream_stats: true` is set, each container maintains a persistent open Docker stats
  stream instead of opening and closing a new connection on every scrape cycle. The scraper
  reads from the cached latest value, which reduces connection overhead.
- receiver/sqlquery: Add `row_condition` to metric configuration for filtering result rows by column value (#45862)
  Enables extracting individual metrics from pivot-style result sets where each row
  represents a different metric (e.g. pgbouncer's `SHOW LISTS` command). When
  `row_condition` is configured on a metric, only rows where the specified column
  equals the specified value are used; all other rows are silently skipped.
We are thrilled to welcome our first-time contributors to this project. Thank you for your contributions @ysmnababan, @Naman-B-Parlecha, @iamjaeholee, @sk-, @mjnowen, @sakke, @Dylan-M, @jijo-OO7, @shrenikjain38, @ashimrai123, @vkalintiris, @vesari, @aelnahas, @harshil562, @Skesov, @Anadee11, @NimrodAvni78, @Bingtagui404, @JakeDern, @kevinburkesegment, @splunk-shanu ! 🎉