The OpenTelemetry Collector Contrib contains everything in the opentelemetry-collector release; be sure to check the release notes there as well.
End User Changelog
🛑 Breaking changes 🛑
- all: Removes the k8slog receiver after being unmaintained for 3 months (#46544)
- all: Remove deprecated SAPM exporter (#46555)
- all: Remove the datadogsemantics processor. (#46893)
  If you need help, please contact Datadog support: https://www.datadoghq.com/support.
- exporter/google_cloud_storage: `reuse_if_exists` behavior changed: the exporter now checks bucket existence instead of attempting creation (#45971)
  Previously, `reuse_if_exists=true` would attempt bucket creation and fall back to reusing the bucket on conflict.
  Now, `reuse_if_exists=true` checks whether the bucket exists (via `storage.buckets.get`) and uses it, failing if it doesn't exist.
  Set it to `true` when the service account lacks project-level bucket creation permissions but has bucket-level permissions.
  `reuse_if_exists=false` still attempts to create the bucket and fails if it already exists.
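  A minimal sketch of the permission-restricted case described above (the exporter name follows this release's snake-case naming; the `bucket` field nesting is illustrative, so check the exporter README for the exact layout):

  ```yaml
  exporters:
    google_cloud_storage:
      bucket:
        name: my-telemetry-bucket  # illustrative; must already exist
      # The service account only needs storage.buckets.get plus
      # bucket-level write permissions, not project-level creation rights.
      reuse_if_exists: true
  ```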
- exporter/kafka: Remove deprecated top-level `topic` and `encoding` configuration fields (#46916)
  The top-level `topic` and `encoding` fields were deprecated in v0.124.0.
  Use the per-signal fields instead: `logs::topic`, `metrics::topic`, `traces::topic`, `profiles::topic`, and the corresponding `encoding` fields under each signal section.
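  The per-signal replacement fields map to nested sections in the exporter config; the broker settings below are illustrative:

  ```yaml
  exporters:
    kafka:
      brokers: ["broker-1:9092"]  # illustrative
      logs:
        topic: otlp_logs
        encoding: otlp_proto
      metrics:
        topic: otlp_metrics
        encoding: otlp_proto
      traces:
        topic: otlp_spans
        encoding: otlp_proto
  ```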
- exporter/kafka: Remove the Kafka-local batching partitioner wiring; when batching is enabled, `sending_queue::batch::partition::metadata_keys` must now be configured explicitly as a superset of `include_metadata_keys`. (#46757)
- pkg/ottl: The `truncate_all` function now supports UTF-8-safe truncation (#36713)
  The default `truncate_all` behavior has changed. Truncation now respects UTF-8 character boundaries by default (new optional parameter `utf8_safe`, default: `true`), so results stay valid UTF-8 and may be slightly shorter than the limit.
  To keep the previous byte-level truncation behavior (e.g. for non-UTF-8 data or to avoid any behavior change), set `utf8_safe` to `false` in all `truncate_all` usages.
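  A sketch of opting out via a transform processor statement, using the new optional third argument described above (the limit value is illustrative):

  ```yaml
  processors:
    transform:
      log_statements:
        - context: log
          statements:
            # utf8_safe=false keeps the previous byte-level truncation
            - truncate_all(attributes, 100, false)
            - truncate_all(resource.attributes, 100, false)
  ```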
- receiver/awsecscontainermetrics: Add ephemeral storage metrics and fix unit strings from "Megabytes" to "MiB" (#46414)
  Adds two new task-level gauge metrics: `ecs.task.ephemeral_storage.utilized` and `ecs.task.ephemeral_storage.reserved` (in MiB).
  These metrics are available on AWS Fargate Linux platform version 1.4.0+ and represent the shared ephemeral storage for the entire task.
  Breaking change: the unit string for `ecs.task.memory.utilized`, `ecs.task.memory.reserved`, `container.memory.utilized`, and `container.memory.reserved` has been corrected from "Megabytes" to "MiB".
  The underlying values were already in MiB (computed via division by 1024*1024), but the unit label was incorrect.
  Users relying on the exact unit string (e.g. in metric filters or dashboards) will need to update accordingly.
- receiver/mysql: Set the default collection of query_sample to false (#46902)
- receiver/postgresql: Disable default collection of top_query and query_sample events. (#46843)
  This change is breaking because these events are no longer collected by default; enable them manually if still needed.
- receiver/redfish: The `system.host_name` and `base_url` resource attributes have been renamed to `host.name` and `url.full`, respectively. (#46236)
- receiver/windowseventlog: Change event_data from an array of single-key maps to a flat map by default, making fields directly accessible via OTTL. The previous format is available by setting `event_data_format: array`. (#42565, #32952)
  Named elements become direct keys (e.g., `body["event_data"]["ProcessId"]`).
  Anonymous elements use numbered keys: `param1`, `param2`, etc.
  To preserve the previous array format, set `event_data_format: array` in the receiver configuration.
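  Pinning the receiver to the old shape looks like this (the channel value is illustrative):

  ```yaml
  receivers:
    windowseventlog:
      channel: Security  # illustrative
      event_data_format: array  # restore the pre-change array of single-key maps
  ```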
🚩 Deprecations 🚩
- exporter/azure_blob: Introduce new snake-case-compliant name `azure_blob` (#46722)
- exporter/google_cloud_storage: Introduce new snake-case-compliant name `google_cloud_storage` (#46733)
- extension/aws_logs_encoding: Introduce new snake-case-compliant name `aws_logs_encoding` (#46776)
- extension/azure_auth: Introduce new snake-case-compliant name `azure_auth` (#46775)
- extension/cgroup_runtime: Introduce new snake-case-compliant name `cgroup_runtime` (#46773)
- extension/google_cloud_logentry_encoding: Introduce new snake-case-compliant name `google_cloud_logentry_encoding` (#46778)
- processor/metric_start_time: Introduce new snake-case-compliant name `metric_start_time` (#46777)
- receiver/azure_blob: Introduce new snake-case-compliant name `azure_blob` (#46721)
- receiver/azure_monitor: Introduce new snake-case-compliant name `azure_monitor` (#46730)
- receiver/cisco_os: Introduce new snake-case-compliant name `cisco_os` (#46948)
- receiver/macos_unified_logging: Introduce new snake-case-compliant name `macos_unified_logging` (#46729)
- receiver/prometheus_remote_write: Introduce new snake-case-compliant name `prometheus_remote_write` (#46726)
- receiver/yang_grpc: Introduce new snake-case-compliant name `yang_grpc` (#46723)
🚀 New components 🚀
- receiver/azure_functions: Introduce new component to receive logs from Azure Functions (#43507)
  This change includes only the overall structure, README, and configuration for the new component.
💡 Enhancements 💡
- cmd/opampsupervisor: Add configurable instance ID to the Supervisor (#45596)
- connector/signal_to_metrics: Add `sum.monotonic` property for improved counter handling (#45865)
- connector/spanmetrics: Add support for W3C tracestate-based adjusted count in span metrics with stochastic rounding (#45539)
  The span metrics connector now supports extracting sampling information from the W3C tracestate
  to generate extrapolated span metrics with adjusted counts. This enables accurate metric
  aggregation for sampled traces by computing stochastically rounded adjusted counts based on
  the sampling threshold (the `ot.th` field) in the tracestate. Key features include:
  - Stochastic rounding for fractional adjusted counts using integer-only operations
  - Single-entry cache for consecutive identical tracestates (4% overhead in benchmarks)
  - Support for mixed-mode services where some spans have tracestate and others don't
  - New `sampling.method` attribute to distinguish between adjusted and non-adjusted metrics
  - Histogram support for observing multiple events at once
  Performance characteristics:
  - ~4% overhead for traces with tracestate (3-span batch: 3684ns → 3829ns); overhead diminishes further with larger batches
  - Scales linearly with trace size (500 spans: 577µs → 581µs)
  - Zero allocations for common cases with caching enabled
- exporter/bmchelix: Enrich metric names with datapoint attributes for unique identification in BMC Helix Operations Management (#46558)
  This feature is controlled by the `enrich_metric_with_attributes` configuration option (default: `true`).
  Set it to `false` to disable enrichment and reduce metric cardinality.
  Normalization is applied to ensure BHOM compatibility:
  - `entityTypeId` and `entityName`: invalid characters replaced with underscores (colons are not allowed, as they are used as separators in entityId)
  - `metricName`: normalized to match the pattern `[a-zA-Z_:.][a-zA-Z0-9_:.]*`
  - Label values: commas replaced with whitespace
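  A hedged sketch of turning enrichment off (only `enrich_metric_with_attributes` is taken from this entry; other fields such as the endpoint are illustrative):

  ```yaml
  exporters:
    bmchelix:
      endpoint: https://helix.example.com  # illustrative
      # default is true; set to false to reduce metric cardinality
      enrich_metric_with_attributes: false
  ```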
- exporter/clickhouse: Add per-pipeline JSON support for the ClickHouse exporter; deprecate the JSON feature gate (#46553)
  Previously, the `clickhouse.json` feature gate was used to enable JSON for all
  ClickHouse exporter instances. This feature gate is now deprecated. Use the `json`
  config option instead, which allows per-pipeline control.
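  For example, JSON can now be toggled per exporter instance (and therefore per pipeline); the endpoints below are illustrative:

  ```yaml
  exporters:
    clickhouse/json:
      endpoint: tcp://clickhouse:9000  # illustrative
      json: true   # replaces the deprecated clickhouse.json feature gate
    clickhouse/map:
      endpoint: tcp://clickhouse:9000
      json: false
  ```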
- exporter/elasticsearch: Add per-document `dynamic_templates` for metrics in ECS mapping mode (#46499)
  Each bulk index action for ECS metrics now includes dynamic_templates so Elasticsearch can apply the correct
  mapping (e.g. histogram_metrics, summary_metrics, double_metrics) in ECS mapping mode. The OTel mapping mode already sent dynamic_templates.
- exporter/elasticsearch: Add `http.response.status_code` to failed-document logs to allow better filtering and error analysis. (#45829)
- exporter/elasticsearch: Update the ECS mode encoder to add conversions for `telemetry.sdk.language` and `telemetry.sdk.version` (#46690)
  The conversions map the semconv attributes `telemetry.sdk.language`/`telemetry.sdk.version` to `service.language.name`/`service.language.version`.
- extension/aws_logs_encoding: Adopt streaming for Network Firewall logs (#46214)
- extension/aws_logs_encoding: Adopt streaming for the CloudTrail signal (#46214)
- extension/aws_logs_encoding: Adopt the encoding extension streaming contract for WAF logs (#46214)
- extension/aws_logs_encoding: Adopt streaming for S3 access logs (#46214)
- extension/aws_logs_encoding: Adopt the encoding extension streaming contract for VPC flow logs (#46214)
- extension/aws_logs_encoding: Adopt the encoding extension streaming contract for CloudWatch Logs subscriptions (#46214)
- extension/aws_logs_encoding: Adopt streaming for the ELB signal (#46214)
- extension/awscloudwatchmetricstreams_encoding: Adopt the encoding extension streaming contract for OpenTelemetry v1 formatted metrics (#46214)
- extension/azure_encoding: Add `encoding.format` attribute to Azure logs to identify the log type (#44278)
- extension/azure_encoding: Promote the Azure Encoding extension to Alpha stability. (#46886)
- extension/azure_encoding: Add processing for Azure Metrics (#41725)
- extension/datadog: Set the `os.type` resource attribute if not already present for Fleet Automation metadata. (#46896)
- extension/headers_setter: Add support for file-based credentials via the `value_file` configuration option. Files are watched for changes and header values are automatically updated. (#46473)
  This is useful for credentials that are rotated, such as Kubernetes secrets.
  Example configuration:

  ```yaml
  headers_setter:
    headers:
      - key: X-API-Key
        value_file: /var/secrets/api-key
  ```
- extension/oidc: Add logging for failed authentication attempts with client IP and username. (#46482)
- internal/kafka: Add support for authentication via OIDC to the Kafka client. (#41872)
  It provides an implementation of SASL/OAUTHBEARER for Kafka components by
  integrating with auth extensions that provide OAuth2 tokens, such as oauth2clientauth.
  Token acquisition/refresh/exchange is controlled by the auth extensions. To use this, your configuration would look something like:

  ```yaml
  extensions:
    oauth2client:
      client_id_file: /path/to/client_id_file
      client_secret: /path/to/client_secret_file

  exporters:
    kafka:
      auth:
        sasl:
          mechanism: OAUTHBEARER
          oauthbearer_token_source: oauth2client
  ```
- pkg/azurelogs: Remove semconv v1.28.0 and v1.34.0 dependencies, migrating to v1.38.0 via paired feature gates (#45033, #45034)
  Two new alpha feature gates control the migration:
  - `pkg.translator.azurelogs.EmitV1LogConventions` emits the stable attribute names (`code.function.name`, `code.file.path`, `eventName` per log record).
  - `pkg.translator.azurelogs.DontEmitV0LogConventions` suppresses the old names (`code.function`, `code.filepath`, `event.name` on the resource).
  Both gates default to off; enable `EmitV1LogConventions` first for a dual-emit migration window.
- pkg/coreinternal: Add feature gates to migrate semconv v1.12.0 attributes to v1.38.0 equivalents in goldendataset (#45076)
  The following attribute keys from `go.opentelemetry.io/otel/semconv/v1.12.0` can now be migrated to their v1.38.0 equivalents
  using feature gates (both default to disabled, preserving the old behavior):
  - `net.host.ip` -> `network.local.address` (enable `internal.coreinternal.goldendataset.EmitV1NetworkConventions`)
  - `net.peer.ip` -> `network.peer.address` (enable `internal.coreinternal.goldendataset.EmitV1NetworkConventions`)
  - `http.host` -> `server.address` (enable `internal.coreinternal.goldendataset.EmitV1NetworkConventions`)
  - `http.server_name` -> `server.address` (enable `internal.coreinternal.goldendataset.EmitV1NetworkConventions`)
  To stop emitting the deprecated v1.12.0 attributes, also enable `internal.coreinternal.goldendataset.DontEmitV0NetworkConventions`
  (this requires `internal.coreinternal.goldendataset.EmitV1NetworkConventions` to also be enabled).
- pkg/fileconsumer: The filelog receiver's checkpoint storage now supports protobuf encoding behind a feature gate for improved performance and reduced storage usage (#43266)
  Added optional protobuf encoding for filelog checkpoint storage, providing ~7x faster decoding and 31% storage savings.
  Enable it with the feature gate: `--feature-gates=filelog.protobufCheckpointEncoding`
  The feature is in stage Alpha (disabled by default) and remains fully backward compatible with JSON checkpoints.
- pkg/ottl: Improve unsupported-type error diagnostics in the `Len()` OTTL function by including the runtime type in error messages. (#46476)
- pkg/stanza: Implement `if` field support for the recombine operator so entries not matching the condition pass through unrecombined. (#46048)
- pkg/zipkin: Add feature gates to migrate semconv v1.12.0 attributes to v1.38.0 equivalents (#45076)
  The following attribute keys from `go.opentelemetry.io/otel/semconv/v1.12.0` can now be migrated to their v1.38.0 equivalents
  using feature gates (both default to disabled, preserving the old behavior):
  - `net.host.ip` -> `network.local.address` (enable `pkg.translator.zipkin.EmitV1NetworkConventions`)
  - `net.peer.ip` -> `network.peer.address` (enable `pkg.translator.zipkin.EmitV1NetworkConventions`)
  To stop emitting the deprecated v1.12.0 attributes, also enable `pkg.translator.zipkin.DontEmitV0NetworkConventions`
  (this requires `pkg.translator.zipkin.EmitV1NetworkConventions` to also be enabled).
- processor/k8s_attributes: Log a warning when deprecated attributes are enabled (#46932)
- processor/k8s_attributes: Bump the semconv version to 1.40 (#46644)
- processor/redaction: Document the audit-trail attributes emitted when `summary` is set to `debug` or `info` (#46648)
  Adds an Audit Trail section to the README describing the diagnostic attributes
  the processor appends to spans, log records, and metric datapoints, including
  a worked example. Also fixes the example output to omit zero-count attributes
  that are never emitted, and restores URL Sanitization and Span Name Sanitization
  as top-level README sections.
- receiver/aerospike: Enable the re-aggregation feature for the aerospike receiver (#46347)
- receiver/awslambda: Adopt encoding extension streaming for the AWS Lambda receiver (#46608)
- receiver/awslambda: Promote the AWS Lambda receiver to Alpha stability. (#46888)
- receiver/cisco_os: Add the cisco_os receiver to the contrib distribution (#46948)
- receiver/cloudflare: Add `max_request_body_size` config option. (#46630)
- receiver/docker_stats: Enables dynamic metric reaggregation in the Docker Stats receiver. This does not break existing configuration files. (#45396)
- receiver/filelog: Add `include_file_permissions` option (#46504)
- receiver/flinkmetrics: Enable the re-aggregation feature by classifying attributes with requirement_level and setting reaggregation_enabled to true (#46356)
  Attributes are classified as required when aggregating across them produces meaningless results
  (checkpoint, garbage_collector_name, record), and as recommended when totals remain operationally
  meaningful (operator_name).
- receiver/github: Enables dynamic metric reaggregation in the GitHub receiver. This does not break existing configuration files. (#46385)
- receiver/haproxy: Add `haproxy.server.state` resource attribute to expose server status (UP, DOWN, MAINT, etc.) (#46799)
  The new resource attribute is disabled by default and can be enabled via configuration.
- receiver/hostmetrics: Enable dynamic metric reaggregation for the cpu scraper in the hostmetrics receiver. (#46386)
- receiver/hostmetrics: Enable the re-aggregation feature for the memory scraper to support dynamic metric attribute configuration at runtime. (#46618)
- receiver/hostmetrics: Enable the re-aggregation feature for the load scraper by setting `reaggregation_enabled`. (#46617)
- receiver/hostmetrics: Enable metric re-aggregation for the paging scraper. (#46386, #46621)
- receiver/hostmetrics: Enables re-aggregation for the nfs scraper (#46386, #46620)
- receiver/hostmetrics: Enable the re-aggregation feature for the filesystem scraper by setting `reaggregation_enabled` and adding `requirement_level` to attributes. (#46616)
- receiver/hostmetrics: Enable re-aggregation for the processes scraper (#46622)
  Enabled the reaggregation feature gate for the processes scraper and set the status attribute requirement level to recommended.
- receiver/hostmetrics: Enable the re-aggregation feature for the disk scraper by setting `reaggregation_enabled` and adding `requirement_level` to attributes. (#46615)
- receiver/hostmetrics: Enable the re-aggregation feature for the network scraper by setting `reaggregation_enabled` and adding `requirement_level` to attributes. (#46619)
- receiver/iis: Enable re-aggregation and set requirement levels for attributes. (#46360)
- receiver/kafka: Add `kafka.topic`, `kafka.partition`, and `kafka.offset` to client metadata (#45931)
- receiver/kafkametrics: Enable the re-aggregation feature for the kafkametrics receiver to support dynamic metric attribute configuration at runtime. (#46362)
- receiver/mysql: Enables dynamic metric reaggregation in the MySQL receiver. This does not break existing configuration files. (#45396)
- receiver/oracledb: Add `oracledb.procedure_execution_count` attribute to top query events for stored-procedure execution tracking (#46487)
  This value is derived from MAX(EXECUTIONS) across all SQL statements
  sharing the same PROGRAM_ID in V$SQL, providing
  an accurate procedure-level execution count even for multi-statement stored procedures.
- receiver/oracledb: Add `oracledb.command_type` attribute to the Top-Query collection. (#46838)
- receiver/podman_stats: Enable dynamic metric reaggregation in the Podman receiver. (#46372)
- receiver/postgresql: Enables dynamic metric reaggregation in the PostgreSQL receiver. This does not break existing configuration files. (#45396)
- receiver/pprof: Promote to alpha (#46925)
- receiver/pprof: Read pprof data from HTTP remote endpoints or the collector itself (#38260)
- receiver/prometheus: Graduate the `receiver.prometheusreceiver.RemoveReportExtraScrapeMetricsConfig` feature gate to stable; deprecate the `receiver.prometheusreceiver.EnableReportExtraScrapeMetrics` feature gate (#44181)
  The `report_extra_scrape_metrics` configuration option is now fully ignored; remove it from your configuration to avoid crashes.
  The `receiver.prometheusreceiver.EnableReportExtraScrapeMetrics` feature gate is deprecated and will be removed in v0.148.0; use the `extra_scrape_metrics` Prometheus scrape configuration option instead.
- receiver/rabbitmq: Enable dynamic metric reaggregation in the RabbitMQ receiver. (#46374)
- receiver/redis: Enable dynamic metric reaggregation in the Redis receiver. (#46376)
- receiver/riak: Enable re-aggregation and set requirement levels for attributes. (#46377)
- receiver/snowflake: Bump the Go Snowflake Driver to v2 (#46598)
- receiver/sqlquery: Bump the Go Snowflake Driver to v2 (#46598)
- receiver/sqlserver: Add `sqlserver.procedure_execution_count` attribute to the Top-Query collection. (#46486)
- receiver/statsd: Add `counter_type` configuration option to control how counter values are represented (int, float, or stochastic_int) (#45276)
- receiver/systemd: Enable dynamic metric reaggregation in the systemd receiver. (#46381)
- receiver/tcplog: Add default values for `retry_on_failure` and update the documentation (#41571)
- receiver/vcenter: Enable the re-aggregation feature for vcenter receiver metrics (#46384)
- receiver/windowseventlog: Add SID resolution to automatically resolve Windows Security Identifiers to user and group names (#45875)
  Added a new `resolve_sids` configuration option with configurable cache size and TTL.
  When enabled, Windows Security Identifiers (SIDs) in event logs are automatically resolved to human-readable names using the Windows LSA API.
  Includes support for well-known SIDs, domain users and groups, and high-performance LRU caching for improved throughput.
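  A heavily hedged sketch of enabling SID resolution (only the `resolve_sids` option name comes from this entry; the nested shape and the cache field names are hypothetical, so consult the receiver README for the real layout):

  ```yaml
  receivers:
    windowseventlog:
      channel: Security
      resolve_sids:        # nested form is hypothetical
        enabled: true
        cache_size: 1000   # hypothetical field name
        cache_ttl: 5m      # hypothetical field name
  ```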
🧰 Bug fixes 🧰
- exporter/elasticsearch: Set `require_data_stream=true` for ECS mapping mode and improve guidance on Elasticsearch version compatibility. (#46632)
- exporter/elasticsearch: Fix a retry exponential backoff overflow edge case (#46178)
  The retry delay growth now guards against duration overflow while preserving
  exponential backoff with jitter, so retries cap correctly at the configured
  max interval even for large attempt counts.
- exporter/kafka: Validate that `topic_from_metadata_key` is present in `include_metadata_keys` when configured, with clear config validation errors. (#46711)
- exporter/kafka: Add `MergeCtx` to preserve `include_metadata_keys` when batching is enabled. (#46718)
- exporter/signalfx: Include inactive memory in the memory total (#46474)
- extension/bearertokenauth: Fix bearer token auth rejecting custom headers in HTTP requests unless specified in canonical form (#45697)
- extension/file_storage: Fix a nil-pointer crash when the bbolt reopen fails during on_rebound compaction (#46489)
- extension/google_cloud_logentry_encoding: Fix incorrect snake_case conversion for keys containing numbers (e.g., "k8s" becoming "k8_s") in Google Cloud log entries. (#46571)
- internal/metadataproviders: Fix an HTTP response body leak in the OpenShift metadata provider and add status code validation (#46921)
  Three methods in the OpenShift metadata provider (OpenShiftClusterVersion, Infrastructure,
  K8SClusterVersion) were not closing the HTTP response body after making requests. This leaked
  HTTP connections and file descriptors, which could exhaust the connection pool over time when
  periodic refresh is enabled. The fix adds `defer resp.Body.Close()` and validates the HTTP
  status code before attempting to decode the response.
- processor/resourcedetection: Fix the consul detector's `token_file` setting using the file path as the literal token value instead of configuring the consul SDK to read the file (#46745)
  When `token_file` was configured, the file path string
  was assigned to `api.Config.Token` instead of `api.Config.TokenFile`,
  causing the consul API client to use the path as the authentication
  token (always resulting in 403 Forbidden).
- processor/resourcedetection: Fix a collector panic on shutdown when the same processor is used in multiple pipelines with `refresh_interval` enabled. (#46918)
- receiver/datadog: Preserve the original per-span service name when `_dd.base_service` overrides the resource-level service name, so the DD exporter can recover it on the DD-to-OTel-to-DD roundtrip path. (#1909)
- receiver/mysql: Fix an incorrect JOIN condition in querySample.tmpl that compared `thread.thread_id` to `processlist.id` instead of using the correct foreign key `thread.processlist_id`. (#46548)
  The LEFT JOIN with information_schema.processlist used a join condition that failed to properly correlate rows between the performance_schema.threads and information_schema.processlist tables. The fix changes the join condition from `processlist.id = thread.thread_id` to `processlist.id = thread.processlist_id` to use the correct foreign key relationship.
- receiver/oracledb: Fix top_query reporting an incorrect procedure execution count. (#46869)
  The procedure execution count is now calculated using MIN(EXECUTIONS) instead of MAX(EXECUTIONS), improving best-effort accuracy.
- receiver/postgresql: Fix EXPLAIN plan collection failing on DDL statements (GRANT, DROP, REVOKE, etc.) (#46274)
  PostgreSQL does not support EXPLAIN on DDL statements. The receiver now filters queries
  using a whitelist approach, only running EXPLAIN on supported DML statements (SELECT,
  INSERT, UPDATE, DELETE, WITH, MERGE, TABLE, VALUES).
- receiver/prometheus: Return a stable SeriesRef from AppendHistogram for correct per-series staleness tracking (#44528)
- receiver/prometheus: Validate the target allocator interval during configuration to prevent a runtime panic when the interval is set to 0 or a negative value. (#46700)
  Previously, setting `interval` to 0s or a negative value would cause a runtime panic when
  `time.NewTicker()` was called with an invalid duration. The configuration is now validated
  early to prevent this panic and provide a clear error message.
- receiver/sqlserver: Fix top_query reporting duplicate rows for a procedure that has more than one statement. (#46483)
  The dbQueryAndTextQuery.tmpl template joined the aggregated CTE rows back to sys.dm_exec_query_stats using only plan_handle. But plan_handle is not unique in that DMV:
  it identifies a plan, and a single plan can contain multiple statements (each with its own row in sys.dm_exec_query_stats,
  differentiated by statement_start_offset/statement_end_offset). As a result, duplicate rows were produced for procedures with more than one statement.
- receiver/sqlserver: Fix the `host.name` resource attribute to be correctly extracted from the `datasource` configuration when `server` is not set (#42355)
  When using the `datasource` configuration option, the `host.name` resource attribute is now
  properly parsed from the datasource connection string instead of being left empty or set to an incorrect value.
- receiver/sqlserver: Add missing `host.name` for logs when using the `datasource` configuration (#46740)
- receiver/windowseventlog: Strip illegal XML 1.0 characters (e.g. U+0001) from event data before parsing to prevent parse failures on Sysmon Operational events. (#46435)
  Some Sysmon events embed control characters (e.g. U+0001) in fields such as FileVersion.
  Go's encoding/xml rejects these as illegal XML 1.0 characters, causing an error for every
  affected event. These characters are now silently stripped before parsing.
API Changelog
💡 Enhancements 💡
- pkg/azurelogs: Remove semconv v1.28.0 and v1.34.0 dependencies, migrating to v1.38.0 via paired feature gates (#45033, #45034)
  Two new alpha feature gates control the migration:
  - `pkg.translator.azurelogs.EmitV1LogConventions` emits the stable attribute names (`code.function.name`, `code.file.path`, `eventName` per log record).
  - `pkg.translator.azurelogs.DontEmitV0LogConventions` suppresses the old names (`code.function`, `code.filepath`, `event.name` on the resource).
  Both gates default to off; enable `EmitV1LogConventions` first for a dual-emit migration window.
- pkg/datadog: Expose a feature gate to infer intervals for delta metrics. (#46851)
- pkg/xstreamencoding: Add stream decoding adapters for unmarshaler interfaces (#46754)
- processor/tail_sampling: Add hooks called when a sampling decision is made for a trace. (#46161)
- receiver/github: Enables dynamic metric reaggregation in the GitHub receiver. This does not break existing configuration files. (#46385)
We are thrilled to welcome our first-time contributors to this project. Thank you for your contributions @rite7sh, @strawgate, @aabhinavvvvvvv, @thisteensy, @rgoomar, @esosaoh, @rluidash, @dakshhhhh16, @tetianakravchenko, @Hiruma31, @Juoper, @ebrdarSplunk, @Thitipong-PP, @57Ajay, @AsishRaju, @markobachvarovski, @richscott, @must108, @postnati, @hawkaii, @Garbett1, @Xepheryy, @neilkuan, @Shawn-Dong, @orestisfl ! 🎉