AWS for Fluent Bit 2.26.0



This release includes:

  • An Amazon Linux 2 Base
  • Fluent Bit 1.9.4
  • Amazon CloudWatch Logs for Fluent Bit 1.8.0
  • Amazon Kinesis Streams for Fluent Bit 1.9.0
  • Amazon Kinesis Firehose for Fluent Bit 1.6.1

Compared to 2.25.1 this release adds:

  • Feature - Add auto_create_stream option cloudwatch:257
  • Feature - Enable Apache Arrow support in S3 at compile time s3:3184
  • Enhancement - Add debug logs to check batch sizes fluentbit:5428
  • Enhancement - Set 1 worker as default for cloudwatch_logs plugin fluentbit:5417
  • Bug - Allow recovery from a stream being deleted and created by a user cloudwatch:257
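
The new auto_create_stream option lands in the Go cloudwatch plugin (the repo tracked by cloudwatch:257). A minimal configuration sketch is below; the group and stream names are hypothetical placeholders, and the option spelling follows the linked issue, so verify it against the plugin README before relying on it:

```ini
# Sketch of a Fluent Bit OUTPUT section for the Go "cloudwatch" plugin.
# example-group / example-stream are placeholder names, not defaults.
[OUTPUT]
    Name               cloudwatch
    Match              *
    region             us-east-1
    log_group_name     example-group
    log_stream_name    example-stream
    auto_create_group  true
    # New in this release: control automatic creation of the log stream.
    auto_create_stream true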

As in 2.25.1, this release includes the following enhancement for AWS customers, which has been accepted upstream:

  • Enhancement - Add kube_token_ttl option to kubernetes filter to support refreshing the service account token used to talk to the API server. Prior to this change Fluent Bit would only read the token on startup. fluentbit:5332
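
A sketch of how the token-refresh enhancement is configured, assuming the upstream option name Kube_Token_TTL on the kubernetes filter (the Match pattern here is an illustrative assumption):

```ini
# Sketch: kubernetes filter with a service account token TTL,
# so Fluent Bit re-reads the token instead of only loading it at startup.
[FILTER]
    Name            kubernetes
    Match           kube.*
    # Refresh the service account token every 600 seconds (illustrative value).
    Kube_Token_TTL  600
```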

We’ve run the newly released image in our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.

| plugin | metric | 20Mb/s | 25Mb/s | 30Mb/s |
| --- | --- | --- | --- | --- |
| kinesis_firehose | Log Loss | ✅ | ✅ | 0%(839) |
| kinesis_firehose | Log Duplication | ✅ | ✅ | ✅ |
| kinesis_streams | Log Loss | ✅ | ✅ | ✅ |
| kinesis_streams | Log Duplication | ✅ | ✅ | ✅ |
| s3 | Log Loss | ✅ | ✅ | ✅ |
| s3 | Log Duplication | ✅ | ✅ | ✅ |

| plugin | metric | 1Mb/s | 2Mb/s | 3Mb/s |
| --- | --- | --- | --- | --- |
| cloudwatch_logs | Log Loss | ✅ | ✅ | ✅ |
| cloudwatch_logs | Log Duplication | ✅ | ✅ | ✅ |

Note:

  • The green check ✅ in the table means no log loss or no log duplication.
  • A number in parentheses gives the number of affected records out of the total. For example, 0%(1064/1.8M) under 30Mb/s throughput means 1064 duplicate records out of 1.8M input records, for a log duplication percentage of 0%.
  • For CloudWatch output, 1 Mb/s is the only throughput at which we consistently see no log loss. At 2 Mb/s and beyond, we occasionally see some log loss and throttling.
  • Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by retried batches that partially succeeded, which makes it random.
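
The percentages above follow directly from the counts in parentheses. A small sketch of the arithmetic, using the example figures from the note (the rounding convention to whole percents is an assumption based on how the table is written):

```python
# Sketch: derive the loss/duplication percentage shown in the table
# from an affected-record count and the total input records.
def rate_percent(affected: int, total: int) -> int:
    """Percentage of affected records, rounded to a whole percent."""
    return round(100 * affected / total)

# Example from the notes: 1064 duplicate records out of 1.8M inputs
# rounds down to a 0% duplication rate.
print(f"{rate_percent(1064, 1_800_000)}%(1064/1.8M)")
```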
