2.23.2
This release includes:
- An Amazon Linux 2 Base
- Fluent Bit 1.8.14
- Amazon CloudWatch Logs for Fluent Bit 1.7.0
- Amazon Kinesis Streams for Fluent Bit 1.9.0
- Amazon Kinesis Firehose for Fluent Bit 1.6.1
Compared to 2.23.1, this release adds:
- Enhancement - Mitigate log group throttling issue in the cloudwatch_logs plugin fluentbit:4826 (see the example output section below)
Same as 2.23.1, this release includes the following fix for AWS customers that we are working on getting accepted upstream:
- Bug - Resolve IMDSv1 fallback error introduced in 2.21.0 aws-for-fluent-bit:259
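For reference, a minimal output section for the cloudwatch_logs plugin mentioned above might look like the sketch below; the region, log group, and log stream names are placeholders rather than values taken from this release, and the throttling mitigation applies regardless of these settings.

```
[OUTPUT]
    # Core Fluent Bit CloudWatch output plugin (cloudwatch_logs)
    Name              cloudwatch_logs
    Match             *
    # Placeholder values; adjust to your environment
    region            us-east-1
    log_group_name    example-log-group
    log_stream_name   example-log-stream
    auto_create_group On
```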
We ran the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads.
| plugin | | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|
| kinesis_firehose | Log Loss | ➖ | 0%(341) * | ➖ |
| | Log Duplication | ➖ | ➖ | 0%(500) |
| kinesis_streams | Log Loss | ➖ | 0%(3824) | ➖ |
| | Log Duplication | 0%(1819) | ➖ | ➖ |
| s3 | Log Loss | ➖ | ➖ | ➖ |
| | Log Duplication | ➖ | ➖ | ➖ |
| plugin | | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|
| cloudwatch_logs | Log Loss | 2%(17084) | 0%(1671) | 0%(2678) |
| | Log Duplication | ➖ | 7%(77539) | 6%(80560) |
Note:
- The number in parentheses is the number of affected records out of the total records sent. For example, 0%(341) means 341 duplicate records out of 15M input records (341 / 15,000,000 ≈ 0.002%), which rounds to a 0% log duplication percentage.
- CloudWatch has its own throughput limit for a single log stream. Based on our tests, throttling starts to appear once the input load exceeds 1 MB/s per stream.
- Log loss is the percentage of data lost and log duplication is the percentage of duplicate logs received at the destination. Your results may differ because they can be influenced by many factors, such as different configurations and environment settings. Log duplication is caused exclusively by retried batches that only partially succeeded, which means it is random.
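Regarding the per-stream throughput note above, one way to spread load across multiple streams is to use log_stream_prefix (one log stream per tag) instead of a single fixed log_stream_name. The sketch below is only an illustration with placeholder names, not a configuration used in these benchmarks.

```
[OUTPUT]
    Name              cloudwatch_logs
    Match             app.*
    region            us-east-1
    log_group_name    example-log-group
    # One log stream per tag: the stream name is the prefix plus the tag,
    # so input above ~1 MB/s is split across multiple streams instead of one
    log_stream_prefix from-fluent-bit-
    auto_create_group On
```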