2.31.9
This release includes:
- Fluent Bit 1.9.10
- Amazon CloudWatch Logs for Fluent Bit 1.9.3
- Amazon Kinesis Streams for Fluent Bit 1.10.2
- Amazon Kinesis Firehose for Fluent Bit 1.7.2
Compared to 2.31.8, this release adds:
- Enhancement - Add a clear info message when chunks are removed because `storage.total_limit_size` is reached fluent-bit:6719
- Bug - Fix S3 ARN parsing in the init image that prevented it from being used in the US Gov Cloud and China partitions aws-for-fluent-bit:617
- Bug - Fix a SIGSEGV on shutdown when multiple instances of the same Go plugin are configured aws-for-fluent-bit:613
- Bug - Fix an off-by-one error that could lead to SDS string truncation fluent-bit:7143
- Bug - Fix a minor memory leak in cloudwatch_logs that leaked no more than ~1 KB of un-freed memory when the `log_stream_name` option is configured
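For context, both options mentioned above are ordinary Fluent Bit configuration keys. The sketch below shows where they live in a classic-mode config; the paths, region, and log group/stream names are placeholders, not values from this release:

```
[SERVICE]
    # Filesystem buffering root; required for storage.total_limit_size to apply
    storage.path  /var/log/flb-storage/

[INPUT]
    Name          tail
    Path          /var/log/app/*.log
    storage.type  filesystem

[OUTPUT]
    Name                      cloudwatch_logs
    Match                     *
    region                    us-east-1
    log_group_name            my-log-group
    # Option whose minor (~1 KB) memory leak is fixed in this release
    log_stream_name           my-log-stream
    # Cap on buffered chunks for this output; when the cap is hit, the oldest
    # chunks are dropped, and this release now logs a clear info message
    storage.total_limit_size  50M
```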
We ran the newly released image through our ECS load-testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅ |
| kinesis_firehose | stdstream | Log Duplication | ✅ | 0%(3000) | 0%(500) |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | 0%(9623) |
| kinesis_firehose | tcp | Log Duplication | ✅ | ✅ | 0%(16500) |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | ✅ |
| kinesis_streams | stdstream | Log Duplication | ✅ | ✅ | 0%(21162) |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | 0%(5850) |
| kinesis_streams | tcp | Log Duplication | ✅ | ✅ | 0%(62574) |
| s3 | stdstream | Log Loss | ✅ | ✅ | ✅ |
| s3 | stdstream | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | ✅ |
| s3 | tcp | Log Duplication | ✅ | ✅ | ✅ |
| plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|---|
| cloudwatch_logs | stdstream | Log Loss | ✅ | ✅ | ✅ |
| cloudwatch_logs | stdstream | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | ✅ | 1%(15080) | ✅ |
| cloudwatch_logs | tcp | Log Duplication | ✅ | ✅ | ✅ |
Note:
- The green check ✅ in the table means no log loss or log duplication.
- A number in parentheses is the count of affected records out of the total records. For example, 0%(1064/1.8M) under 30 MB/s throughput means 1064 duplicate records out of 1.8 million input records, which rounds to a 0% duplication rate.
- Log loss is the percentage of records lost, and log duplication is the percentage of duplicate records received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by retried batches that had partially succeeded, so it occurs at random.
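To make the table values concrete, here is a minimal sketch of how the percentages can be derived from sent and received record counts. This is an illustration of the arithmetic, not the actual load-test code; the function names are placeholders:

```python
def log_loss_pct(sent: int, received_unique: int) -> float:
    """Percentage of input records never received at the destination."""
    return 100.0 * (sent - received_unique) / sent

def log_duplication_pct(sent: int, duplicates: int) -> float:
    """Percentage of duplicate records relative to total input records."""
    return 100.0 * duplicates / sent

# Example from the note above: 1064 duplicates out of 1.8M input records
pct = log_duplication_pct(1_800_000, 1064)
print(f"{pct:.0f}%({1064})")  # rounds to 0%, shown as 0%(1064) in the table
```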