# AWS for Fluent Bit 2.24.0

## Changelog

### 2.24.0
This release includes:
- An Amazon Linux 2 Base
- Fluent Bit 1.9.3
- Amazon CloudWatch Logs for Fluent Bit 1.7.0
- Amazon Kinesis Streams for Fluent Bit 1.9.0
- Amazon Kinesis Firehose for Fluent Bit 1.6.1
Compared to 2.23.4, this release adds:
- Bug - Resolve IMDSv1 fallback error introduced in 2.21.0 aws-for-fluent-bit:259
- Bug - CloudWatch Logs: fix integer overflow on 32-bit systems when converting tv_sec to milliseconds fluentbit:3640
- Enhancement - Only create a CloudWatch Logs log group if it does not already exist, to prevent throttling fluentbit:4826
- Enhancement - Implement Docker log driver partial message support for multiline buffered mode aws-for-fluent-bit:25, partial message mode example
- Enhancement - Gracefully handle CloudWatch Logs DataAlreadyAcceptedException fluentbit:4948
- Feature - Add SigV4 authentication options to the HTTP output plugin fluentbit:5165
- Feature - Add Firehose compression configuration options fluentbit:4371
- New Plugin - opensearch plugin in Fluent Bit core
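The new options above can be exercised together in a single pipeline. The snippet below is a sketch, not a tested configuration: hosts, regions, and stream names are placeholders, and the option names (`aws_auth`/`aws_service` on the `http` output, `compression` on `kinesis_firehose`, and the multiline filter's `partial_message` mode) should be verified against the upstream Fluent Bit documentation for your version:

```ini
# Sketch only: placeholder endpoint, region, and stream values.

[FILTER]
    name                  multiline
    match                 *
    multiline.key_content log
    # Reassemble Docker log driver partial messages (buffered mode)
    mode                  partial_message

[OUTPUT]
    name        http
    match       app.*
    host        example.execute-api.us-west-2.amazonaws.com
    port        443
    tls         on
    aws_auth    on            # new: sign requests with AWS SigV4
    aws_service execute-api
    aws_region  us-west-2

[OUTPUT]
    name            kinesis_firehose
    match           app.*
    region          us-west-2
    delivery_stream my-delivery-stream
    compression     gzip      # new: compress records before sending
```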
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads.
| plugin | | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|
| kinesis_firehose | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | ✅ | ✅ | ✅ |
| kinesis_streams | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | ✅ | ✅ | 0% (1000/1.8M) |
| s3 | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | ✅ | ✅ | ✅ |
| plugin | | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|
| cloudwatch_logs | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | ✅ | ✅ | ✅ |
Note:
- A green check ✅ in the table means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records out of the total. For example, 0% (1064) under 30 MB/s throughput means 1064 duplicate records out of 18M input records, for which the log duplication percentage rounds to 0%.
- For the CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by retried batches that partially succeeded, which makes it random.
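To see why a nonzero duplicate count can still be reported as 0%, the percentage is simply duplicates divided by total input records. A quick check using the example figures from the note above (1064 duplicates out of 18M records):

```python
# Duplication percentage = duplicate records / total input records.
# Example figures from the note above: 1064 duplicates out of 18M records.
duplicates = 1064
total_records = 18_000_000

pct = duplicates / total_records * 100
print(f"{pct:.4f}%")  # well under 0.01%, so the table reports it as 0%
```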