Releases: aws/aws-for-fluent-bit
AWS for Fluent Bit 2.28.3
2.28.3
This release includes:
- Fluent Bit 1.9.9
- Amazon CloudWatch Logs for Fluent Bit 1.9.0
- Amazon Kinesis Streams for Fluent Bit 1.10.0
- Amazon Kinesis Firehose for Fluent Bit 1.7.0
Important Note:
- A security vulnerability was found in Go (golang), which we use to build our Go plugins. This new image builds the Go plugins with the latest Go release and resolves the CVE.
We’ve run the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads. Learn more about the load test.
plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s
---|---|---|---|---|---
kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅
kinesis_firehose | stdstream | Log Duplication | ✅ | 0%(2695) | ✅
kinesis_firehose | tcp | Log Loss | ✅ | ✅ | ✅
kinesis_firehose | tcp | Log Duplication | ✅ | ✅ | 0%(20582)
kinesis_streams | stdstream | Log Loss | ✅ | ✅ | ✅
kinesis_streams | stdstream | Log Duplication | ✅ | ✅ | 0%(1000)
kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅
kinesis_streams | tcp | Log Duplication | ✅ | ✅ | 0%(500)
s3 | stdstream | Log Loss | ✅ | ✅ | ✅
s3 | stdstream | Log Duplication | ✅ | ✅ | ✅
s3 | tcp | Log Loss | ✅ | ✅ | ✅
s3 | tcp | Log Duplication | ✅ | ✅ | ✅
plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s
---|---|---|---|---|---
cloudwatch_logs | stdstream | Log Loss | ✅ | ✅ | 11%(200028)
cloudwatch_logs | stdstream | Log Duplication | ✅ | ✅ | ✅
cloudwatch_logs | tcp | Log Loss | ✅ | 5%(66516) | 33%(599933)
cloudwatch_logs | tcp | Log Duplication | ✅ | ✅ | ✅
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- The number in parentheses is the count of affected records out of the total records. For example, 0%(1064/1.8M) at 30 MB/s throughput means 1064 duplicate records out of 1.8M input records, so the log duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they can be influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so its occurrence is random.
AWS for Fluent Bit 2.28.2
2.28.2
This release includes:
- Fluent Bit 1.9.9
- Amazon CloudWatch Logs for Fluent Bit 1.9.0
- Amazon Kinesis Streams for Fluent Bit 1.10.0
- Amazon Kinesis Firehose for Fluent Bit 1.7.0
Compared to 2.28.1, this release adds:
- Bug - Stop trace_error from truncating the OpenSearch API call response fluentbit:5788
We’ve run the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads. Learn more about the load test.
plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s
---|---|---|---|---|---
kinesis_firehose | stdstream | Log Loss | 0%(339) | ✅ | 0%(10173)
kinesis_firehose | stdstream | Log Duplication | ✅ | 0%(5210) | 1%(358871)
kinesis_firehose | tcp | Log Loss | ✅ | ✅ | ✅
kinesis_firehose | tcp | Log Duplication | ✅ | 0%(964) | 0%(16734)
kinesis_streams | stdstream | Log Loss | ✅ | ✅ | ✅
kinesis_streams | stdstream | Log Duplication | ✅ | ✅ | ✅
kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅
kinesis_streams | tcp | Log Duplication | 0%(32586) | 0%(37918) | 0%(25494)
s3 | stdstream | Log Loss | ✅ | ✅ | ✅
s3 | stdstream | Log Duplication | ✅ | ✅ | ✅
s3 | tcp | Log Loss | ✅ | ✅ | ✅
s3 | tcp | Log Duplication | ✅ | ✅ | ✅
plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s
---|---|---|---|---|---
cloudwatch_logs | stdstream | Log Loss | ✅ | ✅ | ✅
cloudwatch_logs | stdstream | Log Duplication | ✅ | ✅ | ✅
cloudwatch_logs | tcp | Log Loss | ✅ | ✅ | 0%(1370)
cloudwatch_logs | tcp | Log Duplication | ✅ | ✅ | ✅
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- The number in parentheses is the count of affected records out of the total records. For example, 0%(1064/1.8M) at 30 MB/s throughput means 1064 duplicate records out of 1.8M input records, so the log duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they can be influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so its occurrence is random.
AWS for Fluent Bit 2.28.1
2.28.1
This release includes:
- Fluent Bit 1.9.8
- Amazon CloudWatch Logs for Fluent Bit 1.9.0
- Amazon Kinesis Streams for Fluent Bit 1.10.0
- Amazon Kinesis Firehose for Fluent Bit 1.7.0
Compared to 2.28.0, this release adds the following fix, which we are working on getting accepted upstream:
- Bug - Resolve long tag segfault issue. Without this patch, Fluent Bit may segfault if it encounters tags over 256 characters in length. fluentbit:5753
We’ve run the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads. Learn more about the load test.
plugin | metric | 20 MB/s | 25 MB/s | 30 MB/s
---|---|---|---|---
kinesis_firehose | Log Loss | ✅ | ✅ | ✅
kinesis_firehose | Log Duplication | ✅ | 0%(500) | ✅
kinesis_streams | Log Loss | ✅ | ✅ | ✅
kinesis_streams | Log Duplication | 0%(23000) | ✅ | 0%(30996)
s3 | Log Loss | ✅ | ✅ | ✅
s3 | Log Duplication | ✅ | ✅ | ✅
plugin | metric | 1 MB/s | 2 MB/s | 3 MB/s
---|---|---|---|---
cloudwatch_logs | Log Loss | ✅ | ✅ | ✅
cloudwatch_logs | Log Duplication | ✅ | ✅ | ✅
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- The number in parentheses is the count of affected records out of the total records. For example, 0%(1064/1.8M) at 30 MB/s throughput means 1064 duplicate records out of 1.8M input records, so the log duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they can be influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so its occurrence is random.
AWS for Fluent Bit 2.28.0
2.28.0
This release includes:
- Fluent Bit 1.9.7
- Amazon CloudWatch Logs for Fluent Bit 1.9.0
- Amazon Kinesis Streams for Fluent Bit 1.10.0
- Amazon Kinesis Firehose for Fluent Bit 1.7.0
AWS For Fluent Bit New Feature Announcement:
- New Image Tags - Added `init` tagged images with an init process that downloads multiple config and parser files from S3 and sets ECS metadata as env vars. Check out the docs for the new Fluent Bit ECS Init Image tags.
Compared to 2.27.0, this release adds:
- Feature - Add gzip compression support for multipart uploads in the S3 output plugin (see the example config after this list)
- Bug - Fix inconsistent rendering of `$TAG[n]` in S3 output key formatting aws-for-fluent-bit:376
- Bug - Fix concurrency issue in S3 key formatting
- Bug - `cloudwatch_logs` plugin fix to skip counting empty events
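As a rough sketch of the new gzip support, an S3 output that streams data via multipart uploads (`use_put_object` off) could enable it as follows; the bucket name and region are placeholders, and the remaining options are the standard `s3` output settings from the Fluent Bit docs:

```
[OUTPUT]
    Name              s3
    Match             *
    region            us-east-1
    bucket            my-example-bucket
    # Multipart uploads (use_put_object Off) can now be gzip-compressed
    use_put_object    Off
    compression       gzip
    total_file_size   50M
    upload_timeout    10m
```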
We’ve run the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads. Learn more about the load test.
plugin | metric | 20 MB/s | 25 MB/s | 30 MB/s
---|---|---|---|---
kinesis_firehose | Log Loss | ✅ | ✅ | ✅
kinesis_firehose | Log Duplication | 0%(920) | ✅ | ✅
kinesis_streams | Log Loss | ✅ | ✅ | ✅
kinesis_streams | Log Duplication | 0%(500) | 0%(1000) | 0%(500)
s3 | Log Loss | ✅ | ✅ | ✅
s3 | Log Duplication | ✅ | ✅ | ✅
plugin | metric | 1 MB/s | 2 MB/s | 3 MB/s
---|---|---|---|---
cloudwatch_logs | Log Loss | ✅ | ✅ | ✅
cloudwatch_logs | Log Duplication | ✅ | ✅ | 2%(53893)
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- The number in parentheses is the count of affected records out of the total records. For example, 0%(1064/1.8M) at 30 MB/s throughput means 1064 duplicate records out of 1.8M input records, so the log duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they can be influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so its occurrence is random.
AWS for Fluent Bit 2.27.0
2.27.0
This release includes:
- An Amazon Linux 2 Base
- Fluent Bit 1.9.6
- Amazon CloudWatch Logs for Fluent Bit 1.9.0
- Amazon Kinesis Streams for Fluent Bit 1.10.0
- Amazon Kinesis Firehose for Fluent Bit 1.7.0
Compared to 2.26.0, this release adds:
- Feature - Add support for record accessor on the `cloudwatch_logs` plugin fluentbit:3246 (see the sketch after this list)
- Enhancement - Update S3 PutObject size to 1GB s3:5688
- Bug - Clear the last recently used parser to match the next parser in the multiline filter fluentbit:5524
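As a sketch of the record accessor support on `cloudwatch_logs`, the Fluent Bit docs expose it through the `log_group_template` and `log_stream_template` options; the group, stream, and record keys below are illustrative placeholders, not values from this release:

```
[OUTPUT]
    Name                 cloudwatch_logs
    Match                *
    region               us-east-1
    # Static fallbacks, used when a template key is missing from the record
    log_group_name       fallback-group
    log_stream_name      fallback-stream
    # Record accessor templates, resolved per record
    log_group_template   application-logs-$kubernetes['namespace_name']
    log_stream_template  $kubernetes['pod_name'].$kubernetes['container_name']
    auto_create_group    On
```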
We’ve run the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads. Learn more about the load test.
plugin | metric | 20 MB/s | 25 MB/s | 30 MB/s
---|---|---|---|---
kinesis_firehose | Log Loss | ✅ | ✅ | ✅
kinesis_firehose | Log Duplication | ✅ | ✅ | ✅
kinesis_streams | Log Loss | ✅ | ✅ | ✅
kinesis_streams | Log Duplication | 0%(500) | ✅ | 0%(1000)
s3 | Log Loss | ✅ | ✅ | ✅
s3 | Log Duplication | ✅ | ✅ | ✅
plugin | metric | 1 MB/s | 2 MB/s | 3 MB/s
---|---|---|---|---
cloudwatch_logs | Log Loss | ✅ | ✅ | ✅
cloudwatch_logs | Log Duplication | ✅ | 0%(1007) | ✅
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- The number in parentheses is the count of affected records out of the total records. For example, 0%(1064/1.8M) at 30 MB/s throughput means 1064 duplicate records out of 1.8M input records, so the log duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they can be influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so its occurrence is random.
AWS for Fluent Bit 2.26.0
2.26.0
This release includes:
- An Amazon Linux 2 Base
- Fluent Bit 1.9.4
- Amazon CloudWatch Logs for Fluent Bit 1.8.0
- Amazon Kinesis Streams for Fluent Bit 1.9.0
- Amazon Kinesis Firehose for Fluent Bit 1.6.1
Compared to 2.25.1, this release adds:
- Feature - Add `auto_create_stream` option cloudwatch:257 (see the sketch after this list)
- Feature - Enable Apache Arrow support in S3 at compile time s3:3184
- Enhancement - Add debug logs to check batch sizes fluentbit:5428
- Enhancement - Set 1 worker as default for the `cloudwatch_logs` plugin fluentbit:5417
- Bug - Allow recovery from a stream being deleted and created by a user cloudwatch:257
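A minimal sketch of the new `auto_create_stream` option on the Go `cloudwatch` plugin; the region, group, and stream names are placeholders, and the option's default should be confirmed in the cloudwatch plugin README:

```
[OUTPUT]
    Name                cloudwatch
    Match               *
    region              us-east-1
    log_group_name      my-application-logs
    log_stream_name     my-application-stream
    auto_create_group   true
    # New option added in this release (cloudwatch:257)
    auto_create_stream  true
```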
Same as 2.25.1, this release includes the following enhancement for AWS customers that has been accepted by upstream:
- Enhancement - Add `kube_token_ttl` option to the kubernetes filter to support refreshing the service account token used to talk to the API server. Prior to this change, Fluent Bit would only read the token on startup. fluentbit:5332
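A minimal sketch of the new option in a typical kubernetes filter block; the TTL value is an arbitrary example and the other settings are the usual kubernetes filter options:

```
[FILTER]
    Name             kubernetes
    Match            kube.*
    Kube_URL         https://kubernetes.default.svc:443
    Kube_CA_File     /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File  /var/run/secrets/kubernetes.io/serviceaccount/token
    # Re-read the service account token every 10 minutes instead of only at startup
    Kube_Token_TTL   600s
```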
We’ve run the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads. Learn more about the load test.
plugin | metric | 20 MB/s | 25 MB/s | 30 MB/s
---|---|---|---|---
kinesis_firehose | Log Loss | ✅ | ✅ | 0%(839)
kinesis_firehose | Log Duplication | ✅ | ✅ | ✅
kinesis_streams | Log Loss | ✅ | ✅ | ✅
kinesis_streams | Log Duplication | ✅ | ✅ | ✅
s3 | Log Loss | ✅ | ✅ | ✅
s3 | Log Duplication | ✅ | ✅ | ✅
plugin | metric | 1 MB/s | 2 MB/s | 3 MB/s
---|---|---|---|---
cloudwatch_logs | Log Loss | ✅ | ✅ | ✅
cloudwatch_logs | Log Duplication | ✅ | ✅ | ✅
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- The number in parentheses is the count of affected records out of the total records. For example, 0%(1064/1.8M) at 30 MB/s throughput means 1064 duplicate records out of 1.8M input records, so the log duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they can be influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so its occurrence is random.
AWS for Fluent Bit 2.25.1
2.25.1
This release includes:
- An Amazon Linux 2 Base
- Fluent Bit 1.9.3
- Amazon CloudWatch Logs for Fluent Bit 1.7.0
- Amazon Kinesis Streams for Fluent Bit 1.9.0
- Amazon Kinesis Firehose for Fluent Bit 1.6.1
Compared to 2.25.0, this release adds:
- Bug - Fix new `kube_token_ttl` option in the kubernetes filter to correctly parse TTL as a time value aws-for-fluent-bit:353
We’ve run the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads.
plugin | metric | 20 MB/s | 25 MB/s | 30 MB/s
---|---|---|---|---
kinesis_firehose | Log Loss | ✅ | ✅ | ✅
kinesis_firehose | Log Duplication | ✅ | 0%(500) | ✅
kinesis_streams | Log Loss | ✅ | ✅ | ✅
kinesis_streams | Log Duplication | 0%(26304/12M) | 0% (48464/15M) | 0% (43360/18M)
s3 | Log Loss | ✅ | ✅ | ✅
s3 | Log Duplication | ✅ | ✅ | ✅
plugin | metric | 1 MB/s | 2 MB/s | 3 MB/s
---|---|---|---|---
cloudwatch_logs | Log Loss | ✅ | ✅ | ✅
cloudwatch_logs | Log Duplication | ✅ | ✅ | ✅
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- The number in parentheses is the count of affected records out of the total records. For example, 0%(1064/1.8M) at 30 MB/s throughput means 1064 duplicate records out of 1.8M input records, so the log duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they can be influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so its occurrence is random.
AWS for Fluent Bit 2.25.0
2.25.0
This release includes:
- An Amazon Linux 2 Base
- Fluent Bit 1.9.3
- Amazon CloudWatch Logs for Fluent Bit 1.7.0
- Amazon Kinesis Streams for Fluent Bit 1.9.0
- Amazon Kinesis Firehose for Fluent Bit 1.6.1
Compared to 2.24.0, this release adds the following feature that we are working on getting accepted upstream:
- Enhancement - Add `kube_token_ttl` option to the kubernetes filter to support refreshing the service account token used to talk to the API server. Prior to this change, Fluent Bit would only read the token on startup. fluentbit:5332
We’ve run the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads.
plugin | metric | 20 MB/s | 25 MB/s | 30 MB/s
---|---|---|---|---
kinesis_firehose | Log Loss | ✅ | ✅ | ✅
kinesis_firehose | Log Duplication | ✅ | ✅ | 0%(1275)
kinesis_streams | Log Loss | 0% (3113) | 0% (6127) | 0% (12780)
kinesis_streams | Log Duplication | ✅ | 0% (500) | 0% (1000)
s3 | Log Loss | ✅ | ✅ | ✅
s3 | Log Duplication | ✅ | ✅ | ✅
plugin | metric | 1 MB/s | 2 MB/s | 3 MB/s
---|---|---|---|---
cloudwatch_logs | Log Loss | ✅ | ✅ | 23%
cloudwatch_logs | Log Duplication | ✅ | ✅ | ✅
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- The number in parentheses is the count of affected records out of the total records. For example, 0%(1064/1.8M) at 30 MB/s throughput means 1064 duplicate records out of 1.8M input records, so the log duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they can be influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so its occurrence is random.
AWS for Fluent Bit 2.24.0
Changelog
2.24.0
This release includes:
- An Amazon Linux 2 Base
- Fluent Bit 1.9.3
- Amazon CloudWatch Logs for Fluent Bit 1.7.0
- Amazon Kinesis Streams for Fluent Bit 1.9.0
- Amazon Kinesis Firehose for Fluent Bit 1.6.1
Compared to 2.23.4, this release adds:
- Bug - Resolve IMDSv1 fallback error introduced in 2.21.0 aws-for-fluent-bit:259
- Bug - Cloudwatch Fix integer overflow on 32 bit systems when converting tv_sec to millis fluentbit:3640
- Enhancement - Only create Cloudwatch Logs log group if it does not already exist to prevent throttling fluentbit:4826
- Enhancement - Implement docker log driver partial message support for multiline buffered mode aws-for-fluent-bit:25, partial message mode example
- Enhancement - Gracefully handle Cloudwatch Logs DataAlreadyAcceptedException fluentbit:4948
- Feature - Add sigv4 authentication options to HTTP output plugin fluentbit:5165
- Feature - Add Firehose compression configuration options fluentbit:4371
- New Plugin - `opensearch` plugin in Fluent Bit core
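For the new core `opensearch` plugin, a minimal sketch of an output block with SigV4 authentication, based on the Fluent Bit opensearch output docs; the domain endpoint, index, and region are placeholders:

```
[OUTPUT]
    Name        opensearch
    Match       *
    Host        my-domain.us-east-1.es.amazonaws.com
    Port        443
    Index       my_index
    AWS_Auth    On
    AWS_Region  us-east-1
    tls         On
```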
We’ve run the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads.
plugin | metric | 20 MB/s | 25 MB/s | 30 MB/s
---|---|---|---|---
kinesis_firehose | Log Loss | ✅ | ✅ | ✅
kinesis_firehose | Log Duplication | ✅ | ✅ | ✅
kinesis_streams | Log Loss | ✅ | ✅ | ✅
kinesis_streams | Log Duplication | ✅ | ✅ | 0% (1000/1.8M)
s3 | Log Loss | ✅ | ✅ | ✅
s3 | Log Duplication | ✅ | ✅ | ✅
plugin | metric | 1 MB/s | 2 MB/s | 3 MB/s
---|---|---|---|---
cloudwatch_logs | Log Loss | ✅ | ✅ | ✅
cloudwatch_logs | Log Duplication | ✅ | ✅ | ✅
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- The number in parentheses is the count of affected records out of the total records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of 18M input records, so the log duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they can be influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so its occurrence is random.
AWS for Fluent Bit 2.23.4
Changelog
2.23.4
This release includes:
- An Amazon Linux 2 Base
- Fluent Bit 1.8.15
- Amazon CloudWatch Logs for Fluent Bit 1.7.0
- Amazon Kinesis Streams for Fluent Bit 1.9.0
- Amazon Kinesis Firehose for Fluent Bit 1.6.1
Compared to 2.23.3, this release adds:
- Go version upgrade to 1.17.9
Same as 2.23.3, this release includes the following fix for AWS customers that we are working on getting accepted upstream:
- Bug - Resolve IMDSv1 fallback error introduced in 2.21.0 aws-for-fluent-bit:259
Important Note:
- A security vulnerability was found in Amazon Linux, which we use as our base image. This new image is based on an updated version of Amazon Linux that resolves the CVE.
We’ve run the newly released image in our ECS load testing framework; the results below provide benchmarks of aws-for-fluent-bit under different input loads.
plugin | metric | 20 MB/s | 25 MB/s | 30 MB/s
---|---|---|---|---
kinesis_firehose | Log Loss | ✅ | ✅ | ✅
kinesis_firehose | Log Duplication | ✅ | 0%(841) | 0%(500)
kinesis_streams | Log Loss | 0% (3113) | 0% (6127) | 0% (12780)
kinesis_streams | Log Duplication | 0.1%(12936) | 0% (1339) | 0% (491)
s3 | Log Loss | ✅ | ✅ | ✅
s3 | Log Duplication | ✅ | ✅ | ✅
plugin | metric | 1 MB/s | 2 MB/s | 3 MB/s
---|---|---|---|---
cloudwatch_logs | Log Loss | ✅ | 0.1%(1339) | 0%(968)
cloudwatch_logs | Log Duplication | ✅ | 5%(63441) | 4%(73511)
Note:
- The green check ✅ in the table means no log loss or no log duplication.
- The number in parentheses is the count of affected records out of the total records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of 18M input records, so the log duplication percentage rounds to 0%.
- CloudWatch has its own throughput limit per log stream; based on our tests, throttling starts to appear once the input load exceeds 1 MB/s.
- Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they can be influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so its occurrence is random.