
Commit

Bug fix and update S3 bucket options (#125)
* update custom lambda to handle multiple buckets

* fix issue with permissions

* Update changelog

* update readme file

* update allow pattern for bucket parameter

* update template version

* align cargo.toml version with template version

* fix typo

* update changelog date

---------

Co-authored-by: Concourse <[email protected]>
guyrenny and coralogix-concourse authored Jan 7, 2025
1 parent dfe4c80 commit 9ec574b
Showing 5 changed files with 106 additions and 98 deletions.
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,11 @@
 # Changelog
 
+### v1.2.0 / 2025-1-7
+### 🧰 Bug fixes 🧰
+- Add permissions to the custom Lambda for `event-source-mapping`
+### 💡 Enhancements 💡
+- Add support for deploying one integration with multiple S3 buckets by passing a comma-separated list to the `S3BucketName` parameter
+
 ### v1.1.2 / 2025-12-31
 ### 🧰 Bug fixes 🧰
 - cds-1756 - Restricted Lambda `EventSourceMapping` permissions used by custom resource function, so it won't have a wildcard/full resource access
2 changes: 1 addition & 1 deletion Cargo.toml
@@ -1,6 +1,6 @@
 [package]
 name = "coralogix-aws-shipper"
-version = "1.1.2"
+version = "1.2.0"
 edition = "2021"
 
 [dependencies]
5 changes: 4 additions & 1 deletion README.md
@@ -124,7 +124,7 @@ If you don’t want to send data directly as it enters S3, you can also use SNS/
 
 | Parameter | Description | Default Value | Required |
 |----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------|--------------------|
-| S3BucketName | Specify the name of the AWS S3 bucket that you want to monitor. | | :heavy_check_mark: |
+| S3BucketName | Comma-separated list of names for the AWS S3 buckets that you want to monitor. | | :heavy_check_mark: |
 | S3KeyPrefix | Specify the prefix of the log path within your S3 bucket. This value is ignored if you use the SNSTopicArn/SQSTopicArn parameter. | CloudTrail/VpcFlow 'AWSLogs/' | |
 | S3KeySuffix | Filter for the suffix of the file path in your S3 bucket. This value is ignored if you use the SNSTopicArn/SQSTopicArn parameter. | CloudTrail '.json.gz', VpcFlow '.log.gz' | |
 | NewlinePattern | Enter a regular expression to detect a new log line for multiline logs, e.g., \n(?=\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3}). | | |
@@ -324,6 +324,9 @@ To enable the DLQ, you must provide the required parameters outlined below.
 | DLQRetryLimit | The number of times a failed event should be retried before being saved in S3 | 3 | :heavy_check_mark: |
 | DLQRetryDelay | The delay in seconds between retries of failed events | 900 | :heavy_check_mark: |
 
+> [!NOTE]
+> The template uses `arn:aws:s3:::*` for the S3 integration because of a CloudFormation limitation: it is not possible to loop over the bucket list and scope the permission to each bucket. After the Lambda is created, you can manually change the permissions so that only your S3 buckets are allowed.
+
 ## Troubleshooting
 
 **Parameter max value** If you tried to deploy the integration and got this error `length is greater than 4094`, then you can upload the value of the parameter to an S3 bucket as txt and pass the file URL as the parameter value ( this option is available for `KafkaTopic` and `CloudWatchLogGroupName` parameters).
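The note added above recommends narrowing the wildcard S3 permission once the stack is deployed. A rough sketch of one way to do that with boto3 is shown below; the role name, policy name, and bucket names are placeholders for whatever your stack actually created, not values taken from the template.

```python
import json

import boto3

# Placeholder values - substitute the execution role created by your stack
# and the buckets you actually monitor.
ROLE_NAME = "coralogix-aws-shipper-LambdaFunctionRole-EXAMPLE"
MONITORED_BUCKETS = ["my-logs-bucket-1", "my-logs-bucket-2"]

# Inline policy that scopes s3:GetObject to the monitored buckets
# instead of the template's arn:aws:s3:::* wildcard.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [f"arn:aws:s3:::{name}/*" for name in MONITORED_BUCKETS],
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="coralogix-shipper-s3-read-scoped",
    PolicyDocument=json.dumps(policy_document),
)
```

After attaching a scoped policy like this, the broad statement created by the template can be removed from the role.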
177 changes: 88 additions & 89 deletions custom-resource/index.py
@@ -120,78 +120,78 @@ def __init__(self, event, context, cfn):
         self.params = SimpleNamespace(**event['ResourceProperties']['Parameters'])
 
     @handle_exceptions
-    def handle_lambda_permissions(self, bucket_name, lambda_function_arn, function_name, request_type):
-        statement_id = f'allow-s3-invoke-{function_name}'
-        if request_type == 'Delete':
-            response = self.aws_lambda.remove_permission(
+    def handle_lambda_permissions(self, bucket_name_list, lambda_function_arn, function_name, request_type):
+        for bucket_name in bucket_name_list.split(","):
+            statement_id = f'allow-s3-{bucket_name}-invoke-{function_name}'
+            if request_type == 'Delete':
+                response = self.aws_lambda.remove_permission(
+                    FunctionName=lambda_function_arn,
+                    StatementId=statement_id
+                )
+                print("Permission removed from Lambda function:", response)
+                return
+            response = self.aws_lambda.add_permission(
                 FunctionName=lambda_function_arn,
-                StatementId=statement_id
+                StatementId=f'allow-s3-{bucket_name}-invoke',
+                Action='lambda:InvokeFunction',
+                Principal='s3.amazonaws.com',
+                SourceArn=f'arn:aws:s3:::{bucket_name}'
             )
-            print("Permission removed from Lambda function:", response)
-            return
-
-        response = self.aws_lambda.add_permission(
-            FunctionName=lambda_function_arn,
-            StatementId='allow-s3-invoke',
-            Action='lambda:InvokeFunction',
-            Principal='s3.amazonaws.com',
-            SourceArn=f'arn:aws:s3:::{bucket_name}'
-        )
-        print("Permission added to Lambda function:", response)
+            print("Permission added to Lambda function:", response)
 
     @handle_exceptions
     def create(self):
         print("Request Type:", self.event['RequestType'])
-        bucket = self.params.S3BucketName
-        function_name = self.event['ResourceProperties']['LambdaArn'].split(':')[-1]
-        BucketNotificationConfiguration = self.s3.get_bucket_notification_configuration(
-            Bucket=bucket
-        )
-        BucketNotificationConfiguration.pop('ResponseMetadata')
-        BucketNotificationConfiguration.setdefault('LambdaFunctionConfigurations', [])
+        for bucket in self.params.S3BucketName.split(","):
+            function_name = self.event['ResourceProperties']['LambdaArn'].split(':')[-1]
+            BucketNotificationConfiguration = self.s3.get_bucket_notification_configuration(
+                Bucket=bucket
+            )
+            BucketNotificationConfiguration.pop('ResponseMetadata')
+            BucketNotificationConfiguration.setdefault('LambdaFunctionConfigurations', [])
 
-        BucketNotificationConfiguration['LambdaFunctionConfigurations'].append({
-            'Id': self.event.get('PhysicalResourceId', self.context.aws_request_id),
-            'LambdaFunctionArn': self.event['ResourceProperties']['LambdaArn'],
-            'Filter': {
-                'Key': {
-                    'FilterRules': [
-                        {
-                            'Name': 'prefix',
-                            'Value': self.params.S3KeyPrefix
-                        },
-                        {
-                            'Name': 'suffix',
-                            'Value': self.params.S3KeySuffix
-                        },
-                    ]
-                }
-            },
-            'Events': [
-                's3:ObjectCreated:*'
-            ]
-        })
-        if len(BucketNotificationConfiguration['LambdaFunctionConfigurations']) == 0:
-            BucketNotificationConfiguration.pop('LambdaFunctionConfigurations')
-        print(f'nofication configuration: {BucketNotificationConfiguration}')
-
-        err = self.handle_lambda_permissions(
-            bucket,
-            self.event['ResourceProperties']['LambdaArn'],
-            function_name,
-            self.event['RequestType']
-        )
-
-        if err:
-            raise Exception(err)
+            BucketNotificationConfiguration['LambdaFunctionConfigurations'].append({
+                'Id': self.event.get('PhysicalResourceId', self.context.aws_request_id),
+                'LambdaFunctionArn': self.event['ResourceProperties']['LambdaArn'],
+                'Filter': {
+                    'Key': {
+                        'FilterRules': [
+                            {
+                                'Name': 'prefix',
+                                'Value': self.params.S3KeyPrefix
+                            },
+                            {
+                                'Name': 'suffix',
+                                'Value': self.params.S3KeySuffix
+                            },
+                        ]
+                    }
+                },
+                'Events': [
+                    's3:ObjectCreated:*'
+                ]
+            })
+            if len(BucketNotificationConfiguration['LambdaFunctionConfigurations']) == 0:
+                BucketNotificationConfiguration.pop('LambdaFunctionConfigurations')
+            print(f'nofication configuration: {BucketNotificationConfiguration}')
 
-        if self.event['RequestType'] != 'Delete':
-            print('creating bucket notification configuration...')
-            self.s3.put_bucket_notification_configuration(
-                Bucket=bucket,
-                NotificationConfiguration=BucketNotificationConfiguration
+            err = self.handle_lambda_permissions(
+                bucket,
+                self.event['ResourceProperties']['LambdaArn'],
+                function_name,
+                self.event['RequestType']
             )
+
+            if err:
+                raise Exception(err)
+
+            if self.event['RequestType'] != 'Delete':
+                print('creating bucket notification configuration...')
+                self.s3.put_bucket_notification_configuration(
+                    Bucket=bucket,
+                    NotificationConfiguration=BucketNotificationConfiguration
+                )
 
         # responseStatus = self.cfn.SUCCESS
         print(self.event['RequestType'], "request completed....")
 
@@ -207,36 +207,35 @@ def update(self):
     @handle_exceptions
     def delete(self):
         lambda_function_arn = self.event['ResourceProperties']['LambdaArn']
-        bucket = self.params.S3BucketName
+        for bucket in self.params.S3BucketName.split(","):
+            # Get the current notification configuration for the bucket
+            response = self.s3.get_bucket_notification_configuration(Bucket=bucket)
 
-        # Get the current notification configuration for the bucket
-        response = self.s3.get_bucket_notification_configuration(Bucket=bucket)
-
-        # Remove Lambda function triggers from the notification configuration
-        configs = response.get('LambdaFunctionConfigurations', [])
-        if len(configs) == 0:
-            print('no previous notification configurations found...')
-            return
-
-        updated_configuration = {
-            'LambdaFunctionConfigurations': [
-                config for config in configs
-                if config['LambdaFunctionArn'] != lambda_function_arn
-            ],
-            # Preserve other configurations (if any)
-            'TopicConfigurations': response.get('TopicConfigurations', []),
-            'QueueConfigurations': response.get('QueueConfigurations', [])
-        }
-
-        print("Updated Configuration:", updated_configuration)
+            # Remove Lambda function triggers from the notification configuration
+            configs = response.get('LambdaFunctionConfigurations', [])
+            if len(configs) == 0:
+                print('no previous notification configurations found...')
+                return
+
+            updated_configuration = {
+                'LambdaFunctionConfigurations': [
+                    config for config in configs
+                    if config['LambdaFunctionArn'] != lambda_function_arn
+                ],
+                # Preserve other configurations (if any)
+                'TopicConfigurations': response.get('TopicConfigurations', []),
+                'QueueConfigurations': response.get('QueueConfigurations', [])
+            }
+
+            print("Updated Configuration:", updated_configuration)
 
-        # Update the bucket's notification configuration
-        self.s3.put_bucket_notification_configuration(
-            Bucket=bucket,
-            NotificationConfiguration=updated_configuration
-        )
+            # Update the bucket's notification configuration
+            self.s3.put_bucket_notification_configuration(
+                Bucket=bucket,
+                NotificationConfiguration=updated_configuration
+            )
 
-        print(f"Removed Lambda function {lambda_function_arn} trigger from S3 bucket {bucket}.")
+            print(f"Removed Lambda function {lambda_function_arn} trigger from S3 bucket {bucket}.")
 
     def handle(self):
         responseStatus = self.cfn.SUCCESS
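Stripped of the CloudFormation plumbing, the per-bucket behaviour the updated custom resource performs on create can be summarized with the standalone boto3 sketch below. The function ARN and bucket names are illustrative only, and the real handler additionally merges any existing notification configuration and applies the prefix/suffix filters.

```python
import boto3

# Illustrative values - in the real custom resource these come from the
# CloudFormation event's ResourceProperties.
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:coralogix-shipper"
S3_BUCKET_NAME = "bucket-a,bucket-b"  # comma-separated, as the parameter now allows

s3 = boto3.client("s3")
aws_lambda = boto3.client("lambda")

for bucket in S3_BUCKET_NAME.split(","):
    # Allow this specific bucket to invoke the shipper function.
    aws_lambda.add_permission(
        FunctionName=LAMBDA_ARN,
        StatementId=f"allow-s3-{bucket}-invoke",
        Action="lambda:InvokeFunction",
        Principal="s3.amazonaws.com",
        SourceArn=f"arn:aws:s3:::{bucket}",
    )
    # Point the bucket's ObjectCreated notifications at the function.
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {
                    "LambdaFunctionArn": LAMBDA_ARN,
                    "Events": ["s3:ObjectCreated:*"],
                }
            ]
        },
    )
```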
14 changes: 7 additions & 7 deletions template.yaml
@@ -25,7 +25,7 @@ Metadata:
       - kinesis
       - cloudfront
     HomePageUrl: https://coralogix.com
-    SemanticVersion: 1.1.2
+    SemanticVersion: 1.2.0
     SourceCodeUrl: https://github.com/coralogix/coralogix-aws-shipper
 
   AWS::CloudFormation::Interface:
@@ -184,7 +184,7 @@ Parameters:
     Type: String
     Description: |
       The name of the AWS S3 bucket to watch
-    AllowedPattern: '^[0-9A-Za-z\.\-_]*(?<!\.)$'
+    AllowedPattern: '^([0-9A-Za-z\.\-_]+)(,[0-9A-Za-z\.\-_]+)*$'
     Default: "none"
     MaxLength: 63
 
@@ -650,9 +650,8 @@ Resources:
               - Effect: Allow
                 Action:
                   - 's3:GetObject'
-                Resource: !Sub "arn:aws:s3:::${S3BucketName}/*"
+                Resource: arn:aws:s3:::*
           - !Ref AWS::NoValue
-
        # SQS Policy
        - !If
          - SQSIsSet
@@ -1193,15 +1192,16 @@ Resources:
                 - lambda:UpdateEventSourceMapping
                 - lambda:GetFunctionConfiguration
                 - lambda:UpdateFunctionConfiguration
-              Resource: !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:*
+              Resource:
+                - !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:*
+                - !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:event-source-mapping:*
         - Statement:
             - Sid: S3NotificationPolicy
               Effect: Allow
              Action:
                 - s3:GetBucketNotification
                 - s3:PutBucketNotification
-              Resource:
-                Fn::Sub: arn:aws:s3:::${S3BucketName}
+              Resource: arn:aws:s3:::*
             - Sid: PutSubscriptionFilter
               Effect: Allow
               Action:
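As a quick sanity check of the updated `AllowedPattern` (not part of the commit), the snippet below exercises it with Python's `re` module against single and comma-separated bucket names:

```python
import re

# The updated AllowedPattern from the S3BucketName parameter.
ALLOWED_PATTERN = r"^([0-9A-Za-z\.\-_]+)(,[0-9A-Za-z\.\-_]+)*$"

samples = [
    "my-logs-bucket",              # single bucket: accepted
    "bucket-a,bucket-b,bucket-c",  # comma-separated list: accepted
    "bucket-a,",                   # trailing comma: rejected
    "bucket a",                    # whitespace: rejected
]

for value in samples:
    ok = re.fullmatch(ALLOWED_PATTERN, value) is not None
    print(f"{value!r}: {'accepted' if ok else 'rejected'}")
```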
