
Commit 8e727cf: release

Initial commit (0 parents). 25 files changed: +2355, -0 lines.

.gitignore

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
*.js
!jest.config.js
*.d.ts
node_modules
package-lock.json

# CDK asset staging directory
.cdk.staging
cdk.out

# Parcel build directories
.cache
.build

cdk.json
cdk.context.json

.npmignore

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
*.ts
!*.d.ts

# CDK asset staging directory
.cdk.staging
cdk.out

CHANGELOG.md

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
# Changelog

All notable changes to this project will be documented in this file.

This project follows the [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) format for changes and adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).


## [Unreleased]

## [1.0.0] - 2022-05-31

### Added

* Initial release

CODE_OF_CONDUCT.md

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
[email protected] with any additional questions or comments.

CONTRIBUTING.md

Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
# Contributing Guidelines

Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
documentation, we greatly value feedback and contributions from our community.

Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
information to effectively respond to your bug report or contribution.


## Reporting Bugs/Feature Requests

We welcome you to use the GitHub issue tracker to report bugs or suggest features.

When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:

* A reproducible test case or series of steps
* The version of our code being used
* Any modifications you've made relevant to the bug
* Anything unusual about your environment or deployment


## Contributing via Pull Requests
Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

1. You are working against the latest source on the *main* branch.
2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
3. You open an issue to discuss any significant work - we would hate for your time to be wasted.

To send us a pull request, please:

1. Fork the repository.
2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
3. Ensure local tests pass.
4. Commit to your fork using clear commit messages.
5. Send us a pull request, answering any default questions in the pull request interface.
6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).


## Finding contributions to work on
Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.


## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
[email protected] with any additional questions or comments.


## Security issue notifications
If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.


## Licensing

See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.

LICENSE

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

NOTICE.txt

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

**********************
THIRD PARTY COMPONENTS
**********************
This software includes third party software subject to the following copyrights:

AWS SDK under the Apache License Version 2.0
aws-cdk under Apache Software License
constructs under Apache Software License
cdk-nag under Apache Software License

README.es.md

Lines changed: 87 additions & 0 deletions
@@ -0,0 +1,87 @@
# AWS Backup Actions Framework

AWS Backup Actions is a framework for automating actions triggered by AWS Backup events.

This solution includes sample implementations to export AWS EBS volumes to compressed archives for long term archiving in Amazon S3, and to export Amazon DynamoDB backups and Amazon RDS snapshots for supported [engines and versions][1] and [Aurora versions][2] to S3 in the Parquet format for queryable long term archiving. You can implement other use cases, such as exporting engine native dumps from RDS snapshots, by following the EBS example.

NOTE: This application will create IAM roles and policies.

## How does it work?
Any snapshot created in the designated AWS Backup Vault will trigger a process to restore the backup, copy the data to
S3, and delete the restored resource. The solution deletes the backup only on success, so AWS Backup retention can keep preserving the data in case of failure.

The solution uses AWS Step Functions to orchestrate the processes. AWS Lambda and AWS Batch with Amazon EC2 Spot Instances perform the
restore and backup processes.

### EBS
1. Restore the snapshot to a GP2 volume in a given AZ. Wait for it to become available.
2. Start a Batch job in the same AZ.
3. The Batch job attaches the EBS volume to the container instance in a way that allows the container running as root to access and mount the block device.
4. The files are archived and compressed using tar and streamed to S3.
5. If for any reason the filesystem on the EBS volume can't be mounted, the block device is copied with dd, compressed with gzip, and streamed to S3.
6. The restored volume is deleted after success or any failure.

### DynamoDB and supported RDS engines and versions
1. Call the API to export the snapshot to S3 in compressed Parquet format.
2. Monitor the task until success or failure.

### How to implement support for other RDS engines
1. Restore the snapshot to a given AZ on a low cost instance type with GP2 volumes, or Aurora, with a random root password.
2. Store the password encrypted in SSM Parameter Store.
3. Start a Batch job in the same AZ.
4. The Batch job connects to the database, runs the engine's dump command, compresses with gzip, and streams to S3.
5. The restored instance is deleted after success or any failure.

## Costs
Apart from the storage in S3 and the VPC Interface Endpoints, this solution only costs money while it is processing a snapshot.

Assuming the original data source was 100GB, the approximate cost per export, excluding storage and VPC Interface Endpoints, is:

EBS: ~$0.65
RDS: ~$1.05
DynamoDB: ~$10.05

The seven VPC Interface Endpoints are the highest cost of this solution at about $151 per month. Internet traffic is only for API calls to EC2, ECR, and Batch. S3 and DynamoDB traffic uses the VPC Gateway Endpoints. Nothing in the solution listens for inbound traffic. You could use a VPC NAT Gateway for about $33 per month, but egress traffic is not controlled. At your own risk, this solution can work without a NAT Gateway or VPC Interface Endpoints, but the EC2 instances managed by AWS Batch will require direct Internet access and public IP addresses. The Security Group can prevent inbound access from the Internet, and no ports are opened for inbound traffic.

## Deployment instructions
```
cp cdk.json.template cdk.json
```

Edit cdk.json to specify your account, region, BackupVault, and tags. The security tag is optional; it restricts the created IAM roles
from deleting resources that weren't created by this application.

```
npm install
cd functions/sanpshotMetadata
npm install
cd ../..
npm run build
```

Set up your environment with AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and possibly AWS_SESSION_TOKEN to be able to
deploy to your account.

```
cdk synth
cdk deploy
```

### S3 Server Access Logs (Optional)
You can enable S3 Server Access Logs by specifying a bucket and prefix in `cdk.json`. The access logs bucket must be configured to [allow access from the Amazon S3 Log Delivery Group][3].

### S3 Lifecycle Rules (Optional)
For some use cases, such as long term archiving, objects should only be deleted using Lifecycle Rules. Consider restricting delete operations with MFA Delete in the bucket policy.

You can configure the Lifecycle Rules in `cdk.json` under the `lifecycleRules` key. For example:

```
"lifecycleRules": {
  "glacier": 10,
  "deepArchive": 101,
  "expiration": 2190
}
```

[1]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html
[2]: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_ExportSnapshot.html
[3]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html#grant-log-delivery-permissions-general

README.md

Lines changed: 87 additions & 0 deletions
@@ -0,0 +1,87 @@
# AWS Backup Actions Framework

AWS Backup Actions is a framework for automating actions triggered by AWS Backup events.

This solution includes example implementations to export AWS EBS snapshots to compressed archives for long term archiving in Amazon S3, and to export Amazon DynamoDB backups and Amazon RDS snapshots for supported [engines and versions][1] and [Aurora versions][2] to S3 in the Parquet format for queryable long term archiving. You can implement other use cases, such as exporting engine native dumps from RDS snapshots, by following the EBS example.

NOTE: This application will create IAM roles and policies.

## How does it work?
Any snapshot created in the designated AWS Backup Vault will trigger a process to restore the backup, copy the data to
S3, and delete the restored resource. The solution deletes the snapshot only on success, so the AWS Backup retention can be used as a failsafe.

The solution uses AWS Step Functions to orchestrate the processes. AWS Lambda and AWS Batch with EC2 Spot Instances perform the
restore and backup processes.

### EBS
1. Restore the snapshot to a GP3 volume in a given AZ and wait for it to become available (see the sketch after this list).
2. Start a Batch job in the same AZ.
3. The Batch job attaches the EBS volume to the container instance in a way that allows the container running as root to access and mount the block device.
4. The files are archived and compressed using tar and streamed to S3.
5. If for any reason the filesystem on the EBS volume can't be mounted, the block device is copied with dd, compressed with gzip, and streamed to S3.
6. The restored volume is deleted after success or any failure.
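
As a rough, hypothetical illustration of step 1, using the AWS SDK for JavaScript v3 (a sketch only, not the project's actual Lambda code; the function name and its inputs are placeholders):

```
import {
  EC2Client,
  CreateVolumeCommand,
  waitUntilVolumeAvailable,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({});

// Restore a snapshot to a gp3 volume in the given AZ and wait for it to
// become available, as in step 1. Real values would come from the Step
// Functions state machine input.
export async function restoreSnapshotToVolume(
  snapshotId: string,
  availabilityZone: string
): Promise<string> {
  const { VolumeId } = await ec2.send(
    new CreateVolumeCommand({
      SnapshotId: snapshotId,
      AvailabilityZone: availabilityZone,
      VolumeType: "gp3",
    })
  );
  // Poll until EC2 reports the volume as "available" (up to 10 minutes).
  await waitUntilVolumeAvailable(
    { client: ec2, maxWaitTime: 600 },
    { VolumeIds: [VolumeId!] }
  );
  return VolumeId!;
}
```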

### DynamoDB and supported RDS engines and versions
1. Call the API to export the snapshot to S3 in compressed Parquet format (a sketch of the RDS side follows this list).
2. Monitor the export task for success or failure.
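
For the RDS snapshot path, steps 1 and 2 map onto the StartExportTask and DescribeExportTasks APIs. A minimal sketch, assuming the AWS SDK for JavaScript v3; every identifier, bucket, role, and key below is a placeholder:

```
import {
  RDSClient,
  StartExportTaskCommand,
  DescribeExportTasksCommand,
} from "@aws-sdk/client-rds";

const rds = new RDSClient({});

// Step 1: start the snapshot export (RDS writes compressed Parquet to S3).
// Step 2: check the task status until it reaches COMPLETE or FAILED.
export async function exportSnapshotToS3(snapshotArn: string) {
  await rds.send(
    new StartExportTaskCommand({
      ExportTaskIdentifier: "backup-actions-export-example",
      SourceArn: snapshotArn,
      S3BucketName: "example-archive-bucket",
      IamRoleArn: "arn:aws:iam::123456789012:role/example-export-role",
      KmsKeyId: "alias/example-export-key", // the export must be KMS encrypted
    })
  );
  const { ExportTasks } = await rds.send(
    new DescribeExportTasksCommand({
      ExportTaskIdentifier: "backup-actions-export-example",
    })
  );
  return ExportTasks?.[0]?.Status; // STARTING, IN_PROGRESS, COMPLETE, FAILED
}
```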

### How to implement support for other RDS engines
1. Restore the snapshot to a given AZ on a low cost instance type with GP2 volumes, or Aurora, with a random root password (see the sketch after this list).
2. Store the password encrypted in SSM Parameter Store.
3. Start a Batch job in the same AZ.
4. The Batch job connects to the DB, runs the engine's dump command, compresses with gzip, and streams to S3.
5. The restored instance is terminated after success or any failure.
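
A minimal sketch of steps 1 and 2, again assuming the AWS SDK for JavaScript v3; the instance identifier, instance class, and parameter name are all hypothetical. One nuance: restoring from a snapshot keeps the original master credentials, so the random password is applied with a follow-up ModifyDBInstance call:

```
import { randomBytes } from "node:crypto";
import {
  RDSClient,
  RestoreDBInstanceFromDBSnapshotCommand,
  ModifyDBInstanceCommand,
} from "@aws-sdk/client-rds";
import { SSMClient, PutParameterCommand } from "@aws-sdk/client-ssm";

const rds = new RDSClient({});
const ssm = new SSMClient({});

export async function restoreForDump(snapshotId: string, az: string) {
  // Step 1: restore into a low-cost instance in the target AZ.
  await rds.send(
    new RestoreDBInstanceFromDBSnapshotCommand({
      DBInstanceIdentifier: "backup-actions-restore-example",
      DBSnapshotIdentifier: snapshotId,
      DBInstanceClass: "db.t3.micro",
      AvailabilityZone: az,
      PubliclyAccessible: false,
    })
  );
  // The restore keeps the snapshot's credentials, so set a fresh random
  // master password once the instance exists.
  const password = randomBytes(24).toString("base64url");
  await rds.send(
    new ModifyDBInstanceCommand({
      DBInstanceIdentifier: "backup-actions-restore-example",
      MasterUserPassword: password,
      ApplyImmediately: true,
    })
  );
  // Step 2: store the password encrypted in SSM Parameter Store.
  await ssm.send(
    new PutParameterCommand({
      Name: "/backup-actions/restore-example/password",
      Value: password,
      Type: "SecureString",
      Overwrite: true,
    })
  );
}
```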

## Costs
Apart from the storage in S3 and the VPC Interface Endpoints, this solution only costs money while it is processing a snapshot.

Assuming the original data source was 100GB, the approximate cost per export, excluding storage and VPC Interface Endpoints, is:

EBS: ~$0.65
RDS: ~$1.05
DynamoDB: ~$10.05

The seven VPC Interface Endpoints are the highest cost of this solution at about $151 per month. The traffic outside the VPC is only for API calls to EC2, ECR, and Batch. S3 and DynamoDB traffic uses VPC Gateway Endpoints. Nothing in the solution listens for inbound traffic. A VPC NAT Gateway could be used instead for about $33 per month, but egress traffic is not controlled. At your own risk, this solution can work without a NAT Gateway or VPC Interface Endpoints, but the EC2 instances managed by AWS Batch will require direct access to the Internet and public IP addresses. The Security Group can prevent inbound access from the Internet, and no ports are opened for inbound traffic.

## Deployment instructions
```
cp cdk.json.template cdk.json
```

Edit cdk.json to specify your account, region, backupVault, and tags. The security tag is optional; it restricts the created IAM roles
from deleting resources that weren't created by this application.
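
The exact key names are defined by `cdk.json.template`; a hypothetical example of the resulting `cdk.json` (the `app` command matches the standard CDK TypeScript template, while the vault and tag keys shown here are assumptions, not the template's guaranteed names):

```
{
  "app": "npx ts-node --prefer-ts-exts bin/aws-backup-actions.ts",
  "context": {
    "account": "123456789012",
    "region": "us-east-1",
    "backupVault": "ExampleBackupVault",
    "tags": {
      "SecurityTag": "backup-actions"
    }
  }
}
```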

```
npm install
cd functions/sanpshotMetadata
npm install
cd ../..
npm run build
```

Set up your environment with AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and possibly AWS_SESSION_TOKEN to be able to
deploy to your account.

```
cdk synth
cdk deploy
```

### S3 Server Access Logs (Optional)
You can enable S3 Server Access Logs by specifying a bucket and prefix in `cdk.json`. The access logs bucket must be configured to [allow access from the Amazon S3 Log Delivery Group][3].
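
In CDK terms, that configuration would likely map onto the standard `BucketProps`; a sketch under that assumption (the bucket names and construct IDs are placeholders):

```
import { Stack } from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";

declare const stack: Stack; // the deployed stack

// Deliver server access logs for the archive bucket to an existing
// log bucket under the given prefix.
const logBucket = s3.Bucket.fromBucketName(
  stack,
  "AccessLogsBucket",
  "example-access-logs-bucket"
);
new s3.Bucket(stack, "ArchiveBucket", {
  serverAccessLogsBucket: logBucket,
  serverAccessLogsPrefix: "backup-actions/",
});
```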

### S3 Lifecycle Rules (Optional)
For some use cases such as long term archiving, objects should only be deleted using Lifecycle Rules. Consider restricting deletes with MFA Delete in the bucket policy.

You can configure the Lifecycle Rules in `cdk.json` under the `lifecycleRules` key. For example:

```
"lifecycleRules": {
  "glacier": 10,
  "deepArchive": 101,
  "expiration": 2190
}
```
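
Read as day counts, those values would likely translate into CDK lifecycle rules along these lines (a sketch of the probable mapping, not the stack's verbatim code): transition to Glacier after 10 days, to Deep Archive after 101 days, and expire after 2190 days (about six years).

```
import { Duration } from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";

// Probable mapping of the cdk.json values onto s3.BucketProps.lifecycleRules.
const lifecycleRules: s3.LifecycleRule[] = [
  {
    transitions: [
      { storageClass: s3.StorageClass.GLACIER, transitionAfter: Duration.days(10) },
      { storageClass: s3.StorageClass.DEEP_ARCHIVE, transitionAfter: Duration.days(101) },
    ],
    expiration: Duration.days(2190), // about six years
  },
];
```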

[1]: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html
[2]: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_ExportSnapshot.html
[3]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html#grant-log-delivery-permissions-general

bin/aws-backup-actions.ts

Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
#!/usr/bin/env node

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0

import "source-map-support/register";
import * as cdk from "aws-cdk-lib";
import { AwsBackupActionsStack } from "../lib/aws-backup-actions-stack";
import { AwsSolutionsChecks, NagSuppressions } from "cdk-nag";

const app = new cdk.App();
// Account and region come from the context values in cdk.json.
const stack = new AwsBackupActionsStack(app, "AwsBackupActionsStack", {
  env: {
    account: app.node.tryGetContext("account"),
    region: app.node.tryGetContext("region"),
  },
});
// Run the cdk-nag AwsSolutions rule pack against every construct in the app.
cdk.Aspects.of(app).add(
  new AwsSolutionsChecks({
    verbose: true,
  })
);

// Suppress the wildcard-resource finding for the Lambda log groups only.
NagSuppressions.addStackSuppressions(
  stack,
  [
    {
      id: "AwsSolutions-IAM5",
      reason: "Allow Lambdas to write to CloudWatch log groups under the /aws/lambda/ prefix",
      appliesTo: [
        {
          regex:
            "/^Resource::arn:<AWS::Partition>:logs:(.*):log-group:/aws/lambda/\\*/g",
        },
      ],
    },
  ],
  true
);
