From 112dd6cf68c8c892f19d87fd32db5e132dbec04f Mon Sep 17 00:00:00 2001
From: Simon Kok
Date: Fri, 7 Jun 2024 23:40:40 +0200
Subject: [PATCH] v4.0.0

This is a security-focused release of the AWS Deployment Framework (ADF) that
aims to restrict the default access required and provided by ADF via the
least-privilege principle.

__Key security enhancements include:__

- Applying IAM best practices by restricting excessive permissions granted to
  IAM roles and policies used by ADF.
- Leveraging new IAM features to further limit access privileges granted by
  default, reducing the potential attack surface.
- Where privileged access is required for specific ADF use cases, the scope and
  duration of elevated privileges have been minimized to limit the associated
  risks.

By implementing these security improvements, ADF now follows the principle of
least privilege, reducing the risk of unauthorized access or
privilege-escalation attacks.

Please make sure to go through the list of breaking changes carefully.

As with every release, it is strongly recommended to thoroughly review and test
this version of ADF in a non-production environment first.

### Breaking changes

#### Security: Confused Deputy Problem

Addressed the [Confused Deputy
problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html)
in IAM roles created by ADF for use by AWS services. Where supported, the
roles are restricted to specific resources via an `aws:SourceArn` condition.
If you were using the ADF roles for other resources or use cases not covered
by ADF, you might need to patch the Assume Role policies accordingly.

#### Security: Cross-Account Access Role and the new Jump Role

ADF relies on the privileged Cross-Account Access Role to bootstrap accounts.
In the past, ADF used this role for every update and deployment of the
bootstrap stacks, as well as account management features.

With the release of v4.0, a jump role is introduced to lock down the usage of
the privileged cross-account access role. As part of the bootstrap stack, the
`adf-bootstrap-update-deployment-role` is created. This role grants access to
perform restricted updates that are frequently performed via the
`aws-deployment-framework-bootstrap` pipeline. By default, the jump role is
granted access to assume into this update deployment role.

A dedicated jump role manager is responsible for granting the jump role access
to the cross-account access role for AWS accounts where ADF requires access
and the `adf-bootstrap-update-deployment-role` is not available yet.
For example, newly created accounts only have the cross-account access role to
assume into. The same holds for ADF-managed accounts that are not updated to
the new v4.0 bootstrap stack yet.

During the installation/update of ADF, a new parameter enables you to grant
the jump role temporary access to the cross-account access role as a
privilege escalation path.
This parameter is called `GrantOrgWidePrivilegedBootstrapAccessUntil`.
By setting this to a date/time in the future, you grant access to the
cross-account access role until that date/time. This would be required if you
modify ADF itself or the bootstrap stack templates. Changing permissions such
as those of the `adf-cloudformation-deployment-role` is possible without
relying on the cross-account access role. Most changes deployed via the
bootstrap pipeline do not require elevated privileged access to update.

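To illustrate the kind of `aws:SourceArn` restriction described in the
Confused Deputy section above, a trust policy scoped to a specific source
resource might look roughly as follows. This is a minimal, hypothetical
sketch; the actual service principals, condition values, and role names that
ADF generates differ per role:

```yaml
ExampleEventsRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: events.amazonaws.com
          Action: sts:AssumeRole
          Condition:
            ArnLike:
              # Only events originating from matching rules in this account
              # may assume the role, preventing other accounts from using the
              # service as a confused deputy.
              aws:SourceArn: !Sub "arn:${AWS::Partition}:events:${AWS::Region}:${AWS::AccountId}:rule/adf-*"
```
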
With the above changes, the `aws-deployment-framework-bootstrap` CodeBuild
project no longer has unrestricted access to the privileged cross-account
role. Starting from version 4.0, access to assume the privileged cross-account
access role is restricted and must be obtained through the Jump Role as
described above.

#### Security: Restricted account management access

Account Management is able to access non-protected organization units.
Prior to ADF v4.0, the account management process used the privileged
cross-account access role to operate. Hence, it could also move an account, or
update the properties of an account, that is located in a protected
organization unit. With the release of v4.0, it is only able to move or manage
accounts if they are accessible via the Jump Role. The Jump Role is restricted
to non-protected organization units only.

This enhances the security of ADF, as defining an organization unit as
protected will block access to the accounts within it via the Jump Role
accordingly.

#### Security: Restricted bootstrapping of management account

The `adf-global-base-adf-build` stack in the management account was initially
deployed to facilitate bootstrap access to the management account.
It accomplished this by creating a cross-account access role with limited
permissions in the management account ahead of the bootstrapping process.

ADF created this role as it is not provisioned by AWS Organizations or
AWS Control Tower in the management account itself. However, ADF required some
level of access to deploy the necessary bootstrap stacks when needed.

It is important to note that deploying this role and bootstrapping the
management account introduces a potential risk. A pipeline created via a
deployment map could target the management account and create resources within
it, which may have unintended consequences.

To mitigate the potential risk, it is recommended to implement strict
least-privilege policies and apply permission boundaries to protect
the management account.
Additionally, thoroughly reviewing all deployment map changes is crucial to
ensure no unintended access is granted to the management account.

With the release of ADF v4.0, the `adf-global-base-adf-build` stack is removed
and its resources are moved to the main ADF CloudFormation template.
These resources will only get deployed if the new
`AllowBootstrappingOfManagementAccount` parameter is set to `Yes`. By default,
it will not allow bootstrapping of the management account.

#### Security: Restricted bootstrapping of deployment account

Considering the sensitive workloads that run in the deployment account, it is
important to limit the permissions granted to pipelines that deploy to the
deployment account itself. You should consider the deployment account a
production account.

It is recommended to apply the least-privilege principle and only allow
pipelines to deploy resources that are required in the deployment account.

Follow these steps after the changes introduced by the ADF v4.0 release are
applied in the main branch of the `aws-deployment-framework-bootstrap`
repository. Please take this moment to review the following:

* Navigate to the `adf-bootstrap/deployment` folder in that repository.
* Check if it contains a `global-iam.yml` file:

  * If it does __not__ contain a `global-iam.yml` file yet, please create one
    by copying the `example-global-iam.yml` file in that directory.
  * If it does, please compare it against the `example-global-iam.yml` file
    in that directory.

* Apply the least-privilege principle on the permissions you grant in the
  deployment account.

#### Security: Shared Modules Bucket

ADF uses the Shared Modules Bucket, as hosted in the management account in the
main deployment region, to share artifacts from the
`aws-deployment-framework-bootstrap` repository.

The breaking change enforces all objects to be owned by the bucket owner from
v4.0 onward.

#### Security: ADF Role policy restrictions

With the v4.0 release, all ADF roles and policies were reviewed, applying
the latest best practices and granting access to ADF resources only where
required. This review also includes the roles that are used by the pipelines
generated by ADF.

Please be aware of the changes made to the following roles:

##### adf-codecommit-role

The `adf-codecommit-role` no longer grants read/write access to all buckets.
It only grants access to the buckets created and managed by ADF where needed.
Please grant access accordingly if you use custom S3 buckets or need to copy
from an S3 bucket in an ADF-generated pipeline.

##### adf-codebuild-role

The `adf-codebuild-role` can only be used by CodeBuild projects in the main
deployment region. ADF did not allow running CodeBuild projects in other
regions before, but in case you manually configured the role on a project in
a different region, that project will now fail to launch.

The `adf-codebuild-role` is no longer allowed to assume every IAM role in the
target accounts that happens to trust it in its Assume Role Policy Document.

The `adf-codebuild-role` is restricted to assume only the
`adf-readonly-automation-role` roles in the target accounts.
And, in case the Terraform ADF Extension is enabled, it is allowed to
assume the `adf-terraform-role` too.

It is therefore no longer allowed to assume the
`adf-cloudformation-deployment-role`. If you were deploying with `cdk deploy`
into target accounts from an ADF pipeline, you will need to specifically grant
the `adf-codebuild-role` access to assume the
`adf-cloudformation-deployment-role`. However, we strongly recommend you
synthesize the templates instead and let AWS CloudFormation do the deployment
for you.

For Terraform support, CodeBuild was granted access to the `adf-tflocktable`
table in release v3.2.0. This access is restricted to only grant read/write
access to that table if the Terraform extension is enabled.
Please bear in mind that if you enable Terraform support for the first time
after ADF v4.0 already bootstrapped your accounts, you will need to use the
`GrantOrgWidePrivilegedBootstrapAccessUntil` parameter, as this operation
requires privileged access.

The `adf-codebuild-role` is allowed to assume into the
`adf-terraform-role` if the Terraform extension is enabled.
As written in the docs, the `adf-terraform-role` is configured
in the `global-iam.yml` file. This role is commented out by default.
When you define this role, it is important to grant it
[least-privilege access](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege)
only.

##### adf-cloudformation-role

The `adf-cloudformation-role` is no longer assumable by CloudFormation.
This role is used by CodePipeline to orchestrate various deployment actions
across accounts; for example, the CodeDeploy, S3, and CloudFormation actions.

For CloudFormation, it instructs the service to use the CloudFormation
deployment role for the actual deployment.
The CloudFormation deployment role is the role that is assumed by the
CloudFormation service.

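To make the relation between these two roles concrete, below is a simplified
sketch of a cross-account CloudFormation deploy action. The account id,
artifact name, stack name, and role ARNs are placeholders; the real actions
and role paths are generated by ADF:

```yaml
# Simplified, hypothetical CodePipeline deploy action.
- Name: deploy-to-target-account
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Provider: CloudFormation
    Version: "1"
  # Role assumed by CodePipeline to orchestrate the action in the target account:
  RoleArn: arn:aws:iam::<target-account-id>:role/adf-cloudformation-role
  Configuration:
    ActionMode: CREATE_UPDATE
    StackName: my-sample-stack
    TemplatePath: "build-output::template.yml"
    # Role passed to, and assumed by, the CloudFormation service itself:
    RoleArn: arn:aws:iam::<target-account-id>:role/adf-cloudformation-deployment-role
  InputArtifacts:
    - Name: build-output
```
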
This change should not impact you, unless you use this role in relation to
CloudFormation stacks that are not managed by ADF.

With v4.0, the `adf-cloudformation-role` is only allowed to pass the
CloudFormation deployment role to CloudFormation and no other roles to other
services.

If you want to make use of a custom CloudFormation deployment role for
specific pipelines, you need to make sure that the `adf-cloudformation-role`
is allowed to perform an `iam:PassRole` action with the given role.
It is recommended to limit this to be passed to the CloudFormation service
only. You can find an example of this in the
`adf-bootstrap/deployment/global.yml` file, where it allows the
CloudFormation role to perform `iam:PassRole` with the
`adf-cloudformation-deployment-role`. When required, please grant this access
in the `adf-bootstrap/deployment/global-iam.yml` file in the
`aws-deployment-framework-bootstrap` repository.

Additionally, the `adf-cloudformation-role` is not allowed to access S3
buckets, except the ADF buckets it needs to transfer pipeline assets to
CloudFormation.

##### adf-codepipeline-role

The `adf-codepipeline-role` is no longer assumable by CloudFormation,
CodeDeploy, and S3. The role itself was not passed to any of these services by
ADF.

If you relied on the permissions that were removed, feel free to extend the
role permissions via the `global-iam.yml` stack.

#### Security: Restricted access to ADF-managed S3 buckets only

With v4.0, access is restricted to ADF-managed S3 buckets only.
If a pipeline uses the S3 source or deploy provider, it will require access to
the buckets involved. Please add the required access to the `global-iam.yml`
bootstrap stack in the OU where it is hosted.

Grant read access to the `adf-codecommit-role` for S3 source buckets.
Grant write access to the `adf-cloudformation-role` for S3 buckets an ADF
pipeline deploys to.

#### Security: Bootstrap stack no longer named after organization unit

The global and regional bootstrap stacks are renamed to
`adf-global-base-bootstrap` and `adf-regional-base-bootstrap` respectively.

In prior releases of ADF, the name ended with the organization unit name.
As a result, an account could not move from one organization unit to
another without first removing the bootstrap stacks. Additionally, it made
writing least-privilege IAM policies and SCPs harder.

When ADF v4.0 is installed, the legacy stacks will get removed by the
`aws-deployment-framework-bootstrap` pipeline automatically. Shortly after
removal, it will deploy the new bootstrap stacks.

With v4.0, accounts can move from one organization unit to another,
without requiring the removal of the ADF bootstrap stacks.

#### Security: KMS Encryption required on Deployment Account Pipeline Buckets

The deployment account pipeline buckets only accept KMS-encrypted objects from
v4.0 onward, ensuring that all objects are encrypted with the same KMS key.

Before, some objects used KMS encryption while others did not. The bucket
policy now requires all objects to be encrypted via the KMS key. All ADF
components have been adjusted to upload with this key. If, however, you copy
files from systems that are not managed by ADF, you will need to adjust these
to encrypt the objects with the KMS key as well.

#### Security: TLS Encryption required on all ADF-managed buckets

S3 buckets created by ADF will require TLS 1.2 or later. All actions that
occur on these buckets with older TLS versions will be denied via the bucket
policies applied to these buckets.

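As a rough illustration of the kind of policy statement involved (a
hypothetical sketch, not the exact statements ADF generates), denying requests
made with an outdated TLS version can look like this:

```yaml
ExampleBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref ExampleBucket
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Sid: DenyOutdatedTls
          Effect: Deny
          Principal: "*"
          Action: "s3:*"
          Resource:
            - !GetAtt ExampleBucket.Arn
            - !Sub "${ExampleBucket.Arn}/*"
          Condition:
            NumericLessThan:
              # Deny any request negotiated with a TLS version older than 1.2.
              "s3:TlsVersion": "1.2"
```
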
#### New installer

The dependencies that are bundled by the move to the AWS Cloud Development Kit
(CDK) v2 increased the deployment size of ADF. Unfortunately, this pushed the
deployment size beyond the limit that is supported by the Serverless
Application Repository (SAR). Hence, a new installation mechanism is required.
Please read the
[installation instructions](https://github.com/awslabs/aws-deployment-framework/blob/master/docs/installation-guide.md)
carefully.

In case you are upgrading an existing installation of ADF, please consider
following the
[upgrade steps as defined in the admin guide](https://github.com/awslabs/aws-deployment-framework/blob/master/docs/admin-guide.md#updating-between-versions).

#### CDK v2

ADF v4.0 is built on the AWS Cloud Development Kit (CDK) v2, which is an
upgrade from the CDK v1 that ADF relied on before.
For most end-users, this change would not have an immediate impact.
If, however, you made customizations to ADF, it might require you to upgrade
these customizations to CDK v2 as well.

#### CodeBuild default image

As written in the
[CodeBuild provider docs](./docs/providers-guide.md#properties-3), it is a
best practice to define the exact CodeBuild container image you would like to
use for each pipeline. However, in case you relied on the default: prior ADF
releases defaulted to `UBUNTU_14_04_PYTHON_3_7_1`. This container image is no
longer supported. With ADF v4.0, the new default is `STANDARD_7_0`, also
referred to as `aws/codebuild/standard:7.0`.

#### ADF Renaming of Roles

ADF v4.0 renames most of the roles that it relies on. The reason for this
change is to make it easier to secure ADF with Service Control Policies and
IAM permission boundaries. Where applicable, the roles received a new prefix.
This makes it easier to identify which part of ADF relies on those roles and
who should have access to assume or modify them.

| Previous prefix  | Previous name                                                        | New prefix                 | New name                                                       |
|------------------|---------------------------------------------------------------------|----------------------------|---------------------------------------------------------------|
| /                | ${CrossAccountAccessRoleName}-readonly                               | /adf/organizations/        | adf-organizations-readonly                                     |
| /                | adf-update-cross-account-access-role                                 | /adf/bootstrap/            | adf-update-cross-account-access                                |
| /adf-automation/ | adf-create-repository-role                                           | /adf/pipeline-management/  | adf-pipeline-management-create-repository                      |
| /adf-automation/ | adf-pipeline-provisioner-generate-inputs                             | /adf/pipeline-management/  | adf-pipeline-management-generate-inputs                        |
| /adf-automation/ | adf-pipeline-create-update-rule                                      | /adf/pipeline-management/  | adf-pipeline-management-create-update-rule                     |
| /                | adf-event-rule-${AWS::AccountId}-${DeploymentAccountId}-EventRole-*  | /adf/cross-account-events/ | adf-cc-event-from-${AWS::AccountId}-to-${DeploymentAccountId}  |
|------------------|---------------------------------------------------------------------|----------------------------|---------------------------------------------------------------|

#### ADF Renaming of Resources

| Type         | Previous name                                  | New name                                                |
|--------------|------------------------------------------------|---------------------------------------------------------|
| StateMachine | EnableCrossAccountAccess                       | adf-bootstrap-enable-cross-account                      |
| StateMachine | ADFPipelineManagementStateMachine              | adf-pipeline-management                                 |
| StateMachine | PipelineDeletionStateMachine-*                 | adf-pipeline-management-delete-outdated                 |
| Lambda       | DeploymentMapProcessorFunction                 | adf-pipeline-management-deployment-map-processor        |
| Lambda       | ADFPipelineCreateOrUpdateRuleFunction          | adf-pipeline-management-create-update-rule              |
| Lambda       | ADFPipelineCreateRepositoryFunction            | adf-pipeline-management-create-repository               |
| Lambda       | ADFPipelineGenerateInputsFunction              | adf-pipeline-management-generate-pipeline-inputs        |
| Lambda       | ADFPipelineStoreDefinitionFunction             | adf-pipeline-management-store-pipeline-definition       |
| Lambda       | ADFPipelineIdentifyOutOfDatePipelinesFunction  | adf-pipeline-management-identify-out-of-date-pipelines  |
|--------------|------------------------------------------------|---------------------------------------------------------|

#### ADF Parameters in AWS Systems Manager Parameter Store

Some of the parameters stored by ADF in AWS Systems Manager Parameter Store
were located at the root of the Parameter Store. This made it hard to maintain
and restrict access to the limited set of ADF-specific parameters.
With ADF v4.0, the parameters used by ADF are located under the `/adf/`
prefix. For example, `/adf/deployment_account_id`.

The `global-iam.yml` bootstrap stack templates are copied from their
`example-global-iam.yml` counterparts. If your copy dates back to v3.2.0, the
default path for its `deployment_account_id` parameter should be updated to
`/adf/deployment_account_id`. Please apply this new default value to the
CloudFormation templates accordingly. If you forget to do this, the deployment
of the `adf-global-base-iam` stack might fail with an error stating that it
does not have permission to fetch the `deployment_account_id` parameter.

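For example, assuming your copied template declares the value as an SSM
parameter type (the parameter name below is illustrative and may differ in
your copy), the change amounts to updating the default path:

```yaml
Parameters:
  DeploymentAccountId:
    Type: "AWS::SSM::Parameter::Value<String>"
    Description: The AWS Account ID of the deployment account.
    # Older copies referenced the root-level parameter, for example:
    # Default: deployment_account_id
    # From v4.0 onward, reference the /adf/ prefixed parameter instead:
    Default: /adf/deployment_account_id
```
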
The error you run into if the parameter path is not updated:

> An error occurred (ValidationError) when calling the CreateChangeSet
> operation: User:
> arn:aws:sts::111111111111:assumed-role/${CrossAccountAccessRoleName}/base_update
> is not authorized to perform: ssm:GetParameters on resource:
> arn:aws:ssm:${deployment_region}:111111111111:parameter/deployment_account_id
> because no identity-based policy allows the ssm:GetParameters action
> (Service: AWSSimpleSystemsManagement; Status Code: 400;
> Error Code: AccessDeniedException; Request ID: xxx).

If an application or customization to ADF relies on one of these parameters,
it will need to be updated to include this prefix; unless the application code
relies on ADF's ParameterStore class, in which case the `/adf/` prefix is
added automatically to all parameters read or written.

With the changes in the IAM policies, ADF's access is restricted to the
`/adf/` prefix. This unfortunately implies that the old parameters are not
deleted when you update your installation of ADF. There is no cost associated
with these parameters, so you can leave them as-is. Feel free to delete the
old parameters.

The parameters that are managed by ADF that got their path changed are:

For the __management account__, in the __AWS Organizations region__
(`us-east-1`, or `us-gov-west-1`):

| Old Parameter Path           | New Parameter Path               |
|------------------------------|----------------------------------|
| `/adf_log_level`             | `/adf/adf_log_level`             |
| `/adf_version`               | `/adf/adf_version`               |
| `/bucket_name`               | `/adf/bucket_name`               |
| `/config`                    | `/adf/config`                    |
| `/cross_account_access_role` | `/adf/cross_account_access_role` |
| `/deployment_account_id`     | `/adf/deployment_account_id`     |
| `/deployment_account_region` | `/adf/deployment_account_region` |
| `/kms_arn`                   | `/adf/kms_arn`                   |
| `/notification_channel`      | `/adf/notification_channel`      |
| `/organization_id`           | `/adf/organization_id`           |
| `/protected`                 | `/adf/protected`                 |
| `/scp`                       | `/adf/scp`                       |
| `/shared_modules_bucket`     | `/adf/shared_modules_bucket`     |
| `/tagging-policy`            | `/adf/tagging_policy`            |
| `/target_regions`            | `/adf/target_regions`            |

For the __management account__, in __other ADF regions__:

| Old Parameter Path           | New Parameter Path               |
|------------------------------|----------------------------------|
| `/adf_version`               | `/adf/adf_version`               |
| `/bucket_name`               | `/adf/bucket_name`               |
| `/cross_account_access_role` | `/adf/cross_account_access_role` |
| `/deployment_account_id`     | `/adf/deployment_account_id`     |
| `/kms_arn`                   | `/adf/kms_arn`                   |

For the __deployment account__, in __the deployment region__:

| Old Parameter Path           | New Parameter Path                  |
|------------------------------|-------------------------------------|
| `/adf_log_level`             | `/adf/adf_log_level`                |
| `/adf_version`               | `/adf/adf_version`                  |
| `/auto_create_repositories`  | `/adf/scm/auto_create_repositories` |
| `/cross_account_access_role` | `/adf/cross_account_access_role`    |
| `/default_scm_branch`        | `/adf/scm/default_scm_branch`       |
| `/deployment_account_bucket` | `/adf/shared_modules_bucket`        |
| `/master_account_id`         | `/adf/management_account_id`        |
| `/notification_endpoint`     | `/adf/notification_endpoint`        |
| `/notification_type`         | `/adf/notification_type`            |
| `/organization_id`           | `/adf/organization_id`              |

For the __deployment account__, in __other ADF regions__:

| Old Parameter Path           | New Parameter Path               |
|------------------------------|----------------------------------|
| `/adf_log_level`             | `/adf/adf_log_level`             |
| `/adf_version`               | `/adf/adf_version`               |
| `/cross_account_access_role` | `/adf/cross_account_access_role` |
| `/deployment_account_bucket` | `/adf/shared_modules_bucket`     |
| `/master_account_id`         | `/adf/management_account_id`     |
| `/notification_endpoint`     | `/adf/notification_endpoint`     |
| `/notification_type`         | `/adf/notification_type`         |
| `/organization_id`           | `/adf/organization_id`           |

For a __target account__, in __each ADF region__:

| Old Parameter Path       | New Parameter Path           |
|--------------------------|------------------------------|
| `/bucket_name`           | `/adf/bucket_name`           |
| `/deployment_account_id` | `/adf/deployment_account_id` |
| `/kms_arn`               | `/adf/kms_arn`               |

#### AWS CodeStar Connections OAuth Token support dropped

ADF v4.0 discontinues the support for OAuth Tokens stored in SSM Parameter
Store. This method is not advised to be used by CodePipeline and might leave
the OAuth Token accessible to other users of the deployment account. As this
is not a security best practice, ADF v4.0 no longer supports it.

To upgrade, please read the
[Administrator Guide on Using AWS CodeConnections for Bitbucket, GitHub, or GitLab](./docs/admin-guide.md#using-aws-codeconnections-for-bitbucket-github-github-enterprise-or-gitlab).

#### AWS CodeStar Connections changed to AWS CodeConnections

The AWS CodeStar Connections service
[changed its name to AWS CodeConnections](https://docs.aws.amazon.com/dtconsole/latest/userguide/rename.html).
If you configured a CodeStar Connection before, you can continue to use that.
You do not need to update the CodeStar policy as defined in the
`aws-deployment-framework-bootstrap/adf-bootstrap/deployment/global-iam.yml`
stack.

However, please update the pipeline definitions in your deployment map files.
The changes you need to make are: rename the source provider from `codestar`
to `codeconnections`, and update the `codestar_connection_path` source
property to `codeconnections_param_path`. Both of these changes can be seen in
the following example:

```yaml
pipelines:
  - name: sample-vpc
    default_providers:
      source:
        # provider: codestar
        provider: codeconnections
        properties:
          # codestar_connection_path: /adf/my_connection_arn_param
          codeconnections_param_path: /adf/my_connection_arn_param
```

If you are upgrading from the GitHub OAuth token or otherwise require a new
source code connection, please proceed with the AWS CodeConnections
configuration as defined in the
[Admin Guide - Using AWS CodeConnections for Bitbucket, GitHub, or GitLab](./docs/admin-guide.md#using-aws-codeconnections-for-bitbucket-github-or-gitlab).

### Features

- Update CDK from v1 to v2 (#619), by @pergardebrink, resolves #503, #614, and
  #617.
- Account Management State Machine will now opt in to target regions when
  creating an account (#604) by @StewartW.
- Add support for nested organization unit targets (#538) by @StewartW,
  resolves #20.
- Enable single ADF bootstrap and pipeline repositories to serve a multi-AWS
  Organization setup, resolves #410:
  - Introduce the org-stage (#636) by @AndyEfaa.
  - Add support to allow empty targets in deployment maps (#634) by @AndyEfaa.
  - Add support to define the "default-scm-codecommit-account-id" in
    adfconfig.yml; when no value is set in either, it falls back to the
    deployment account id (#633) by @AndyEfaa.
  - Add multi AWS Organization support to adfconfig.yml (#668) by
    @alexevansigg.
  - Add multi AWS Organization support to generate_params.py (#672) by
    @AndyEfaa.
- Terraform: add support for distinct variable files per region per account in
  Terraform pipelines (#662) by @igordust, resolves #661.
- CodeBuild environment-agnostic custom image references, allowing you to
  specify the repository name or ARN of the ECR repository to use (#623) by
  @abhi1094.
- Add kms_encryption_key_arn and cache_control parameters to S3 deploy
  provider (#669) by @alFReD-NSH.
- Allow inter-OU move of accounts (#712) by @sbkok.

### Fixes

- Fix Terraform terrascan failure due to incorrect curl call (#607), by
  @lasv-az.
- Fix custom pipeline type configuration not loaded (#612), by @lydialim.
- Fix Terraform module execution error (#600), by @stemons, resolves #599 and
  #602.
- Fix resource untagging permissions (#635) by @sbkok.
- Fix GitHub Pipeline secret token usage (#645) by @sbkok.
- Fix Terraform error masking by tee (#643) by @igordust, resolves #642.
- Fix create repository bug when in rollback complete state (#648) by
  @alexevansigg.
- Fix cleanup of parameters upon pipeline retirement (#652) by @sbkok.
- Fix wave calculation for non-default CloudFormation actions and multi-region
  deployments (#624 and #651), by @alexevansigg.
- Fix ChatBot channel ref + add notification management permissions (#650) by
  @sbkok.
- Improve docs and add CodeStar Connection policy (#649) by @sbkok.
- Fix Terraform account variables not being copied correctly (#665) by
  @donnyDonowitz, resolves #664.
- Fix pipeline management state machine error handling (#683) by @sbkok.
- Fix target schema for tags (#667) by @AndyEfaa.
- Avoid overwriting truncated pipeline definitions with pipelines that share
  the same start (#653) by @AndyEfaa.
- Fix updating old global-iam stacks in the deployment account (#711) by
  @sbkok.
- Remove default org-stage reference to dev (#717) by @alexevansigg.
- Fix race condition on first usage of ADF pipelines leading to an auth error
  (#732) by @sbkok.
- Fix support for custom S3 deployment roles (#732) by @sbkok, resolves #355.
- Fix pipeline completion trigger description (#734) by @sbkok, resolves #654.

### Improvements

- Sanitize account names before using them in SFn invocations (#598) by
  @StewartW, resolves #597.
- Improve Terraform documentation sample (#605), by @lasv-az.
- Fix CodeDeploy sample to work in GovCloud (#609), by @sbkok.
- Fix documentation error on CodeBuild custom image (#622), by @abhi1094.
- Speed up the bootstrap pipeline by removing the unused SAM build (#613), by
  @AlexMackechnie.
- Upgrade CDK (v2.88), SAM (v1.93), and others to the latest compatible
  versions (#647) by @sbkok, resolves #644.
- Update pip before installing dependencies (#606) by @lasv-az.
- Fix: add a hash to pipeline-processing step function execution names to
  prevent collisions (#641) by @avolip, resolves #640.
- Modify trust relations for roles to ease redeployment of roles (#526) by
  @AndreasAugustin, resolves #472.
- Limit adf-state-machine-role to what is needed (#657) by @alFReD-NSH.
- Upload SCP policies with spaces removed (#656) by @alFReD-NSH.
- Move from ACL enforced bucket ownership to Ownership Controls + MegaLinter
  prettier fix (#666) by @sbkok.
- Upgrade CDK (v2.119), SAM (v1.107), Jinja2 (v3.1.3), and others to the
  latest compatible versions (#676) by @sbkok.
- Fix initial value type of allow-empty-targets (#678) by @sbkok.
- Fix Shared ADF Lambda Layer builds and move to ARM-64 Lambdas (#680) by
  @sbkok.
- Add /adf params prefix and other SSM Parameter improvements (#695) by
  @sbkok, resolves #594 and #659.
- Fix pipeline support for CodeBuild containers with Python < v3.10 (#705) by
  @sbkok.
- Update CDK v2.136, SAM CLI 1.114, and others (#715) by @sbkok.
- AWS CodeStar Connections name change to CodeConnections (#714) by @sbkok,
  resolves #616.
- Add retry logic for #655 and add tests for delete_default_vpc.py (#708) by
  @javydekoning, resolves #655.
- Fix allow-empty-targets to match config boolean style (#725) by @sbkok.
- Require previously optional CodeBuild image property in build/deploy from v4
  onward (#731) by @sbkok, resolves #626 and #601.
- YAML files are interpreted via `YAML.safe_load` instead of `YAML.load`
  (#732) by @sbkok.
- Hardened all urlopen calls by checking the protocol (#732) by @sbkok.
- Added a check to ensure the CloudFormation deployment account id matches the
  `/adf/deployment_account_id` parameter if that exists (#732) by @sbkok.
- Add automatic creation of the `/adf/deployment_account_id` and
  `/adf/management_account_id` parameters if they do not exist (#732) by
  @sbkok.
- Separate the delete-outdated state machine from the pipeline creation state
  machines (#732) by @sbkok.
- Review and restrict access provided by ADF managed IAM roles and permissions
  (#732) by @sbkok, resolves #608 and #390.
- Add automatic clean-up of legacy bootstrap stacks, auto recreate if required
  (#732) by @sbkok.

#### Installation improvements

With the addition of CDK v2 support, the dependencies that go with it
unfortunately increased the deployment size beyond the limit that is supported
by the Serverless Application Repository. Hence, the SAR installer is replaced
by a new installation process.

Please read the
[Installation Guide](https://github.com/awslabs/aws-deployment-framework/blob/make/latest/docs/installation-guide.md)
to learn how to install ADF. In case you are upgrading, please follow
[the admin guide on updating ADF](https://github.com/awslabs/aws-deployment-framework/blob/make/latest/docs/admin-guide.md#updating-between-versions)
instead.

- New installation process (#677) by @sbkok.
- Auto generate unique branch names on new version deployments (#682) by
  @sbkok.
- Ensure tox fails at first pytest failure (#686) by @sbkok.
- Install: Add checks to ensure installer dependencies are available (#702) by
  @sbkok.
- Install: Add version checks and pre-deploy warnings (#726) by @sbkok.
- Install: Add uncommitted changes check (#733) by @sbkok.

#### Documentation, ADF GitHub, and code-only improvements

- Fix broken Travis link and build badge (#625), by @javydekoning.
- Temporarily disabled cfn-lint after #619 (#630), by @javydekoning.
- Upgrade MegaLinter to v7 and enable cfn-lint (#632), by @javydekoning.
- Fix linter failures (#637) by @javydekoning.
- Linter fixes (#646) by @javydekoning.
- Add docs enhancement regarding ADF and AWS Control Tower (#638) by
  @AndyEfaa.
- Fix: include all tests in pytest.ini for the bootstrap CodeBuild project
  (#621) by @AndyEfaa.
- Remove CodeCommitRole from initial base stack (#663) by @alFReD-NSH.
- Fix bootstrap pipeline tests (#679) by @sbkok.
- Add AccessControl property on S3 Buckets (#681) by @sbkok.
- Version bump GitHub actions (#704) by @javydekoning, resolves #698.
- Bump express from 4.17.3 to 4.19.2 in /samples/sample-fargate-node-app
  (#697) by @dependabot.
- Update copyright statements and license info (#713) by @sbkok.
- Fix dead-link in docs (#707) by @javydekoning.
- Add BASH_SHFMT linter + linter fixes (#709) by @javydekoning.
- Fix sample expunge VPC, if-len, and process deployment maps (#716) by
  @sbkok.
- Move the CDK example app to the latest CDK version (#706) by @javydekoning,
  resolves #618.
- Fix Markdown Anchor Link Check (#722) by @sbkok.
- Improve samples (#718) by @sbkok.
- Explain special purpose of adf-bootstrap/global.yml in docs (#730) by @sbkok, resolves #615. - Rename `deployment_account_bucket` to `shared_modules_bucket` (#732) by @sbkok. - Moved CodeCommit and EventBridge templates from lambda to the bootstrap repository to ease maintenance (#732) by @sbkok. --- .cspell.json | 1 + CHANGELOG.md | 572 +++++++- docs/admin-guide.md | 69 +- docs/installation-guide.md | 31 +- docs/providers-guide.md | 66 +- docs/user-guide.md | 90 ++ linters/custom-adf-dict.txt | 1 + samples/sample-cdk-app/buildspec.yml | 2 +- samples/sample-codebuild-vpc/buildspec.yml | 2 +- .../sample-ec2-with-codedeploy/buildspec.yml | 2 +- samples/sample-ecr-repository/buildspec.yml | 2 +- samples/sample-ecs-cluster/buildspec.yml | 2 +- samples/sample-ecs-cluster/template.yml | 3 + samples/sample-expunge-vpc/buildspec.yml | 2 +- .../build/generate_parameters.sh | 2 +- samples/sample-iam/buildspec.yml | 2 +- samples/sample-iam/template.yml | 3 + .../sample-mono-repo/apps/alpha/buildspec.yml | 2 +- .../sample-mono-repo/apps/beta/buildspec.yml | 2 +- samples/sample-rdk-rules/buildspec.yml | 2 +- .../build/generate_parameters.sh | 2 +- .../buildspec.yml | 2 +- samples/sample-terraform/buildspec.yml | 2 +- samples/sample-terraform/tf_apply.yml | 2 +- samples/sample-terraform/tf_destroy.yml | 2 +- samples/sample-terraform/tf_plan.yml | 2 +- samples/sample-vpc/buildspec.yml | 2 +- src/account_bootstrapping_jump_role.yml | 310 ++++ src/lambda_codebase/account/handler.py | 2 + src/lambda_codebase/account/main.py | 20 +- .../account/tests/test_main.py | 76 +- src/lambda_codebase/account_bootstrap.py | 49 +- .../configure_account_alias.py | 10 +- .../account_processing/create_account.py | 8 +- .../account_processing/delete_default_vpc.py | 12 +- .../account_processing/get_account_regions.py | 12 +- .../account_processing/requirements.txt | 2 + .../cleanup_legacy_stacks.py | 90 ++ .../cleanup_legacy_stacks/handler.py | 47 + .../cleanup_legacy_stacks/requirements.txt | 2 + .../cross_region_bucket/handler.py | 2 + .../cross_region_bucket/main.py | 31 +- .../deployment_account_config.py | 43 - src/lambda_codebase/event.py | 6 +- src/lambda_codebase/generic_account_config.py | 11 +- .../deployment/example-global-iam.yml | 7 +- .../adf-bootstrap/deployment/global.yml | 842 +++++++++-- .../determine_default_branch/handler.py | 2 + .../enable_cross_account_access.py | 7 +- .../iam_cfn_deploy_role_policy.py | 2 +- .../lambda_codebase/initial_commit/handler.py | 2 + .../generate_pipeline_inputs.py | 11 +- .../identify_out_of_date_pipelines.py | 3 +- .../process_deployment_map.py | 5 +- .../templates/codecommit.yml | 17 - .../pipeline_management/templates/events.yml | 50 - .../deployment/lambda_codebase/slack.py | 2 + .../deployment/pipeline_management.yml | 456 +++--- .../adf-bootstrap/deployment/regional.yml | 69 +- .../adf-bootstrap/example-global-iam.yml | 16 +- .../adf-bootstrap/global.yml | 296 +++- .../bootstrap_repository/adf-build/config.py | 24 +- .../bootstrap_repository/adf-build/global.yml | 156 -- .../bootstrap_repository/adf-build/main.py | 108 +- .../cdk/cdk_constructs/adf_codepipeline.py | 112 +- .../cdk/cdk_constructs/adf_notifications.py | 5 + .../shared/cdk/execute_pipeline_stacks.py | 6 +- .../adf-build/shared/generate_params.py | 2 +- .../shared/helpers/package_transform.sh | 4 +- .../helpers/retrieve_organization_accounts.py | 3 +- .../adf-build/shared/helpers/sts.sh | 2 +- .../shared/helpers/terraform/adf_terraform.sh | 8 +- .../shared/helpers/terraform/get_accounts.py | 15 +- 
.../adf-build/shared/python/cloudformation.py | 24 +- .../adf-build/shared/python/deployment_map.py | 2 +- .../shared/python/parameter_store.py | 19 + .../adf-build/shared/python/repo.py | 11 +- .../adf-build/shared/python/rule.py | 12 +- .../adf-build/shared/python/s3.py | 15 +- .../adf-build/shared/python/stepfunctions.py | 4 +- .../adf-build/shared/python/sts.py | 99 +- .../python/tests/stubs/stub_cloudformation.py | 4 +- .../python/tests/test_cloudformation.py | 83 +- .../shared/python/tests/test_partition.py | 6 +- .../adf-build/shared/python/tests/test_sts.py | 534 +++++++ .../adf-build/shared/resolver_upload.py | 13 +- .../adf-build/shared/templates/events.yml | 12 +- .../adf-build/tests/test_config.py | 17 + .../adf-build/tests/test_main.py | 15 +- .../bootstrap_repository/tox.ini | 2 +- src/lambda_codebase/initial_commit/handler.py | 2 + .../initial_commit/initial_commit.py | 6 + src/lambda_codebase/jump_role_manager/main.py | 542 +++++++ .../jump_role_manager/pytest.ini | 5 + .../jump_role_manager/requirements.txt | 2 + .../jump_role_manager/tests/__init__.py | 4 + .../jump_role_manager/tests/test_main.py | 1199 ++++++++++++++++ src/lambda_codebase/moved_to_root.py | 60 +- src/lambda_codebase/organization/handler.py | 2 + src/lambda_codebase/organization/main.py | 1 + .../organization_unit/handler.py | 2 + src/lambda_codebase/organization_unit/main.py | 1 + src/lambda_codebase/wait_until_complete.py | 10 +- src/template.yml | 1278 ++++++++++++----- tox.ini | 2 +- 105 files changed, 6331 insertions(+), 1494 deletions(-) create mode 100644 src/account_bootstrapping_jump_role.yml create mode 100644 src/lambda_codebase/cleanup_legacy_stacks/cleanup_legacy_stacks.py create mode 100644 src/lambda_codebase/cleanup_legacy_stacks/handler.py create mode 100644 src/lambda_codebase/cleanup_legacy_stacks/requirements.txt delete mode 100644 src/lambda_codebase/deployment_account_config.py delete mode 100644 src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/templates/codecommit.yml delete mode 100644 src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/templates/events.yml delete mode 100644 src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/global.yml create mode 100644 src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_sts.py create mode 100644 src/lambda_codebase/jump_role_manager/main.py create mode 100644 src/lambda_codebase/jump_role_manager/pytest.ini create mode 100644 src/lambda_codebase/jump_role_manager/requirements.txt create mode 100644 src/lambda_codebase/jump_role_manager/tests/__init__.py create mode 100644 src/lambda_codebase/jump_role_manager/tests/test_main.py diff --git a/.cspell.json b/.cspell.json index 8a68fa8cc..9761e735c 100644 --- a/.cspell.json +++ b/.cspell.json @@ -14,6 +14,7 @@ ], "ignorePaths": [ ".pylintrc", + "CHANGELOG.md", "requirements.txt", "requirements-dev.txt", "maven-wrapper.jar", diff --git a/CHANGELOG.md b/CHANGELOG.md index d19a068ad..506ba0ad1 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,8 +5,298 @@ specification](https://semver.org/spec/v2.0.0.html). ## Unreleased +--- + +## v4.0.0 + +This is a security-focused release of the AWS Deployment Framework (ADF) that +aims to restrict the default access required and provided by ADF via the +least-privilege principle. 
+ +__Key security enhancements include:__ + +- Applying IAM best practices by restricting excessive permissions granted to + IAM roles and policies used by ADF. +- Leveraging new IAM features to further limit access privileges granted by + default, reducing the potential attack surface. +- Where privileged access is required for specific ADF use cases, the scope and + duration of elevated privileges have been minimized to limit the associated + risks. + +By implementing these security improvements, ADF now follows the principle of +least privilege, reducing the risk of unauthorized access or +privilege-escalation attacks. + +Please make sure to go through the list of changes breaking changes carefully. + +As with every release, it is strongly recommended to thoroughly review and test +this version of ADF in a non-production environment first. + ### Breaking changes +#### Security: Confused Deputy Problem + +Addressed the [Confused Deputy +problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html) +in IAM roles created by ADF to use by the AWS Services. Where supported, the +roles are restricted to specific resources via an `aws:SourceArn` condition. +If you were using the ADF roles for other resources or use cases not covered +by ADF, you might need to patch the Assume Role policies accordingly. + +#### Security: Cross-Account Access Role and the new Jump Role + +ADF relies on the privileged Cross-Account Access Role to bootstrap accounts. +In the past, ADF used this role for every update and deployment of the +bootstrap stacks, as well as account management features. + +With the release of v4.0, a jump role is introduced to lock-down the usage of +the privileged cross-account access role. Part of the bootstrap stack, the +`adf-bootstrap-update-deployment-role` is created. This role grants access to +perform restricted updates that are frequently performed via the +`aws-deployment-framework-bootstrap` pipeline. By default, the jump role is +granted access to assume into this update deployment role. + +A dedicated jump role manager is responsible to grant the jump role access to +the cross-account access role for AWS accounts where ADF requires access and +the `adf-bootstrap-update-deployment-role` is not available yet. +For example, accounts that are newly created only have the cross-account access +role to assume into. Same holds for ADF managed accounts that are not updated +to the new v4.0 bootstrap stack yet. + +During the installation/update of ADF, a new parameter enables you to grant +the jump role temporary access to the cross-account access role as an +privileged escalation path. +This parameter is called `GrantOrgWidePrivilegedBootstrapAccessUntil`. +By setting this to a date/time in the future you will grant access to the +cross-account access role until that date/time. This would be required if you +modify ADF itself or the bootstrap stack templates. Changing permissions like +the `adf-cloudformation-deployment-role` is possible without relying on the +cross-account access role. For most changes deployed via the bootstrap pipeline +it does not require elevated privileged access to update. + +With the above changes, the `aws-deployment-framework-bootstrap` CodeBuild +project no longer has unrestricted access to the privileged cross-account role. +Starting from version 4.0, access to assume the privileged cross-account access +role is restricted and must be obtained through the Jump Role as described +above. 
+ +#### Security: Restricted account management access + +Account Management is able to access non-protected organization units. +Prior to ADF v4.0, the account management process used the privileged +cross-account assess role to operate. Hence it could move an account or update +the properties of an account that is located in a protected organization unit +too. With the release of v4.0, it is only able to move or manage accounts if +they are accessible via the Jump Role. The Jump Role is restricted to +non-protected organization units only. + +This enhances the security of ADF, as defining a organization unit as protected +will block access to that via the Jump Role accordingly. + +#### Security: Restricted bootstrapping of management account + +The `adf-global-base-adf-build` stack in the management account was initially +deployed to facilitate bootstrap access to the management account. +It accomplished this by creating a cross-account access role with limited +permissions in the management account ahead of the bootstrapping process. + +ADF created this role as it is not provisioned by AWS Organizations or +AWS Control Tower in the management account itself. However, ADF required some +level of access to deploy the necessary bootstrap stacks when needed. + +It is important to note that deploying this role and bootstrapping the +management account introduces a potential risk. A pipeline created via a +deployment map could target the management account and create resources within +it, which may have unintended consequences. + +To mitigate the potential risk, it is recommended to implement strict +least-privilege policies and apply permission boundaries to protect +the management account. +Additionally, thoroughly reviewing all deployment map changes is crucial to +ensure no unintended access is granted to the management account. + +With the release of ADF v4.0, the `adf-global-base-adf-build` stack is removed +and its resources are moved to the main ADF CloudFormation template. +These resources will only get deployed if the new +`AllowBootstrappingOfManagementAccount` parameter is set to `Yes`. By default +it will not allow bootstrapping of the management account. + +#### Security: Restricted bootstrapping of deployment account + +Considering the sensitive workloads that run in the deployment account, it is +important to limit the permissions granted for pipelines to deploy to the +deployment account itself. You should consider the deployment account a +production account. + +It is recommended to apply the least-privilege principle and only allow +pipelines to deploy resources that are required in the deployment account. + +Follow these steps after the changes introduced by the ADF v4.0 release are +applied in the main branch of the `aws-deployment-framework-bootstrap` +repository. + +Please take this moment to review the following: + +- Navigate to the `adf-boostrap/deployment` folder in that repository. +- Check if it contains a `global-iam.yml` file: + + - If it does __not__ contain a `global-iam.yml` file yet, please ensure you + copy the `example-global-iam.yml` file in that directory. + - If it does, please compare it against the `example-global-iam.yml` file + in that directory. + +- Apply the least-privilege principle on the permissions you grant in the + deployment account. 
+ +#### Security: Shared Modules Bucket + +ADF uses the Shared Modules Bucket as hosted in the management account in the +main deployment region to share artifacts from the +`aws-deployment-framework-bootstrap` repository. + +The breaking change enforces all objects to be owned by the bucket owner from +v4.0 onward. + +#### Security: ADF Role policy restrictions + +With the v4.0 release, all ADF roles and policies were reviewed, applying +the latest best-practices and granting access to ADF resources only where +required. This review also includes the roles that were used by the pipelines +generated by ADF. + +Please be aware of the changes made to the following roles: + +##### adf-codecommit-role + +The `adf-codecommit-role` no longer grants read/write access to all buckets. +It only grants access to the buckets created and managed by ADF where it +needed to. Please grant access accordingly if you use custom S3 buckets or need +to copy from an S3 bucket in an ADF-generated pipeline. + +##### adf-codebuild-role + +The `adf-codebuild-role` can only be used by CodeBuild projects in the main +deployment region. ADF did not allow running CodeBuild projects in other +regions before. But in case you manually configured the role in a project +in a different region it will fail to launch. + +The `adf-codebuild-role` is no longer allowed to assume any IAM Role in the +target accounts if those roles would grant access in the Assume Role +Policy Document. + +The `adf-codebuild-role` is restricted to assume only the +`adf-readonly-automation-role` roles in the target accounts. +And, in the case that the Terraform ADF Extension is enabled, it is allowed to +assume the `adf-terraform-role` too. + +It is therefore not allowed to assume the `adf-cloudformation-deployment-role` +any longer. If you were deploying with `cdk deploy` into target accounts from an +ADF pipeline you will need to specifically grant the `adf-codebuild-role` +access to assume the `adf-cloudformation-deployment-role`. However, we strongly +recommend you synthesize the templates instead and let AWS CloudFormation do +the deployment for you. + +For Terraform support, CodeBuild was granted access to the `adf-tflocktable` +table in release v3.2.0. This access is restricted to only grant read/write +access to that table if the Terraform extension is enabled. +Please bear in mind that if you enable Terraform access the first time, you +will need to use the `GrantOrgWidePrivilegedBootstrapAccessUntil` parameter +if ADF v4.0 bootstrapped to accounts before. As this operation requires +privileged access. + +The `adf-codebuild-role` is allowed to assume into the +`adf-terraform-role` if the Terraform extension is enabled. +As written in the docs, the `adf-terraform-role` is configured +in the `global-iam.yml` file. This role is commented out by default. +When you define this role, it is important to make sure to grant it +[least-privilege access](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) +only. + +##### adf-cloudformation-role + +The `adf-cloudformation-role` is no longer assumable by CloudFormation. +This role is used by CodePipeline to orchestrate various deployment actions +across accounts. For example, CodeDeploy, S3, and obviously the CloudFormation +actions. + +For CloudFormation, it would instruct the service to use the CloudFormation +Deployment role for the actual deployment. +The CloudFormation deployment role is the role that is assumed by the +CloudFormation service. 
This change should not impact you, unless you +use this role in relation with CloudFormation that is not managed by ADF. + +With v4.0, the `adf-cloudformation-role` is only allowed to pass the +CloudFormation Deployment role to CloudFormation and no other roles to other +services. + +If you were/want to make use of a custom CloudFormation deployment role for +specific pipelines, you need to make sure that the `adf-cloudformation-role` is +allowed to perform an `iam:PassRole` action with the given role. +It is recommended to limit this to be passed to the CloudFormation service +only. You can find an example of this in the +`adf-bootstrap/deployment/global.yml` file where it allows the +CloudFormation role to perform `iam:PassRole` with the +`adf-cloudformation-deployment-role`. When required, please grant this access +in the `adf-bootstrap/deployment/global-iam.yml` file in the +`aws-deployment-framework-bootstrap` repository. + +Additionally, the `adf-cloudformation-role` is not allowed to access S3 buckets +except the ADF buckets it needs to transfer pipeline assets to CloudFormation. + +##### adf-codepipeline-role + +The `adf-codepipeline-role` is no longer assumable by CloudFormation, +CodeDeploy, and S3. The role itself was not passed to any of these services by +ADF. + +If you relied on the permissions that were removed, feel free to extend the +role permissions via the `global-iam.yml` stack. + +#### Security: Restricted access to ADF-managed S3 buckets only + +With v4.0, access is restricted to ADF-managed S3 buckets only. +If a pipeline used the S3 source or deployment provider, it will require +the required access to those buckets. Please add the required access to the +`global-iam.yml` bootstrap stack in the OU where it is hosted. + +Grant read access to the `adf-codecommit-role` for S3 source buckets. +Grant write access to the `adf-cloudformation-role` for S3 buckets an ADF +pipeline deploys to. + +#### Security: Bootstrap stack no longer named after organization unit + +The global and regional bootstrap stacks are renamed to +`adf-global-base-bootstrap` and `adf-regional-base-bootstrap` respectively. + +In prior releases of ADF, the name ended with the organization unit name. +As a result, an account could not move from one organization unit to +another without first removing the bootstrap stacks. Additionally, it made +writing IAM policies and SCPs harder in a least-privilege way. + +When ADF v4.0 is installed, the legacy stacks will get removed by the +`aws-deployment-framework-bootstrap` pipeline automatically. Shortly after +removal, it will deploy the new bootstrap stacks. + +With v4.0, accounts can move from one organization unit to another, +without requiring the removal of the ADF bootstrap stacks. + +#### Security: KMS Encryption required on Deployment Account Pipeline Buckets + +The deployment account pipeline buckets only accepts KMS Encrypted objects from +v4.0 onward. Ensuring that all objects are encrypted with the same KMS Key. + +Before, some objects used KMS encryption while others did not. The bucket +policy now requires all objects to be encrypted via the KMS key. All ADF +components have been adjusted to upload with this key. If, however, you copy +files from systems that are not managed by ADF, you will need to adjust these +to encrypt the objects with the KMS key as well. + +#### Security: TLS Encryption required on all ADF-managed buckets + +S3 Buckets created by ADF will require TLS 1.2 or later. 
All actions that occur +on these buckets with older TLS versions will be denied via the bucket policies +that these buckets received. + #### New installer The dependencies that are bundled by the move to the AWS Cloud Development Kit @@ -29,7 +319,7 @@ guide](https://github.com/awslabs/aws-deployment-framework/blob/master/docs/admi ADF v4.0 is built on the AWS Cloud Development Kit (CDK) v2. Which is an upgrade to CDK v1 that ADF relied on before. -For most end-users, this change would not have an impact. +For most end-users, this change would not have an immediate impact. If, however, you made customizations to ADF it might require you to upgrade these customizations to CDK v2 as well. @@ -51,6 +341,39 @@ to deploy after. Most likely all pipelines already define the CodeBuild image to use, as the previous default image is [not supported by AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html#deprecated-images). +#### ADF Renaming of Roles + +ADF v4.0 changes most of the roles that it relies on. The reason for this +change is to make it easier to secure ADF with Service Control Policies and +IAM permission boundaries. Where applicable, the roles received a new prefix. +This makes it easier to identify what part of ADF relies on those roles and +whom should have access to assume the role or modify it. + +| Previous prefix | Previous name | New prefix | New name | +|------------------|---------------------------------------------------------------------|----------------------------|---------------------------------------------------------------| +| / | ${CrossAccountAccessRoleName}-readonly | /adf/organizations/ | adf-organizations-readonly | +| / | adf-update-cross-account-access-role | /adf/bootstrap/ | adf-update-cross-account-access | +| /adf-automation/ | adf-create-repository-role | /adf/pipeline-management/ | adf-pipeline-management-create-repository | +| /adf-automation/ | adf-pipeline-provisioner-generate-inputs | /adf/pipeline-management/ | adf-pipeline-management-generate-inputs | +| /adf-automation/ | adf-pipeline-create-update-rule | /adf/pipeline-management/ | adf-pipeline-management-create-update-rule | +| / | adf-event-rule-${AWS::AccountId}-${DeploymentAccountId}-EventRole-* | /adf/cross-account-events/ | adf-cc-event-from-${AWS::AccountId}-to-${DeploymentAccountId} | +|------------------|---------------------------------------------------------------------|----------------------------|---------------------------------------------------------------| + +#### ADF Renaming of Resources + +| Type | Previous name | New name | +|--------------|-----------------------------------------------|--------------------------------------------------------| +| StateMachine | EnableCrossAccountAccess | adf-bootstrap-enable-cross-account | +| StateMachine | ADFPipelineManagementStateMachine | adf-pipeline-management | +| StateMachine | PipelineDeletionStateMachine-* | adf-pipeline-management-delete-outdated | +| Lambda | DeploymentMapProcessorFunction | adf-pipeline-management-deployment-map-processor | +| Lambda | ADFPipelineCreateOrUpdateRuleFunction | adf-pipeline-management-create-update-rule | +| Lambda | ADFPipelineCreateRepositoryFunction | adf-pipeline-management-create-repository | +| Lambda | ADFPipelineGenerateInputsFunction | adf-pipeline-management-generate-pipeline-inputs | +| Lambda | ADFPipelineStoreDefinitionFunction | adf-pipeline-management-store-pipeline-definition | +| Lambda | ADFPipelineIdentifyOutOfDatePipelinesFunction | 
adf-pipeline-management-identify-out-of-date-pipelines | +|--------------|-----------------------------------------------|--------------------------------------------------------| + #### ADF Parameters in AWS Systems Manager Parameter Store Some of the parameters stored by ADF in AWS Systems Manager Parameter Store @@ -60,6 +383,26 @@ and restrict access to the limited set of ADF specific parameters. With ADF v4.0, the parameters used by ADF are located under the `/adf/` prefix. For example, `/adf/deployment_account_id`. +The `global-iam.yml` bootstrap stack templates get copied from their +`example-global-iam.yml` counterparts. When this was copied in v3.2.0, the +default path for the `deployment_account_id` parameter should be updated to +`/adf/deployment_account_id`. Please apply this new default value to the +CloudFormation templates accordingly. If you forget to do this, the stack +deployment of the `adf-global-base-iam` stack might fail with a failure stating +that it does not have permission to fetch the `deployment_account_id` +parameter. + +The error you run into if the parameter path is not updated: + +> An error occurred (ValidationError) when calling the CreateChangeSet +> operation: User: +> arn:aws:sts::111111111111:assumed-role/${CrossAccountAccessRoleName}/base_update +> is not authorized to perform: ssm:GetParameters on resource: +> arn:aws:ssm:${deployment_region}:111111111111:parameter/deployment_account_id +> because no identity-based policy allows the ssm:GetParameters action +> (Service: AWSSimpleSystemsManagement; Status Code: 400; +> Error Code: AccessDeniedException; Request ID: xxx). + If an application or customization to ADF relies on one of these parameters they will need to be updated to include this prefix. Unless the application code relies on ADF's ParameterStore class, in that case it will automatically @@ -113,7 +456,7 @@ For the __deployment account__, in __the deployment region__: | `/auto_create_repositories` | `/adf/scm/auto_create_repositories` | | `/cross_account_access_role` | `/adf/cross_account_access_role` | | `/default_scm_branch` | `/adf/scm//default_scm_branch` | -| `/deployment_account_bucket` | `/adf/deployment_account_bucket` | +| `/deployment_account_bucket` | `/adf/shared_modules_bucket` | | `/master_account_id` | `/adf/management_account_id` | | `/notification_endpoint` | `/adf/notification_endpoint` | | `/notification_type` | `/adf/notification_type` | @@ -126,7 +469,7 @@ For the __deployment account__, in __other ADF regions__: | `/adf_log_level` | `/adf/adf_log_level` | | `/adf_version` | `/adf/adf_version` | | `/cross_account_access_role` | `/adf/cross_account_access_role` | -| `/deployment_account_bucket` | `/adf/deployment_account_bucket` | +| `/deployment_account_bucket` | `/adf/shared_modules_bucket` | | `/master_account_id` | `/adf/management_account_id` | | `/notification_endpoint` | `/adf/notification_endpoint` | | `/notification_type` | `/adf/notification_type` | @@ -188,6 +531,229 @@ configuration as defined in the [Admin Guide - Using AWS CodeConnections for Bitbucket, GitHub, or GitLab](./docs/admin-guide.md#using-aws-codeconnections-for-bitbucket-github-or-gitlab). +### Features + +- Update CDK from v1 to v2 (#619), by @pergardebrink, resolves #503, #614, and + #617. +- Account Management State Machine will now opt-in to target regions when + creating an account (#604) by @StewartW. +- Add support for nested organization unit targets (#538) by @StewartW, + resolves #20. 
+- Enable single ADF bootstrap and pipeline repositories to multi-AWS + Organization setup, resolves #410: + + - Introduce the org-stage (#636) by @AndyEfaa. + - Add support to allow empty targets in deployment maps (#634) by + @AndyEfaa. + - Add support to define the "default-scm-codecommit-account-id" in + adfconfig.yml, no value in either falls back to deployment account id + (#633) by @AndyEfaa. + - Add multi AWS Organization support to adfconfig.yml (#668) by + @alexevansigg. + - Add multi AWS Organization support to generate_params.py (#672) by + @AndyEfaa. + +- Terraform: add support for distinct variable files per region per account in + Terraform pipelines (#662) by @igordust, resolves #661. +- CodeBuild environment agnostic custom images references, allowing to specify + the repository name or ARN of the ECR repository to use (#623) by @abhi1094. +- Add kms_encryption_key_arn and cache_control parameters to S3 deploy + provider (#669) by @alFReD-NSH. +- Allow inter-ou move of accounts (#712) by @sbkok. + +### Fixes + +- Fix Terraform terrascan failure due to incorrect curl call (#607), by + @lasv-az. +- Fix custom pipeline type configuration not loaded (#612), by @lydialim. +- Fix Terraform module execution error (#600), by @stemons, resolves #599 and + #602. +- Fix resource untagging permissions (#635) by @sbkok. +- Fix GitHub Pipeline secret token usage (#645) by @sbkok. +- Fix Terraform error masking by tee (#643) by @igordust, resolves #642. +- Fix create repository bug when in rollback complete state (#648) by + @alexevansigg. +- Fix cleanup of parameters upon pipeline retirement (#652) by @sbkok. +- Fix wave calculation for non-default CloudFormation actions and multi-region + deployments (#624 and #651), by @alexevansigg. +- Fix ChatBot channel ref + add notification management permissions (#650) by + @sbkok. +- Improve docs and add CodeStar Connection policy (#649) by @sbkok. +- Fix Terraform account variables were not copied correctly (#665) by + @donnyDonowitz, resolves #664. +- Fix pipeline management state machine error handling (#683) by @sbkok. +- Fix target schema for tags (#667) by @AndyEfaa. +- Fix avoid overwriting truncated pipeline definitions with pipelines that + share the same start (#653) by @AndyEfaa. +- Fix updating old global-iam stacks in the deployment account (#711) by + @sbkok. +- Remove default org-stage reference to dev (#717) by @alexevansigg. +- Fix racing condition on first-usage of ADF pipelines leading to an auth + error (#732) by @sbkok. +- Fix support for custom S3 deployment roles (#732) by @sbkok, resolves #355. +- Fix pipeline completion trigger description (#734) by @sbkok, resolves #654. + +### Improvements + +- Sanitizing account names before using them in SFn Invocation (#598) by + @StewartW, resolves #597. +- Improve Terraform documentation sample (#605), by @lasv-az. +- Fix CodeDeploy sample to work in gov-cloud (#609), by @sbkok. +- Fix documentation error on CodeBuild custom image (#622), by @abhi1094. +- Speedup bootstrap pipeline by removing unused SAM Build (#613), by + @AlexMackechnie. +- Upgrade CDK (v2.88), SAM (v1.93), and others to latest compatible version + (#647) by @sbkok, resolves #644. +- Update pip before installing dependencies (#606) by @lasv-az. +- Fix: Adding hash to pipelines processing step function execution names to + prevent collisions (#641) by @avolip, resolves #640. +- Modify trust relations for roles to ease redeployment of roles (#526) by + @AndreasAugustin, resolves #472. 
+- Limit adf-state-machine-role to what is needed (#657) by @alFReD-NSH. +- Upload SCP policies with spaces removed (#656) by @alFReD-NSH. +- Move from ACL enforced bucket ownership to Ownership Controls + MegaLinter + prettier fix (#666) by @sbkok. +- Upgrade CDK (v2.119), SAM (v1.107), Jinja2 (v3.1.3), and others to latest + compatible version (#676) by @sbkok. +- Fix initial value type of allow-empty-targets (#678) by @sbkok. +- Fix Shared ADF Lambda Layer builds and add move to ARM-64 Lambdas (#680) by + @sbkok. +- Add /adf params prefix and other SSM Parameter improvements (#695) by @sbkok, + resolves #594 and #659. +- Fix pipeline support for CodeBuild containers with Python < v3.10 (#705) by + @sbkok. +- Update CDK v2.136, SAM CLI 1.114, and others (#715) by @sbkok. +- AWS CodeStar Connections name change to CodeConnections (#714) by @sbkok, + resolves #616. +- Adding retry logic for #655 and add tests for delete_default_vpc.py (#708) by + @javydekoning, resolves #655. +- Fix allow-empty-targets to match config boolean style (#725) by @sbkok. +- Require previously optional CodeBuild image property in build/deploy from v4 + onward (#731) by @sbkok, resolves #626 and #601. +- YAML files are interpreted via `YAML.safe_load` instead of `YAML.load` (#732) + by @sbkok. +- Hardened all urlopen calls by checking the protocol (#732) by @sbkok. +- Added check to ensure the CloudFormation deployment account id matches with + the `/adf/deployment_account_id` if that exists (#732) by @sbkok. +- Add automatic creation of the `/adf/deployment_account_id` and + `/adf/management_account_id` if that does not exist (#732) by @sbkok. +- Separate delete outdated state machine from pipeline creation state machines + (#732) by @sbkok. +- Review and restrict access provided by ADF managed IAM roles and permissions + (#732) by @sbkok, resolves #608 and #390. +- Add automatic clean-up of legacy bootstrap stacks, auto recreate if required + (#732) by @sbkok. + +#### Installation improvements + +With the addition of CDK v2 support. The dependencies that go with it, +unfortunately increased the deployment size beyond the limit that is supported +by the Serverless Application Repository. Hence the SAR installer is replaced +by a new installation process. +Please read the [Installation +Guide](https://github.com/awslabs/aws-deployment-framework/blob/make/latest/docs/installation-guide.md) +how to install ADF. +In case you are upgrading, please follow [the admin guide on updating +ADF](https://github.com/awslabs/aws-deployment-framework/blob/make/latest/docs/admin-guide.md#updating-between-versions) +instead. + +- New installation process (#677) by @sbkok. +- Auto generate unique branch names on new version deployments (#682) by + @sbkok. +- Ensure tox fails at first pytest failure (#686) by @sbkok. +- Install: Add checks to ensure installer dependencies are available (#702) by @sbkok. +- Install: Add version checks and pre-deploy warnings (#726) by @sbkok. +- Install: Add uncommitted changes check (#733) by @sbkok. + +#### Documentation, ADF GitHub, and code only improvements + +- Fixing broken Travis link and build badge (#625), by @javydekoning. +- Temporarily disabled cfn-lint after for #619 (#630), by @javydekoning. +- Upgrade MegaLinter to v7 and enable cfn-lint (#632), by @javydekoning. +- Fix linter failures (#637) by @javydekoning. +- Linter fixes (#646) by @javydekoning. +- Add docs enhancement regarding ADF and AWS Control Tower (#638) by @AndyEfaa. 
+- Fix include all tests in pytest.ini for bootstrap CodeBuild project (#621) by + @AndyEfaa. +- Remove CodeCommitRole from initial base stack (#663) by @alFReD-NSH. +- Fix bootstrap pipeline tests (#679) by @sbkok. +- Add AccessControl property on S3 Buckets (#681) by @sbkok. +- Version bump GitHub actions (#704) by @javydekoning, resolves #698. +- Bump express from 4.17.3 to 4.19.2 in /samples/sample-fargate-node-app (#697) + by @dependabot. +- Update copyright statements and license info (#713) by @sbkok. +- Fix dead-link in docs (#707) by @javydekoning. +- Add BASH_SHFMT linter + linter fixes (#709) by @javydekoning. +- Fix sample expunge VPC, if-len, and process deployment maps (#716) by @sbkok. +- Moving CDK example app to latest CDK version (#706) by @javydekoning, + resolves #618. +- Fix Markdown Anchor Link Check (#722) by @sbkok. +- Improve samples (#718) by @sbkok. +- Explain special purpose of adf-bootstrap/global.yml in docs (#730) by @sbkok, + resolves #615. +- Rename `deployment_account_bucket` to `shared_modules_bucket` (#732) by @sbkok. +- Moved CodeCommit and EventBridge templates from lambda to the bootstrap + repository to ease maintenance (#732) by @sbkok. + +--- + +## v3.2.1 + +It is strongly recommended to upgrade to v4.0 or later as soon as possible. +The security fixes introduced in v4.0 are not ported back to v3 due to the +requirement of breaking changes. +Continued use of v3 or earlier versions is strongly discouraged. + +The upcoming v4 release will introduce breaking changes. As always, it is +recommended to thoroughly review and test the upgrade procedure in a +non-production environment before upgrading in production. + +ADF v3.2.0 had a few issues that prevented clean installation in new +environments, making it harder to test the upgrade process. This release, +v3.2.1, resolves those installation issues and includes an updated installer +for ADF to simplify the installation process. + +We hope this shortens the time required to prepare for the v4 upgrade. + +--- + +### Fixes + +- Fix management account config alias through ADF account management (#596) by + @sbkok. +- Fix CodeBuild stage naming bug (#628) by @pozeus, resolves #627. +- Fix Jinja2 template rendering with autoescape enabled (#690) by @sujay0412. +- Fix missing deployment_account_id and initial deployment global IAM bootstrap + (#686) by @sbkok, resolves #594 and #659. +- Fix permissions to enable delete default VPC in management account (#699) by + @sbkok. +- Fix tagging of Cross Account Access role in the management account (#700) by + @sbkok. +- Fix CloudFormation cross-region changeset approval (#701) by @sbkok. +- Fix clean bootstrap of the deployment account (#703) by @sbkok, resolves #696. +- Bump Jinja2 from 3.1.3 to 3.1.4 (#720 and #721) by @dependabot. +- Fix account management lambdas in v3.2 (#729) by @sbkok. +- Fix management account missing required IAM Tag Role permission in v3.2 + (#729) by @sbkok. + +--- + +### Installation enhancements + +This release is the first release with the new installation process baked in. +Please read the [Installation Guide](https://github.com/awslabs/aws-deployment-framework/blob/make/latest/docs/installation-guide.md) +how to install ADF. In case you are upgrading, please follow [the admin guide +on updating ADF](https://github.com/awslabs/aws-deployment-framework/blob/make/latest/docs/admin-guide.md#updating-between-versions) +instead. 
+ +Changes baked into this release to support the new installation process: + +- New installation process (#677) by @sbkok. +- Ensure tox fails at first pytest failure (#686) by @sbkok. +- Install: Add checks to ensure installer dependencies are available (#702) by @sbkok. +- Install: Add version checks and pre-deploy warnings (#726) by @sbkok. +- Install: Add uncommitted changes check (#733) by @sbkok. + --- ## v3.2.0 diff --git a/docs/admin-guide.md b/docs/admin-guide.md index 795806b41..731eda722 100644 --- a/docs/admin-guide.md +++ b/docs/admin-guide.md @@ -1036,7 +1036,7 @@ In the management account in `us-east-1`: 2. There might be a pull request if the `aws-deployment-framework-bootstrap` repository that you have has to be updated to apply recent changes of ADF. This would show up with the version that you deployed recently, for example - `v3.2.0`. + `v4.0.0`. 3. If there is no pull request, nothing to worry about. In that case, no changes were required in your repository for this update. Continue to the next step. If there is a pull request, open it and review the @@ -1091,7 +1091,7 @@ This process is managed in an AWS Step Function state machine. 1. Navigate to the AWS Step Functions service in the deployment account in _your main region_. -2. Check the `ADFPipelineManagementStateMachine` state machine, all recent +2. Check the `adf-pipeline-management` state machine, all recent invocations since we performed the update should succeed. We need to confirm that the pipelines generated by ADF are fully functional @@ -1138,45 +1138,38 @@ Alternatively, you can also perform the update using the AWS CLI. If you wish to remove ADF you can delete the CloudFormation stack named `serverlessrepo-aws-deployment-framework` in the management account in -the `us-east-1` region. This will move into a `DELETE_FAILED` at some stage because -there is an S3 Bucket that is created via a custom resource _(cross region)_. -After it moves into `DELETE_FAILED`, you can right-click on the stack and hit -delete again while selecting to skip the Bucket the stack will successfully -delete, you can then manually delete the bucket and its contents. - -After the main stack has been removed you can remove the base stack in the -deployment account `adf-global-base-deployment` and any associated regional +the `us-east-1` region. This will remove most resources created by ADF +in the management account. With the exception of S3 buckets and SSM parameters. +If you bootstrapped ADF into the management account you need to manually remove +the bootstrap stacks as well. + +Feel free to delete the S3 buckets, SSM parameters that start with the `/adf` +prefix, as well as other CloudFormation stacks such as: + +- adf-global-base-bootstrap (in the main deployment region) +- adf-global-base-iam (in the main deployment region) +- adf-regional-base-bootstrap (in every other region configured for ADF) + +When these stacks are removed, you can switch into the deployment +account. We need to remove the base stack in the deployment account +`adf-global-base-deployment` and any associated regional deployment account base stacks. After you have deleted these stacks, you can manually remove any base stacks from accounts that were bootstrapped. + Alternatively prior to removing the initial `serverlessrepo-aws-deployment-framework` stack, you can set the _moves_ section of the `adfconfig.yml` file to _remove-base_ which would automatically clean up the base stack when the account is moved to the Root of the AWS Organization. 
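+
+As an illustration, the relevant fragment of `adfconfig.yml` would look
+similar to the sketch below. This assumes the default `to-root` move
+definition; please verify the exact key names against your own
+`adfconfig.yml` file:
+
+```yaml
+config:
+  moves:
+    # Remove the ADF base stacks automatically when an account is moved
+    # to the root of the AWS Organization.
+    - name: to-root
+      action: remove-base
+```
+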
One thing to keep in mind if you are planning to re-install ADF is that you -will want to clean up the parameter from SSM Parameter Store named -_deployment_account_id_ in `us-east-1` on the management account. AWS Step -Functions uses this parameter to determine if ADF has already got a deployment -account setup. If you re-install ADF with this parameter set to a value, -ADF will attempt an assume role to the account to do some work, which will fail -since that role will not be on the account at that point. - -There is also a CloudFormation stack named `adf-global-base-adf-build` which -lives on the management account in your main deployment region. This stack -creates two roles on the management account after the deployment account has -been setup. These roles allow the deployment accounts CodeBuild role to assume a -role back to the management account in order to query Organizations for AWS -Accounts. This stack must be deleted manually also. If you do not remove this -stack and then perform a fresh install of ADF, AWS CodeBuild on the deployment -account will not be able to assume a role to the management account to query -AWS Organizations. This is because this specific stack creates IAM roles with a -strict trust relationship to the CodeBuild role on the deployment account, if -that role gets deleted _(Which is will when you delete -`adf-global-base-deployment`)_ then this stack references invalid IAM roles that -no longer exist. If you forget to remove this stack and notice the trust -relationship of the IAM roles referenced in the stack are no longer valid, -you can delete the stack and re-run the main bootstrap pipeline which will -recreate it with valid roles and links to the correct roles. +will want to clean up the parameter from SSM Parameter Store. You can safely +remove all `/adf` prefixed SSM parameters. But most importantly, you need to +remove the `/adf/deployment_account_id` in `us-east-1` on the +management account. +As AWS Step Functions uses this parameter to determine if ADF has already got a +deployment account setup. If you re-install ADF with this parameter set to a +value, ADF will attempt an assume role to the account to configure it, which +will fail since that role will not be on the account at that point. ## Troubleshooting @@ -1234,15 +1227,15 @@ The main components to look at are: deployment region. 8. Navigate to the [AWS Step Functions service](https://eu-west-1.console.aws.amazon.com/states/home?region=eu-west-1#/statemachines) in the deployment account in your main region. Please note, the link points - to the `eu-west-` region. Please update that to your own deployment region. - Check the state machines named `ADFPipelineManagementStateMachine`, - `EnableCrossAccountAccess`, and `PipelineDeletionStateMachine...`. - Look at recent executions only. + to the `eu-west-1` region. Please update that to your own deployment region. + Check the state machines named `adf-pipeline-management`, + `adf-bootstrap-enable-cross-account`, and + `adf-pipeline-management-delete-outdated`. Look at recent executions only. - When you find one that has a failed execution, check the components that are marked orange/red in the diagram. - If one failed and you want to trigger it again, you can execute it with the `New Execution` button in AWS Step Functions. 
Or even better in case - of the `ADFPipelineManagementStateMachine`, trigger all executions again, + of the `adf-pipeline-management`, trigger all executions again, Release a Change in the [ADF Pipeline generation CodePipeline - aws-deployment-framework-pipelines](https://console.aws.amazon.com/codesuite/codepipeline/pipelines/aws-deployment-framework-pipelines/view?region=eu-west-1). diff --git a/docs/installation-guide.md b/docs/installation-guide.md index b70b673a9..1ae295b40 100644 --- a/docs/installation-guide.md +++ b/docs/installation-guide.md @@ -35,7 +35,9 @@ AWS Control Tower prior to installing ADF.** --------------------------------- -## 1. Enable CloudTrail +## 1. Enable Services + +### 1.1. Enable CloudTrail Ensure you have setup [AWS CloudTrail](https://aws.amazon.com/cloudtrail/) *(Not the default trail)* in your Management Account that spans **all @@ -49,6 +51,28 @@ instructions](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtr to configure the CloudTrail in the `us-east-1` region within the AWS Organizations Management AWS Account. +### 1.2. Enable AWS Organizations API Access + +ADF will setup and configure [AWS +Organizations](https://us-east-1.console.aws.amazon.com/organizations/v2/home?region=us-east-1) +automatically. + +However, ADF requires, but does not configure AWS Account Management +automatically. + +Without configuring AWS Account Management, the `adf-account-management` Step +Functions state machine will fail to configure the AWS accounts such as the +deployment account for you. The error message that it would return would state: + +> An error occurred (AccessDeniedException) when calling the ListRegions operation: +> User: arn:[...assumed-sts-role-arn...]/adf-account-management-config-region +> is not authorized to perform: account:ListRegions +> (Your organization must first enable trusted access with AWS Account Management.) + +To enable this, go to AWS Organizations service console after it is configured +and [enable AWS Account Management via this +link](https://us-east-1.console.aws.amazon.com/organizations/v2/home/services/AWS%20Account%20Management). + ## 2. Setup Your Build Environment ### 2.1. Local Instructions @@ -191,7 +215,7 @@ You can checkout a specific version by running: git checkout ${version_tag_goes_here} # For example: -git checkout v3.2.0 +git checkout v4.0.0 ``` ### 3.3. Update Makefile @@ -624,6 +648,9 @@ automatically in the background, to follow its progress: open AWS CodePipeline from within the management account in `us-east-1` and see that there is an initial pipeline execution that started. + Upon first installation, this pipeline might fail to fetch the source + code from the repository. Click the retry failed action button to try again. + When ADF is deployed for the first-time, it will make the initial commit with the skeleton structure of the `aws-deployment-framework-bootstrap` CodeCommit repository. diff --git a/docs/providers-guide.md b/docs/providers-guide.md index a558fd388..7645f6b3c 100644 --- a/docs/providers-guide.md +++ b/docs/providers-guide.md @@ -87,10 +87,12 @@ Provider type: `codecommit`. information on the use of the owner attribute can be found in the [CodePipeline documentation](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ActionTypeId.html). -- *role* - *(String)* default ADF managed role. - - The role to use to fetch the contents of the CodeCommit repository. Only - specify when you need a specific role to access it. 
By default ADF will use - its own role to access it instead. +- *role* - *(String)* default: `adf-codecommit-role`. + - The role name of the role to use to fetch the contents of the CodeCommit + repository. Only specify when you need a specific role to access it. + By default ADF will use its own role to access it instead. + - Please read the [user guide](./user-guide.md#custom-roles-for-pipelines) to + learn more about creating custom roles. - *trigger_on_changes* - *(Boolean)* default: `True`. - Whether CodePipeline should release a change and trigger the pipeline. - **When set to False**, you either need to trigger the pipeline manually, @@ -114,9 +116,15 @@ S3 can be used as the source for a pipeline too. **Please note:** you can use S3 as a source and deployment provider. The properties that are available are slightly different. -The role used to fetch the object from the S3 bucket is: +The default role used to fetch the object from the S3 bucket is: `arn:${partition}:iam::${source_account_id}:role/adf-codecommit-role`. +Please add the required S3 read permissions to the `adf-codecomit-role` via the +`adf-bootstrap/deployment/global-iam.yml` file in the +`aws-deployment-framework-bootstrap` repository. Or, allow +the `adf-codecommit-role` S3 read permissions in the bucket policy of the +source bucket. + Provider type: `s3`. #### Properties @@ -277,6 +285,8 @@ Provider type: `codebuild`. **Please note:** Since the CodeBuild environment runs in the deployment account, the role you specify will be assumed in and should be available in the deployment account too. + - Please read the [user guide](./user-guide.md#custom-roles-for-pipelines) to + learn more about creating custom roles. - *timeout* *(Number)* in minutes, default: `20`. - If you wish to define a custom timeout for the Build stage. - *privileged* *(Boolean)* default: `False`. @@ -452,12 +462,15 @@ Provider type: `codedeploy`. - The name of the CodeDeploy Application you want to use for this deployment. - *deployment_group_name* *(String)* **(required)** - The name of the Deployment Group you want to use for this deployment. -- *role* - *(String)* default - `arn:${partition}:iam::${target_account_id}:role/adf-cloudformation-role`. +- *role* - *(String)* default `adf-cloudformation-role` + - Automatically assumes into the given role in the target account, i.e. + `arn:${partition}:iam::${target_account_id}:role/adf-cloudformation-role`. - The role you would like to use on the target AWS account to execute the CodeDeploy action. The role should allow the CodeDeploy service to assume it. As is [documented in the CodeDeploy service role documentation](https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-service-role.html). + - Please read the [user guide](./user-guide.md#custom-roles-for-pipelines) to + learn more about creating custom roles. ### CloudFormation @@ -513,11 +526,23 @@ Provider type: `cloudformation`. to `infra`. - **Defaults to empty string**, the root of the source repository or input artifact. -- *role* - *(String)* default - `arn:${partition}:iam::${target_account_id}:role/adf-cloudformation-deployment-role`. +- *role* - *(String)* default `adf-cloudformation-deployment-role` + - Automatically assumes into the given role in the target account, i.e. + `arn:${partition}:iam::${target_account_id}:role/adf-cloudformation-deployment-role`. - The role you would like to use on the target AWS account to execute the - CloudFormation action. 
Ensure that the CloudFormation service should be - allowed to assume that role. + CloudFormation action. + - Ensure that the CloudFormation service should be allowed to assume that + role. + - Additionally, make sure that the `adf-cloudformation-role` is allowed to + perform an `iam:PassRole` action with the given role. Restrict this action + for the CloudFormation service only. + You can find an example of this in the `adf-bootstrap/deployment/global.yml` + file where it allows the CloudFormation Role to perform `iam:PassRole` with + the `adf-cloudformation-deployment-role`. + Please grant this access in the `adf-bootstrap/deployment/global-iam.yml` + file in the `aws-deployment-framework-bootstrap` repository. + - Please read the [user guide](./user-guide.md#custom-roles-for-pipelines) to + learn more about creating custom roles. - *action* - (`CHANGE_SET_EXECUTE|CHANGE_SET_REPLACE|CREATE_UPDATE|DELETE_ONLY|REPLACE_ON_FAILURE`) default: `CHANGE_SET_EXECUTE`. @@ -586,7 +611,7 @@ Provider type: `service_catalog`. ### S3 -S3 can use used to deploy with too. +S3 is available as a source and deployment provider. S3 cannot be used to target multiple accounts or regions in one stage. As the `bucket_name` property needs to be defined and these are globally @@ -597,9 +622,17 @@ instead. Where each will target the specific bucket in the target account. Please note: you can use S3 as a source and deployment provider. The properties that are available are slightly different. -The role used to upload the object(s) to the S3 bucket is: +When S3 is used as the deployment provider, the default role used to upload +the object(s) to the S3 bucket is the: `arn:${partition}:iam::${target_account_id}:role/adf-cloudformation-role`. +The `adf-cloudformation-role` is not granted access to read S3 buckets yet. +Please add the required S3 write permissions to the `adf-cloudformation-role` +via the `adf-bootstrap/global-iam.yml` file in the +`aws-deployment-framework-bootstrap` repository. Or, alternatively, allow +the `adf-cloudformation-role` S3 write permissions in the bucket policy of the +target bucket. + Provider type: `s3`. #### Properties @@ -611,9 +644,10 @@ Provider type: `s3`. - *extract* - *(Boolean)* default: `False`. - Whether CodePipeline should extract the contents of the object when it deploys it. -- *role* - *(String)* default: - `arn:${partition}:iam::${target_account_id}:role/adf-cloudformation-role`. - - The role you would like to use for this action. +- *role* - *(String)* default: `adf-cloudformation-role`. + - The role name of the role you would like to use for this action. + - Please read the [user guide](./user-guide.md#custom-roles-for-pipelines) to + learn more about creating custom roles. - *kms_encryption_key_arn* - *(String)* - The ARN of the AWS KMS encryption key for the host bucket. The `kms_encryption_key_arn` parameter encrypts uploaded artifacts with the diff --git a/docs/user-guide.md b/docs/user-guide.md index fe11f003f..ec19f63ef 100644 --- a/docs/user-guide.md +++ b/docs/user-guide.md @@ -242,6 +242,96 @@ AWS CloudFormation. For detailed information on providers and their supported properties, see the [providers guide](./providers-guide.md). +### Custom roles for pipelines + +Most providers allow you to define a role to use when actions need to be +performed by the pipeline. For example, you could use a specific deployment +role to create security infrastructure. 
This allows you to configure the pipeline
+with least privilege, granting access only to the actions it needs to
+perform its task, while protecting those resources from modification by
+other pipelines that do not have access to this role.
+
+There are three types of roles: source, build, and deploy.
+Please follow the guidelines below to define the role correctly.
+As always, it is important to grant these roles [least-privilege
+access](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege).
+
+Each of these roles needs to be created ahead of time, so the pipeline can
+assume it. For example, define these roles in the `global-iam.yml` file for
+the given organization units in the `aws-deployment-framework-bootstrap`
+repository. See the [admin guide for more details regarding
+this](./admin-guide.md#bootstrapping-accounts).
+
+**Please note:**
+When the sections below reference the `global.yml` file, they specifically
+mean the `adf-bootstrap/global.yml` file in the
+`aws-deployment-framework-bootstrap` repository. Do **NOT** edit the
+`global.yml` file itself. Instead, create the role in its `global-iam.yml`
+counterpart, as any updates to the `global.yml` file get overwritten when ADF
+itself is updated.
+
+#### Source roles
+
+For source provider actions, like CodeCommit and S3, you can define a specific
+role to use. Please make sure the `AssumeRolePolicyDocument` of these roles
+includes a similar definition to the default `adf-codecommit-role` as created
+by ADF.
+
+You can find the definition of this role in the `global.yml` file, see the
+[note above](#custom-roles-for-pipelines).
+These roles need to be created in the account where the source performs
+its tasks.
+For example, if you use it to fetch the source from a CodeCommit repository,
+the role needs to be created in the same account as the repository itself.
+
+Additionally, the `adf-codepipeline-role` should be granted access to perform
+an `sts:AssumeRole` of the custom role you create. This change should be
+added to the `adf-bootstrap/deployment/global-iam.yml` file.
+
+#### Build roles
+
+For CodeBuild actions, you can define a specific role to use.
+Please make sure the `AssumeRolePolicyDocument` of these roles
+includes a similar definition to the default `adf-codebuild-role` as created
+by ADF in the deployment account. The custom CodeBuild role will also need
+the same permissions as the `adf-codebuild-role` to function.
+
+You can find the definition of this role in the
+`adf-bootstrap/deployment/global.yml` file.
+This custom role should be defined inside the
+`adf-bootstrap/deployment/global-iam.yml` file.
+
+#### Deployment roles
+
+For deployment provider actions, like CloudFormation and S3, you can define a
+specific role to use.
+
+For all deployment actions except CloudFormation, take a look at ADF's
+`adf-cloudformation-role`. This role is responsible for performing the
+cross-account operations and instructing the target services to start the
+deployment.
+
+For the CloudFormation action, a separate role is used to perform the
+CloudFormation stack operations themselves: the
+`adf-cloudformation-deployment-role`. The `adf-cloudformation-role` in the
+target account passes the `adf-cloudformation-deployment-role` to the
+CloudFormation service.
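+
+For illustration, the permission that allows the `adf-cloudformation-role` to
+pass the deployment role to the CloudFormation service only is comparable to
+the following sketch; this is not the exact policy that ADF deploys:
+
+```yaml
+PassCloudFormationDeploymentRole:
+  Type: AWS::IAM::ManagedPolicy
+  Properties:
+    Roles:
+      - adf-cloudformation-role
+    PolicyDocument:
+      Version: "2012-10-17"
+      Statement:
+        # Allow passing the deployment role, but only to CloudFormation.
+        - Sid: "PassDeploymentRoleToCloudFormationOnly"
+          Effect: Allow
+          Action: "iam:PassRole"
+          Resource: !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-deployment-role"
+          Condition:
+            StringEquals:
+              "iam:PassedToService": "cloudformation.amazonaws.com"
+```
+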
If you create a custom role for CloudFormation +deployments, you need to ensure that the `adf-cloudformation-role` is granted +`iam:PassRole` permissions for that role to the CloudFormation service only. + +Please make sure the `AssumeRolePolicyDocument` of your custom role +includes a similar definition to the default created by ADF. + +You can find the definition of this role the `global.yml` file see [note +above](#custom-roles-for-pipelines). +These roles would need to be created in the account where it will deploy to. +For example, if you use it to deploy objects to an S3 bucket, +it needs to live in the same account as the S3 bucket itself or be granted +access to the bucket via the bucket policy. + +Additionally, the `adf-codepipeline-role` should be granted access to perform +an `sts:AssumeRole` of the custom role you create. This change should be +added to the `adf-bootstrap/deployment/global-iam.yml` file. + ### Targets Syntax The Deployment Map has a shorthand syntax along with a more detailed version diff --git a/linters/custom-adf-dict.txt b/linters/custom-adf-dict.txt index 3399e5425..a0453c3fa 100644 --- a/linters/custom-adf-dict.txt +++ b/linters/custom-adf-dict.txt @@ -58,6 +58,7 @@ scps sdkman skycolangelom srabidoux +SSEKMS stefanzweifel stubber tfapply diff --git a/samples/sample-cdk-app/buildspec.yml b/samples/sample-cdk-app/buildspec.yml index 77ba0beee..fe41b5a00 100644 --- a/samples/sample-cdk-app/buildspec.yml +++ b/samples/sample-cdk-app/buildspec.yml @@ -9,7 +9,7 @@ phases: python: 3.12 nodejs: 20 commands: - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q - python adf-build/generate_params.py diff --git a/samples/sample-codebuild-vpc/buildspec.yml b/samples/sample-codebuild-vpc/buildspec.yml index fd64374bd..9461fdff1 100644 --- a/samples/sample-codebuild-vpc/buildspec.yml +++ b/samples/sample-codebuild-vpc/buildspec.yml @@ -14,7 +14,7 @@ phases: # # If you want to restrict public access, you can create a local copy # of the pip required packages and use S3 private link. 
- - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q build: diff --git a/samples/sample-ec2-with-codedeploy/buildspec.yml b/samples/sample-ec2-with-codedeploy/buildspec.yml index df3cf5c8a..e1e2e7856 100644 --- a/samples/sample-ec2-with-codedeploy/buildspec.yml +++ b/samples/sample-ec2-with-codedeploy/buildspec.yml @@ -8,7 +8,7 @@ phases: runtime-versions: python: 3.12 commands: - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q - python adf-build/generate_params.py diff --git a/samples/sample-ecr-repository/buildspec.yml b/samples/sample-ecr-repository/buildspec.yml index df3cf5c8a..e1e2e7856 100644 --- a/samples/sample-ecr-repository/buildspec.yml +++ b/samples/sample-ecr-repository/buildspec.yml @@ -8,7 +8,7 @@ phases: runtime-versions: python: 3.12 commands: - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q - python adf-build/generate_params.py diff --git a/samples/sample-ecs-cluster/buildspec.yml b/samples/sample-ecs-cluster/buildspec.yml index df3cf5c8a..e1e2e7856 100644 --- a/samples/sample-ecs-cluster/buildspec.yml +++ b/samples/sample-ecs-cluster/buildspec.yml @@ -8,7 +8,7 @@ phases: runtime-versions: python: 3.12 commands: - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q - python adf-build/generate_params.py diff --git a/samples/sample-ecs-cluster/template.yml b/samples/sample-ecs-cluster/template.yml index ac2ceed18..421593c61 100644 --- a/samples/sample-ecs-cluster/template.yml +++ b/samples/sample-ecs-cluster/template.yml @@ -145,6 +145,9 @@ Resources: - ecs-tasks.amazonaws.com Action: - 'sts:AssumeRole' + Condition: + ArnLike: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:ecs:${AWS::Region}:${AWS::AccountId}:*" Path: / Policies: - PolicyName: AmazonECSTaskExecutionRolePolicy diff --git a/samples/sample-expunge-vpc/buildspec.yml b/samples/sample-expunge-vpc/buildspec.yml index f7747999b..5a981872e 100644 --- a/samples/sample-expunge-vpc/buildspec.yml +++ b/samples/sample-expunge-vpc/buildspec.yml @@ -8,7 +8,7 @@ phases: runtime-versions: python: 3.12 commands: - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q - python adf-build/generate_params.py diff --git a/samples/sample-fargate-node-app/build/generate_parameters.sh b/samples/sample-fargate-node-app/build/generate_parameters.sh index f3ee980ed..c1ef751db 100755 --- a/samples/sample-fargate-node-app/build/generate_parameters.sh +++ b/samples/sample-fargate-node-app/build/generate_parameters.sh @@ -5,6 +5,6 @@ set -e -aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet +aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors pip install -r adf-build/requirements.txt -q python adf-build/generate_params.py diff --git a/samples/sample-iam/buildspec.yml b/samples/sample-iam/buildspec.yml 
index df3cf5c8a..e1e2e7856 100644 --- a/samples/sample-iam/buildspec.yml +++ b/samples/sample-iam/buildspec.yml @@ -8,7 +8,7 @@ phases: runtime-versions: python: 3.12 commands: - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q - python adf-build/generate_params.py diff --git a/samples/sample-iam/template.yml b/samples/sample-iam/template.yml index b31e63aaf..4669edc68 100644 --- a/samples/sample-iam/template.yml +++ b/samples/sample-iam/template.yml @@ -97,6 +97,9 @@ Resources: - "codedeploy.amazonaws.com" Action: - "sts:AssumeRole" + Condition: + StringEquals: + "aws:SourceAccount": !Ref AWS::AccountId ManagedPolicyArns: - !Sub "arn:${AWS::Partition}:iam::aws:policy/service-role/AWSCodeDeployRole" RoleName: "codedeploy-service-role" diff --git a/samples/sample-mono-repo/apps/alpha/buildspec.yml b/samples/sample-mono-repo/apps/alpha/buildspec.yml index 9c261d94f..7142c2110 100644 --- a/samples/sample-mono-repo/apps/alpha/buildspec.yml +++ b/samples/sample-mono-repo/apps/alpha/buildspec.yml @@ -13,7 +13,7 @@ phases: python: 3.12 commands: - cd $INFRASTRUCTURE_ROOT_DIR - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q build: diff --git a/samples/sample-mono-repo/apps/beta/buildspec.yml b/samples/sample-mono-repo/apps/beta/buildspec.yml index db78d3ddb..626c6c619 100644 --- a/samples/sample-mono-repo/apps/beta/buildspec.yml +++ b/samples/sample-mono-repo/apps/beta/buildspec.yml @@ -13,7 +13,7 @@ phases: python: 3.12 commands: - cd $INFRASTRUCTURE_ROOT_DIR - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q build: diff --git a/samples/sample-rdk-rules/buildspec.yml b/samples/sample-rdk-rules/buildspec.yml index 92b691e34..9a4a753b6 100644 --- a/samples/sample-rdk-rules/buildspec.yml +++ b/samples/sample-rdk-rules/buildspec.yml @@ -8,7 +8,7 @@ phases: python: 3.12 nodejs: 20 commands: - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q - python adf-build/generate_params.py diff --git a/samples/sample-serverless-app/build/generate_parameters.sh b/samples/sample-serverless-app/build/generate_parameters.sh index f3ee980ed..c1ef751db 100755 --- a/samples/sample-serverless-app/build/generate_parameters.sh +++ b/samples/sample-serverless-app/build/generate_parameters.sh @@ -5,6 +5,6 @@ set -e -aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet +aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors pip install -r adf-build/requirements.txt -q python adf-build/generate_params.py diff --git a/samples/sample-service-catalog-product/buildspec.yml b/samples/sample-service-catalog-product/buildspec.yml index df3cf5c8a..e1e2e7856 100644 --- a/samples/sample-service-catalog-product/buildspec.yml +++ b/samples/sample-service-catalog-product/buildspec.yml @@ -8,7 +8,7 @@ phases: runtime-versions: python: 3.12 commands: - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp 
s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q - python adf-build/generate_params.py diff --git a/samples/sample-terraform/buildspec.yml b/samples/sample-terraform/buildspec.yml index c7be30f9b..b2d15563b 100644 --- a/samples/sample-terraform/buildspec.yml +++ b/samples/sample-terraform/buildspec.yml @@ -10,7 +10,7 @@ env: phases: install: commands: - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - export PATH=$PATH:$(pwd) - bash adf-build/helpers/terraform/install_terraform.sh - pip install --upgrade pip diff --git a/samples/sample-terraform/tf_apply.yml b/samples/sample-terraform/tf_apply.yml index 5fce1c595..1ad731f1b 100644 --- a/samples/sample-terraform/tf_apply.yml +++ b/samples/sample-terraform/tf_apply.yml @@ -5,7 +5,7 @@ version: 0.2 env: variables: - TF_VAR_TARGET_ACCOUNT_ROLE: adf-terraform-role # The IAM Role Terraform will assume to deploy resources + TF_VAR_TARGET_ACCOUNT_ROLE: adf-pipeline-terraform # The IAM Role Terraform will assume to deploy resources TF_IN_AUTOMATION: true TF_CLI_ARGS: "-no-color" TF_STAGE: "apply" diff --git a/samples/sample-terraform/tf_destroy.yml b/samples/sample-terraform/tf_destroy.yml index d2352753f..319a203fc 100644 --- a/samples/sample-terraform/tf_destroy.yml +++ b/samples/sample-terraform/tf_destroy.yml @@ -5,7 +5,7 @@ version: 0.2 env: variables: - TF_VAR_TARGET_ACCOUNT_ROLE: adf-terraform-role # The IAM Role Terraform will assume to deploy resources + TF_VAR_TARGET_ACCOUNT_ROLE: adf-pipeline-terraform # The IAM Role Terraform will assume to deploy resources TF_IN_AUTOMATION: true TF_STAGE: "destroy" TF_CLI_ARGS: "-no-color" diff --git a/samples/sample-terraform/tf_plan.yml b/samples/sample-terraform/tf_plan.yml index b84e9e98e..27c395365 100644 --- a/samples/sample-terraform/tf_plan.yml +++ b/samples/sample-terraform/tf_plan.yml @@ -5,7 +5,7 @@ version: 0.2 env: variables: - TF_VAR_TARGET_ACCOUNT_ROLE: adf-terraform-role # The IAM Role Terraform will assume to deploy resources + TF_VAR_TARGET_ACCOUNT_ROLE: adf-pipeline-terraform # The IAM Role Terraform will assume to deploy resources TF_IN_AUTOMATION: true TF_STAGE: "plan" TF_CLI_ARGS: "-no-color" diff --git a/samples/sample-vpc/buildspec.yml b/samples/sample-vpc/buildspec.yml index df3cf5c8a..e1e2e7856 100644 --- a/samples/sample-vpc/buildspec.yml +++ b/samples/sample-vpc/buildspec.yml @@ -8,7 +8,7 @@ phases: runtime-versions: python: 3.12 commands: - - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --quiet + - aws s3 cp s3://$S3_BUCKET_NAME/adf-build/ adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q - python adf-build/generate_params.py diff --git a/src/account_bootstrapping_jump_role.yml b/src/account_bootstrapping_jump_role.yml new file mode 100644 index 000000000..5a1354a1d --- /dev/null +++ b/src/account_bootstrapping_jump_role.yml @@ -0,0 +1,310 @@ +# Copyright Amazon.com Inc. or its affiliates. 
+# SPDX-License-Identifier: Apache-2.0 + +AWSTemplateFormatVersion: '2010-09-09' +Transform: 'AWS::Serverless-2016-10-31' +Description: ADF CloudFormation Stack for account bootstrapping jump role + +Parameters: + OrganizationId: + Type: String + MinLength: "1" + + ADFVersion: + Type: String + MinLength: "1" + + LambdaLayer: + Type: String + MinLength: "1" + + CrossAccountAccessRoleName: + Type: String + MinLength: "1" + + DeploymentAccountId: + Type: String + MinLength: "1" + + LogLevel: + Description: >- + At what Log Level the ADF should operate, default is INFO. + Valid options are: DEBUG, INFO, WARN, ERROR, and CRITICAL. + Type: String + Default: "INFO" + AllowedValues: + - DEBUG + - INFO + - WARN + - ERROR + - CRITICAL + + AllowBootstrappingOfManagementAccount: + Description: >- + Would ADF need to bootstrap the Management Account of your AWS + Organization too? If so, set this to "Yes". + + Only set this to "Yes" if a pipeline will deploy to the management + account. Or if you need some of the bootstrap resources in the + management account too. + + Please be careful: if you plan to set this to "Yes", make sure + that the management account is in a dedicated organization unit + that has bare minimum IAM permissions to deploy. Only grant access + to resource types that are required! + + If you set/leave this at "No", make sure the management organization is + in the root of your AWS Organization structure. Or in a dedicated + organization unit and add the organization unit id to the protected + organization unit list via the (ProtectedOUs) parameter. + + If not, leave at the default of "No". + Valid options are: Yes, No + Type: String + Default: "No" + AllowedValues: + - "Yes" + - "No" + + GrantOrgWidePrivilegedBootstrapAccessUntil: + Description: >- + When set at a date in the future, ADF will use the privileged + cross-account access role to bootstrap the accounts. This is useful + in situations where you are reworking the IAM permissions of the + ADF bootstrap stacks (global-iam.yml). In some cases, setting this + in the future might be required to upgrade ADF to newer versions of + ADF too. If an ADF upgrade requires this, it will be clearly described + in the CHANGELOG.md file and the release notes. + + Leave at the configured default to disable privileged bootstrap + access for all accounts. When the date is in the past, only the AWS + Accounts that are accessible to ADF but are not bootstrapped yet will + be allowed access via the privileged cross-account access role. 
+ + Date time format according to ISO 8601 + https://www.w3.org/TR/NOTE-datetime + Type: String + Default: "1900-12-31T23:59:59Z" + AllowedPattern: "\\d{4}-[0-1]\\d-[0-3]\\dT[0-2]\\d:[0-5]\\d:[0-5]\\d([+-][0-2]\\d:[0-5]\\d|Z)" + +Globals: + Function: + Architectures: + - arm64 + Runtime: python3.12 + Timeout: 300 + Tracing: Active + Layers: + - !Ref LambdaLayer + +Conditions: + DenyManagementJumpRoleAccess: !Equals + - !Ref AllowBootstrappingOfManagementAccount + - "Yes" + +Resources: + JumpRole: + Type: "AWS::IAM::Role" + Properties: + Path: "/adf/account-bootstrapping/jump/" + RoleName: "adf-bootstrapping-cross-account-jump-role" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Principal: + AWS: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:root" + Action: "sts:AssumeRole" + Condition: + ArnEquals: + "aws:PrincipalArn": + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/account-bootstrapping/adf-account-bootstrapping-cross-account-deploy-bootstrap" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/account-bootstrapping/adf-account-bootstrapping-update-deployment-resource-policies" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/account-bootstrapping/adf-account-bootstrapping-bootstrap-stack-waiter" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/account-bootstrapping/adf-account-bootstrapping-moved-to-root-cleanup-if-required" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/account-management/adf-account-management-config-account-alias" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/account-management/adf-account-management-delete-default-vpc" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/account-management/adf-account-management-get-account-regions" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/bootstrap-pipeline/adf-bootstrap-pipeline-codebuild" + Policies: + - PolicyName: "adf-limit-scope-of-jump-role" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Sid: "DenyNonAssumeRoleOperations" + Effect: Deny + NotAction: + - "sts:AssumeRole" + Resource: "*" + - Sid: "DenyAssumeRoleExternalToOrganization" + Effect: Deny + Action: + - "sts:AssumeRole" + Resource: "*" + Condition: + StringNotEquals: + "aws:ResourceOrgID": !Ref OrganizationId + - Sid: "DenyAssumeRoleToUnknownRoles" + Effect: Deny + Action: + - "sts:AssumeRole" + NotResource: + - !Sub "arn:${AWS::Partition}:iam::*:role/adf/bootstrap/adf-bootstrap-update-deployment-role" + - !Sub "arn:${AWS::Partition}:iam::*:role/${CrossAccountAccessRoleName}" + - Sid: "AllowAssumeRoleToLeastPrivilegeUpdateDeploymentRole" + Effect: Allow + Action: + - "sts:AssumeRole" + Resource: + - !Sub "arn:${AWS::Partition}:iam::*:role/adf/bootstrap/adf-bootstrap-update-deployment-role" + Condition: + StringEquals: + "aws:ResourceOrgID": !Ref OrganizationId + - Sid: "GrantOrgWidePrivilegedBootstrapAccessFallback" + Effect: Allow + Action: + - "sts:AssumeRole" + Resource: + - !Sub "arn:${AWS::Partition}:iam::*:role/${CrossAccountAccessRoleName}" + Condition: + DateLessThan: + "aws:CurrentTime": !Ref GrantOrgWidePrivilegedBootstrapAccessUntil + + JumpRoleProtectManagementAccountPolicy: + Type: "AWS::IAM::ManagedPolicy" + Condition: "DenyManagementJumpRoleAccess" + Properties: + Description: >- + This policy gets added to the Jump Role if ADF is not allowed to + bootstrap the management account. 
+ PolicyDocument: + Version: "2012-10-17" + Statement: + - Sid: "DenyAssumeRoleToManagementAccount" + Effect: Deny + Action: + - "sts:AssumeRole" + Resource: "*" + Condition: + StringEquals: + "aws:ResourceAccount": !Ref AWS::AccountId + Roles: + - !Ref JumpRole + + JumpRoleManagedPolicy: + Type: "AWS::IAM::ManagedPolicy" + Properties: + Description: "The managed jump role policy that gets updated dynamically by the JumpRoleManager function" + PolicyDocument: + Version: "2012-10-17" + Statement: + # An empty list of statements is not allowed, hence creating + # a dummy statement that does not have any effect + - Sid: "EmptyClause" + Effect: Deny + Action: + # sts:AssumeRoleWithWebIdentity is not allowed by the + # inline policy of the jump role anyway. + # Hence blocking this would not cause any problems. + # + # It should not deny sts:AssumeRole here, as it might be granted + # via the GrantOrgWidePrivilegedBootstrapAccessFallback statement + - "sts:AssumeRoleWithWebIdentity" + Resource: "*" + Roles: + - !Ref JumpRole + + JumpRoleManagerExecutionRole: + Type: "AWS::IAM::Role" + Properties: + Path: "/adf/account-bootstrapping/jump-manager/" + RoleName: "adf-bootstrapping-jump-manager-role" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Principal: + Service: + - lambda.amazonaws.com + Action: "sts:AssumeRole" + Policies: + - PolicyName: "adf-lambda-create-account-policy" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Action: + - "logs:CreateLogGroup" + - "logs:CreateLogStream" + - "logs:PutLogEvents" + - "xray:PutTelemetryRecords" + - "xray:PutTraceSegments" + - "cloudwatch:PutMetricData" + - "codepipeline:PutJobSuccessResult" + - "codepipeline:PutJobFailureResult" + Resource: "*" + - Effect: "Allow" + Action: "lambda:GetLayerVersion" + Resource: !Ref LambdaLayer + - Effect: Allow + Action: + - "organizations:ListAccounts" + - "organizations:ListParents" + - "organizations:ListRoots" + Resource: "*" + - Effect: Allow + Action: + - "organizations:ListAccountsForParent" + Resource: + - !Sub "arn:${AWS::Partition}:organizations::${AWS::AccountId}:root/${OrganizationId}/r-*" + - Effect: Allow + Action: + - "sts:AssumeRole" + Resource: + - !Sub "arn:${AWS::Partition}:iam::*:role/adf/bootstrap/adf-bootstrap-test-role" + Condition: + StringEquals: + "aws:ResourceOrgID": !Ref OrganizationId + - Effect: Allow + Action: ssm:GetParameter + Resource: + - !Sub "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/adf/protected" + - !Sub "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/adf/moves/to_root/action" + - Effect: Allow + Action: + - "iam:CreatePolicyVersion" + - "iam:DeletePolicyVersion" + - "iam:ListPolicyVersions" + Resource: + - !Ref JumpRoleManagedPolicy + + JumpRoleManagerFunction: + Type: 'AWS::Serverless::Function' + Properties: + Handler: main.lambda_handler + Description: ADF - Account Bootstrapping - Jump Role Manager + CodeUri: lambda_codebase/jump_role_manager + Environment: + Variables: + ADF_JUMP_MANAGED_POLICY_ARN: !Ref JumpRoleManagedPolicy + AWS_PARTITION: !Ref AWS::Partition + CROSS_ACCOUNT_ACCESS_ROLE_NAME: !Ref CrossAccountAccessRoleName + DEPLOYMENT_ACCOUNT_ID: !Ref DeploymentAccountId + MANAGEMENT_ACCOUNT_ID: !Ref AWS::AccountId + ADF_VERSION: !Ref ADFVersion + ADF_LOG_LEVEL: !Ref LogLevel + FunctionName: adf-bootstrapping-jump-role-manager + Role: !GetAtt JumpRoleManagerExecutionRole.Arn + Metadata: + BuildMethod: python3.12 + +Outputs: + RoleArn: + Value: !GetAtt 
JumpRole.Arn + + ManagerFunctionArn: + Value: !GetAtt JumpRoleManagerFunction.Arn + + ManagerFunctionName: + Value: !Ref JumpRoleManagerFunction diff --git a/src/lambda_codebase/account/handler.py b/src/lambda_codebase/account/handler.py index c75d08d2f..15281d29d 100644 --- a/src/lambda_codebase/account/handler.py +++ b/src/lambda_codebase/account/handler.py @@ -28,6 +28,8 @@ def lambda_handler(event, _context, prior_error=err): "StackId": event["StackId"], "Reason": str(prior_error), } + if not event["ResponseURL"].lower().startswith('http'): + raise ValueError('ResponseURL is forbidden') from None with urlopen( Request( event["ResponseURL"], diff --git a/src/lambda_codebase/account/main.py b/src/lambda_codebase/account/main.py index 2edfc93eb..5e00a86bd 100644 --- a/src/lambda_codebase/account/main.py +++ b/src/lambda_codebase/account/main.py @@ -220,13 +220,27 @@ def ensure_account( "Using existing deployment account as specified %s.", existing_account_id, ) - if is_update and not ssm_deployment_account_id: + if not ssm_deployment_account_id: LOGGER.info( - "The %s param was not found, creating it as we are " - "updating ADF", + "The %s parameter was not found, creating it", DEPLOYMENT_ACCOUNT_ID_PARAM_PATH, ) _set_deployment_account_id_parameter(existing_account_id) + parameter_mismatch = ( + ssm_deployment_account_id + and ssm_deployment_account_id != existing_account_id + ) + if parameter_mismatch: + raise RuntimeError( + "Failed to configure the deployment account. " + f"The {DEPLOYMENT_ACCOUNT_ID_PARAM_PATH} parameter has " + f"account id {ssm_deployment_account_id} configured, while " + f"the current operation requests using {existing_account_id} " + "instead. These need to match, if you are sure you want to " + f"use {existing_account_id}, please update or delete the " + f"{DEPLOYMENT_ACCOUNT_ID_PARAM_PATH} parameter in AWS Systems " + "Manager Parameter Store and try again.", + ) return existing_account_id, False # If no existing account ID was provided, check if the ID is stored in diff --git a/src/lambda_codebase/account/tests/test_main.py b/src/lambda_codebase/account/tests/test_main.py index 3ee29eadc..a9f7838fa 100644 --- a/src/lambda_codebase/account/tests/test_main.py +++ b/src/lambda_codebase/account/tests/test_main.py @@ -61,6 +61,80 @@ def test_deployment_account_given( assert returned_account_id == account_id assert not created + logger.info.assert_has_calls([ + call( + 'Using existing deployment account as specified %s.', + account_id, + ), + call( + 'The %s parameter was not found, creating it', + DEPLOYMENT_ACCOUNT_ID_PARAM_PATH, + ), + ]) + concur_mod_fn.assert_not_called() + wait_on_fn.assert_not_called() + ssm_client.get_parameter.assert_called_once_with( + Name=DEPLOYMENT_ACCOUNT_ID_PARAM_PATH, + ) + ssm_client.put_parameter.assert_called_once_with( + Name=DEPLOYMENT_ACCOUNT_ID_PARAM_PATH, + Value=account_id, + Description=SSM_PARAMETER_ADF_DESCRIPTION, + Type="String", + Overwrite=True, + ) + find_orgs_api.assert_not_called() + org_client.create_account.assert_not_called() + + +@patch("main.ORGANIZATION_CLIENT") +@patch("main.SSM_CLIENT") +@patch("main._find_deployment_account_via_orgs_api") +@patch("main._wait_on_account_creation") +@patch("main._handle_concurrent_modification") +@patch("main.LOGGER") +def test_deployment_account_given_mismatch_ssm_param( + logger, concur_mod_fn, wait_on_fn, find_orgs_api, ssm_client, org_client +): + account_id = "123456789012" + ssm_account_id = "111111111111" + account_name = "test-deployment-account" + account_email = 
"test@amazon.com" + cross_account_access_role_name = "some-role" + ssm_client.exceptions.ParameterNotFound = ParameterNotFound + org_client.exceptions.ConcurrentModificationException = ( + ConcurrentModificationException + ) + + ssm_client.get_parameter.return_value = { + "Parameter": { + "Value": ssm_account_id, + } + } + find_orgs_api.return_value = "" + + with pytest.raises(RuntimeError) as excinfo: + ensure_account( + account_id, + account_name, + account_email, + cross_account_access_role_name, + is_update=False, + ) + + error_message = str(excinfo.value) + correct_error_message = ( + "Failed to configure the deployment account. " + f"The {DEPLOYMENT_ACCOUNT_ID_PARAM_PATH} parameter has " + f"account id {ssm_account_id} configured, while " + f"the current operation requests using {account_id} " + "instead. These need to match, if you are sure you want to " + f"use {account_id}, please update or delete the " + f"{DEPLOYMENT_ACCOUNT_ID_PARAM_PATH} parameter in AWS Systems " + "Manager Parameter Store and try again." + ) + assert error_message.find(correct_error_message) >= 0 + logger.info.assert_called_once_with( 'Using existing deployment account as specified %s.', account_id, @@ -112,7 +186,7 @@ def test_deployment_account_given_on_update_no_params( account_id, ), call( - 'The %s param was not found, creating it as we are updating ADF', + 'The %s parameter was not found, creating it', DEPLOYMENT_ACCOUNT_ID_PARAM_PATH, ), ]) diff --git a/src/lambda_codebase/account_bootstrap.py b/src/lambda_codebase/account_bootstrap.py index 54e212d1d..03b28db9e 100644 --- a/src/lambda_codebase/account_bootstrap.py +++ b/src/lambda_codebase/account_bootstrap.py @@ -30,6 +30,7 @@ S3_BUCKET = os.environ["S3_BUCKET_NAME"] REGION_DEFAULT = os.environ["AWS_REGION"] PARTITION = get_partition(REGION_DEFAULT) +MANAGEMENT_ACCOUNT_ID = os.environ["MANAGEMENT_ACCOUNT_ID"] LOGGER = configure_logger(__name__) DEPLOY_TIME_IN_MS = 5 * 60 * 1000 @@ -44,15 +45,15 @@ def configure_generic_account(sts, event, region, role): try: deployment_account_id = event['deployment_account_id'] cross_account_access_role = event['cross_account_access_role'] - role_arn = ( - f'arn:{PARTITION}:iam::{deployment_account_id}:' - f'role/{cross_account_access_role}' - ) - deployment_account_role = sts.assume_cross_account_role( - role_arn=role_arn, - role_session_name='configure_generic', + deployment_account_role = sts.assume_bootstrap_deployment_role( + PARTITION, + MANAGEMENT_ACCOUNT_ID, + deployment_account_id, + cross_account_access_role, + 'configure_generic', ) + parameter_store_deployment_account = ParameterStore( event['deployment_account_region'], deployment_account_role, @@ -68,7 +69,7 @@ def configure_generic_account(sts, event, region, role): f'cross_region/s3_regional_bucket/{region}', ) org_stage = parameter_store_deployment_account.fetch_parameter( - '/adf/org/stage' + 'org/stage', ) except (ClientError, ParameterNotFoundError): raise GenericAccountConfigureError( @@ -82,7 +83,16 @@ def configure_generic_account(sts, event, region, role): 'deployment_account_id', event['deployment_account_id'], ) - parameter_store_target_account.put_parameter('/adf/org/stage', org_stage) + if region == event['deployment_account_region']: + parameter_store_target_account.put_parameter( + 'management_account_id', + MANAGEMENT_ACCOUNT_ID, + ) + parameter_store_target_account.put_parameter( + 'bootstrap_templates_bucket', + S3_BUCKET, + ) + parameter_store_target_account.put_parameter('org/stage', org_stage) def 
configure_management_account_parameters(event): @@ -127,13 +137,6 @@ def configure_deployment_account_parameters(event, role): parameter_store.put_parameter(key, value) -def is_inter_ou_account_move(event): - return ( - not event["source_ou_id"].startswith('r-') - and not event["destination_ou_id"].startswith('r-') - ) - - def lambda_handler(event, context): try: return _lambda_handler(event, context) @@ -150,13 +153,13 @@ def _lambda_handler(event, context): account_id = event["account_id"] cross_account_access_role = event["cross_account_access_role"] - role_arn = ( - f'arn:{PARTITION}:iam::{account_id}:role/{cross_account_access_role}' - ) - role = sts.assume_cross_account_role( - role_arn=role_arn, - role_session_name='management_lambda', + role = sts.assume_bootstrap_deployment_role( + PARTITION, + MANAGEMENT_ACCOUNT_ID, + account_id, + cross_account_access_role, + 'management_lambda', ) if event['is_deployment_account']: @@ -207,8 +210,6 @@ def _lambda_handler(event, context): s3_key_path=event["full_path"], account_id=account_id ) - if is_inter_ou_account_move(event): - cloudformation.delete_all_base_stacks(True) # override Wait cloudformation.create_stack() if region == event["deployment_account_region"]: cloudformation.create_iam_stack() diff --git a/src/lambda_codebase/account_processing/configure_account_alias.py b/src/lambda_codebase/account_processing/configure_account_alias.py index f24dcbc69..ce3717e6d 100644 --- a/src/lambda_codebase/account_processing/configure_account_alias.py +++ b/src/lambda_codebase/account_processing/configure_account_alias.py @@ -16,8 +16,9 @@ patch_all() LOGGER = configure_logger(__name__) -ADF_ROLE_NAME = os.getenv("ADF_ROLE_NAME") +ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME = os.getenv("ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME") AWS_PARTITION = os.getenv("AWS_PARTITION") +MANAGEMENT_ACCOUNT_ID = os.getenv('MANAGEMENT_ACCOUNT_ID') def delete_account_aliases(account, iam_client, current_aliases): @@ -75,8 +76,11 @@ def lambda_handler(event, _): if event.get("alias"): sts = STS() account_id = event.get("account_id") - role = sts.assume_cross_account_role( - f"arn:{AWS_PARTITION}:iam::{account_id}:role/{ADF_ROLE_NAME}", + role = sts.assume_bootstrap_deployment_role( + AWS_PARTITION, + MANAGEMENT_ACCOUNT_ID, + account_id, + ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME, "adf_account_alias_config", ) ensure_account_has_alias(event, role.client("iam")) diff --git a/src/lambda_codebase/account_processing/create_account.py b/src/lambda_codebase/account_processing/create_account.py index ab4368228..55d6aa36e 100644 --- a/src/lambda_codebase/account_processing/create_account.py +++ b/src/lambda_codebase/account_processing/create_account.py @@ -15,16 +15,16 @@ patch_all() LOGGER = configure_logger(__name__) -ADF_ROLE_NAME = os.getenv("ADF_ROLE_NAME") +ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME = os.getenv("ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME") -def create_account(account, adf_role_name, org_client): +def create_account(account, adf_privileged_role_name, org_client): LOGGER.info("Creating account %s", account.get('account_full_name')) allow_billing = "ALLOW" if account.get("allow_billing", False) else "DENY" response = org_client.create_account( Email=account.get("email"), AccountName=account.get("account_full_name"), - RoleName=adf_role_name, # defaults to OrganizationAccountAccessRole + RoleName=adf_privileged_role_name, # defaults to OrganizationAccountAccessRole IamUserAccessToBilling=allow_billing, )["CreateAccountStatus"] while response["State"] == "IN_PROGRESS": @@ 
-44,4 +44,4 @@ def create_account(account, adf_role_name, org_client): def lambda_handler(event, _): org_client = boto3.client("organizations") - return create_account(event, ADF_ROLE_NAME, org_client) + return create_account(event, ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME, org_client) diff --git a/src/lambda_codebase/account_processing/delete_default_vpc.py b/src/lambda_codebase/account_processing/delete_default_vpc.py index 586f278a4..038584a4c 100644 --- a/src/lambda_codebase/account_processing/delete_default_vpc.py +++ b/src/lambda_codebase/account_processing/delete_default_vpc.py @@ -17,14 +17,20 @@ patch_all() LOGGER = configure_logger(__name__) -ADF_ROLE_NAME = os.getenv("ADF_ROLE_NAME") +ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME = os.getenv( + "ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME", +) AWS_PARTITION = os.getenv("AWS_PARTITION") +MANAGEMENT_ACCOUNT_ID = os.getenv('MANAGEMENT_ACCOUNT_ID') def assume_role(account_id): sts = STS() - return sts.assume_cross_account_role( - f"arn:{AWS_PARTITION}:iam::{account_id}:role/{ADF_ROLE_NAME}", + return sts.assume_bootstrap_deployment_role( + AWS_PARTITION, + MANAGEMENT_ACCOUNT_ID, + account_id, + ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME, "adf_delete_default_vpc", ) diff --git a/src/lambda_codebase/account_processing/get_account_regions.py b/src/lambda_codebase/account_processing/get_account_regions.py index 76487153a..f2c7c7fee 100644 --- a/src/lambda_codebase/account_processing/get_account_regions.py +++ b/src/lambda_codebase/account_processing/get_account_regions.py @@ -15,8 +15,11 @@ patch_all() LOGGER = configure_logger(__name__) -ADF_ROLE_NAME = os.getenv("ADF_ROLE_NAME") +ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME = os.getenv( + "ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME", +) AWS_PARTITION = os.getenv("AWS_PARTITION") +MANAGEMENT_ACCOUNT_ID = os.getenv('MANAGEMENT_ACCOUNT_ID') def get_default_regions_for_account(ec2_client): @@ -40,8 +43,11 @@ def lambda_handler(event, _): LOGGER.info("Fetching Default regions %s", event.get("account_full_name")) sts = STS() account_id = event.get("account_id") - role = sts.assume_cross_account_role( - f"arn:{AWS_PARTITION}:iam::{account_id}:role/{ADF_ROLE_NAME}", + role = sts.assume_bootstrap_deployment_role( + AWS_PARTITION, + MANAGEMENT_ACCOUNT_ID, + account_id, + ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME, "adf_account_get_regions", ) default_regions = get_default_regions_for_account(role.client("ec2")) diff --git a/src/lambda_codebase/account_processing/requirements.txt b/src/lambda_codebase/account_processing/requirements.txt index 0d3022cc7..7555f7199 100644 --- a/src/lambda_codebase/account_processing/requirements.txt +++ b/src/lambda_codebase/account_processing/requirements.txt @@ -1,3 +1,5 @@ aws-xray-sdk==2.13.0 +boto3==1.34.80 +botocore==1.34.80 pyyaml~=6.0.1 tenacity==8.2.3 diff --git a/src/lambda_codebase/cleanup_legacy_stacks/cleanup_legacy_stacks.py b/src/lambda_codebase/cleanup_legacy_stacks/cleanup_legacy_stacks.py new file mode 100644 index 000000000..9f434d2ba --- /dev/null +++ b/src/lambda_codebase/cleanup_legacy_stacks/cleanup_legacy_stacks.py @@ -0,0 +1,90 @@ +# Copyright Amazon.com Inc. or its affiliates. +# SPDX-License-Identifier: MIT-0 + +# pylint: skip-file + +""" +Checks if specific legacy bootstrap stacks exist. +If they do, they are cleaned up automatically. 
+""" + +import os + +import boto3 +from cfn_custom_resource import ( # pylint: disable=unused-import + lambda_handler, + create, + update, + delete, +) + +from cloudformation import CloudFormation, StackProperties +from logger import configure_logger + +ACCOUNT_ID = os.environ["MANAGEMENT_ACCOUNT_ID"] +DEPLOYMENT_REGION = os.environ["DEPLOYMENT_REGION"] +ADF_GLOBAL_ADF_BUILD_STACK_NAME = 'adf-global-base-adf-build' + +LOGGER = configure_logger(__name__) + + +def delete_adf_build_stack(): + cloudformation = CloudFormation( + region=DEPLOYMENT_REGION, + deployment_account_region=DEPLOYMENT_REGION, + role=boto3, + stack_name=ADF_GLOBAL_ADF_BUILD_STACK_NAME, + wait=True, + account_id=ACCOUNT_ID, + ) + LOGGER.debug( + '%s in %s - Checking if stack exists: %s', + ACCOUNT_ID, + DEPLOYMENT_REGION, + ADF_GLOBAL_ADF_BUILD_STACK_NAME, + ) + stack_status = cloudformation.get_stack_status() + if cloudformation.get_stack_status(): + if stack_status not in StackProperties.clean_stack_status: + raise RuntimeError( + 'Please remove stack %s in %s manually, state %s implies that ' + 'it cannot be deleted automatically. ADF cannot be installed ' + 'or updated until this stack is removed.', + ADF_GLOBAL_ADF_BUILD_STACK_NAME, + DEPLOYMENT_REGION, + stack_status, + ) + + cloudformation.delete_stack( + stack_name=ADF_GLOBAL_ADF_BUILD_STACK_NAME, + ) + LOGGER.debug( + '%s in %s - Stack deleted successfully: %s', + ACCOUNT_ID, + DEPLOYMENT_REGION, + ADF_GLOBAL_ADF_BUILD_STACK_NAME, + ) + else: + LOGGER.debug( + '%s in %s - Stack does not exist: %s', + ACCOUNT_ID, + DEPLOYMENT_REGION, + ADF_GLOBAL_ADF_BUILD_STACK_NAME, + ) + + +@create() +def create_(event, _context): + delete_adf_build_stack() + return event.get("PhysicalResourceId"), {} + + +@update() +def update_(event, _context): + delete_adf_build_stack() + return event.get("PhysicalResourceId"), {} + + +@delete() +def delete_(_event, _context): + pass diff --git a/src/lambda_codebase/cleanup_legacy_stacks/handler.py b/src/lambda_codebase/cleanup_legacy_stacks/handler.py new file mode 100644 index 000000000..e0111c312 --- /dev/null +++ b/src/lambda_codebase/cleanup_legacy_stacks/handler.py @@ -0,0 +1,47 @@ +# Copyright Amazon.com Inc. or its affiliates. +# SPDX-License-Identifier: MIT-0 + +""" +The Cleanup Legacy Stacks Handler that is called when ADF is installed or +updated remove previous ADF stacks in the management account if these +were to exist. 
+""" + +try: + from cleanup_legacy_stacks import lambda_handler # pylint: disable=unused-import +except Exception as err: # pylint: disable=broad-except + import os + import logging + from urllib.request import Request, urlopen + import json + + LOGGER = logging.getLogger(__name__) + LOGGER.setLevel(os.environ.get("ADF_LOG_LEVEL", logging.INFO)) + + def lambda_handler(event, _context, prior_error=err): + payload = { + "LogicalResourceId": event["LogicalResourceId"], + "PhysicalResourceId": event.get( + "PhysicalResourceId", + "NOT_YET_CREATED", + ), + "Status": "FAILED", + "RequestId": event["RequestId"], + "StackId": event["StackId"], + "Reason": str(prior_error), + } + if not event["ResponseURL"].lower().startswith('http'): + raise ValueError('ResponseURL is forbidden') from None + with urlopen( + Request( + event["ResponseURL"], + data=json.dumps(payload).encode(), + headers={"content-type": ""}, + method="PUT", + ) + ) as response: + response_body = response.read().decode("utf-8") + LOGGER.debug( + "Response: %s", + response_body, + ) diff --git a/src/lambda_codebase/cleanup_legacy_stacks/requirements.txt b/src/lambda_codebase/cleanup_legacy_stacks/requirements.txt new file mode 100644 index 000000000..70f2daef7 --- /dev/null +++ b/src/lambda_codebase/cleanup_legacy_stacks/requirements.txt @@ -0,0 +1,2 @@ +boto3==1.34.80 +cfn-custom-resource~=1.0.1 diff --git a/src/lambda_codebase/cross_region_bucket/handler.py b/src/lambda_codebase/cross_region_bucket/handler.py index 636e296c0..33f51650c 100644 --- a/src/lambda_codebase/cross_region_bucket/handler.py +++ b/src/lambda_codebase/cross_region_bucket/handler.py @@ -29,6 +29,8 @@ def lambda_handler(event, _context, prior_error=err): "StackId": event["StackId"], "Reason": str(prior_error), } + if not event["ResponseURL"].lower().startswith('http'): + raise ValueError('ResponseURL is forbidden') from None with urlopen( Request( event["ResponseURL"], diff --git a/src/lambda_codebase/cross_region_bucket/main.py b/src/lambda_codebase/cross_region_bucket/main.py index ae1223471..72d4785eb 100644 --- a/src/lambda_codebase/cross_region_bucket/main.py +++ b/src/lambda_codebase/cross_region_bucket/main.py @@ -81,6 +81,7 @@ def create_(event: Mapping[str, Any], _context: Any) -> CloudFormationResponse: bucket_name_prefix = event["ResourceProperties"]["BucketNamePrefix"] bucket_name, created = ensure_bucket(region, bucket_name_prefix) ensure_bucket_encryption(bucket_name, region) + ensure_bucket_ownership_controls(bucket_name, region) ensure_bucket_has_no_public_access(bucket_name, region) if policy: ensure_bucket_policy(bucket_name, region, policy) @@ -97,6 +98,7 @@ def update_(event: Mapping[str, Any], _context: Any) -> CloudFormationResponse: bucket_name_prefix = event["ResourceProperties"]["BucketNamePrefix"] bucket_name, created = ensure_bucket(region, bucket_name_prefix) ensure_bucket_encryption(bucket_name, region) + ensure_bucket_ownership_controls(bucket_name, region) ensure_bucket_has_no_public_access(bucket_name, region) if policy: ensure_bucket_policy(bucket_name, region, policy) @@ -196,6 +198,20 @@ def ensure_bucket_encryption(bucket_name: str, region: str) -> None: ) +def ensure_bucket_ownership_controls(bucket_name: str, region: str) -> None: + s3_client = get_s3_client(region) + s3_client.put_bucket_ownership_controls( + Bucket=bucket_name, + OwnershipControls={ + "Rules": [ + { + "ObjectOwnership": "BucketOwnerEnforced", + }, + ], + }, + ) + + def ensure_bucket_has_no_public_access(bucket_name: str, region: str) -> None: s3_client = 
get_s3_client(region) s3_client.put_public_access_block( @@ -217,11 +233,18 @@ def ensure_bucket_policy( partition = get_partition(region) s3_client = get_s3_client(region) + bucket_arn = f"arn:{partition}:s3:::{bucket_name}" for action in policy["Statement"]: - action["Resource"] = [ - f"arn:{partition}:s3:::{bucket_name}", - f"arn:{partition}:s3:::{bucket_name}/*", - ] + if action.get("Resource"): + action["Resource"] = list(map( + lambda res: res.replace('{bucket_arn}', bucket_arn), + action["Resource"], + )) + else: + action["Resource"] = [ + bucket_arn, + f"{bucket_arn}/*", + ] s3_client.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy)) diff --git a/src/lambda_codebase/deployment_account_config.py b/src/lambda_codebase/deployment_account_config.py deleted file mode 100644 index 4da722330..000000000 --- a/src/lambda_codebase/deployment_account_config.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright Amazon.com Inc. or its affiliates. -# SPDX-License-Identifier: MIT-0 - -# pylint: skip-file - -""" -Executes as part of the bootstrap process when the Deployment Account -is initially created and moved into its OU. This step creates a AWS -CloudFormation stack on the management account (containing IAM roles). - -It is deployed in the same region defined as the Deployment Account -Region that allows the DeploymentAccount access to query AWS -Organizations when it needs to create pipelines. -""" - -import os - -import boto3 - -from cloudformation import CloudFormation -from s3 import S3 - -S3_BUCKET = os.environ["S3_BUCKET_NAME"] -MANAGEMENT_ACCOUNT_ID = os.environ["MANAGEMENT_ACCOUNT_ID"] -REGION_DEFAULT = os.environ["AWS_REGION"] - - -def lambda_handler(event, _): - s3 = S3(region=REGION_DEFAULT, bucket=S3_BUCKET) - - cloudformation = CloudFormation( - region=event['deployment_account_region'], - deployment_account_region=event['deployment_account_region'], - role=boto3, - wait=True, - stack_name=None, - s3=s3, - s3_key_path='adf-build', - account_id=event["account_id"] - ) - cloudformation.create_stack() - - return event diff --git a/src/lambda_codebase/event.py b/src/lambda_codebase/event.py index 1bd49f03a..9a3b66d43 100644 --- a/src/lambda_codebase/event.py +++ b/src/lambda_codebase/event.py @@ -13,7 +13,8 @@ from errors import ParameterNotFoundError, RootOUIDError DEPLOYMENT_ACCOUNT_OU_NAME = 'deployment' -DEPLOYMENT_ACCOUNT_S3_BUCKET = os.environ["DEPLOYMENT_ACCOUNT_BUCKET"] +SHARED_MODULES_BUCKET = os.environ["SHARED_MODULES_BUCKET"] +BOOTSTRAP_TEMPLATES_BUCKET = os.environ["S3_BUCKET_NAME"] ADF_VERSION = os.environ["ADF_VERSION"] ADF_LOG_LEVEL = os.environ["ADF_LOG_LEVEL"] @@ -125,7 +126,6 @@ def create_output_object(self, account_path): 'adf_log_level': ADF_LOG_LEVEL, 'adf_version': ADF_VERSION, 'cross_account_access_role': self.cross_account_access_role, - 'deployment_account_bucket': DEPLOYMENT_ACCOUNT_S3_BUCKET, 'deployment_account_id': self.deployment_account_id, 'management_account_id': organization_information.get( "organization_management_account_id" @@ -135,6 +135,8 @@ def create_output_object(self, account_path): 'organization_id': organization_information.get( "organization_id" ), + 'shared_modules_bucket': SHARED_MODULES_BUCKET, + 'bootstrap_templates_bucket': BOOTSTRAP_TEMPLATES_BUCKET, 'extensions/terraform/enabled': ( self._read_parameter( 'extensions/terraform/enabled', diff --git a/src/lambda_codebase/generic_account_config.py b/src/lambda_codebase/generic_account_config.py index fdb0c8cb2..5f0af6d9e 100644 --- 
a/src/lambda_codebase/generic_account_config.py +++ b/src/lambda_codebase/generic_account_config.py @@ -21,6 +21,7 @@ LOGGER = configure_logger(__name__) REGION_DEFAULT = os.getenv('AWS_REGION') +MANAGEMENT_ACCOUNT_ID = os.getenv('MANAGEMENT_ACCOUNT_ID') def lambda_handler(event, _): @@ -30,11 +31,11 @@ def lambda_handler(event, _): partition = get_partition(REGION_DEFAULT) cross_account_access_role = event.get('cross_account_access_role') - role = sts.assume_cross_account_role( - ( - f'arn:{partition}:iam::{deployment_account_id}:' - f'role/{cross_account_access_role}' - ), + role = sts.assume_bootstrap_deployment_role( + partition, + MANAGEMENT_ACCOUNT_ID, + deployment_account_id, + cross_account_access_role, 'step_function', ) diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/example-global-iam.yml b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/example-global-iam.yml index 39e114eac..ca582e1e2 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/example-global-iam.yml +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/example-global-iam.yml @@ -44,9 +44,8 @@ Resources: - Effect: Allow Sid: "CloudFormation" Action: - # These are examples, please update these to the least privilege policy required: - - "s3:*" - - "ecr:*" + # An example action, please update these to the least privilege policy required: + - "cloudwatch:PutMetricAlarm" Resource: - "*" Roles: @@ -165,6 +164,7 @@ Resources: # # add this policy # Type: AWS::IAM::Role # Properties: +# Path: / # RoleName: "adf-terraform-role" # AssumeRolePolicyDocument: # Version: "2012-10-17" @@ -176,7 +176,6 @@ Resources: # - !Sub arn:aws:iam::${AWS::AccountId}:role/adf-codebuild-role # Action: # - sts:AssumeRole -# Path: / # ADFTerraformPolicy: # Type: AWS::IAM::Policy # Properties: diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/global.yml b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/global.yml index ef395e36d..0665d73ba 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/global.yml +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/global.yml @@ -20,7 +20,12 @@ Parameters: SharedModulesBucket: Type: "AWS::SSM::Parameter::Value" - Default: /adf/deployment_account_bucket + Default: /adf/shared_modules_bucket + + BootstrapTemplatesBucketName: + Type: "AWS::SSM::Parameter::Value" + Description: Bootstrap Templates Bucket Name + Default: /adf/bootstrap_templates_bucket OrganizationId: Type: "AWS::SSM::Parameter::Value" @@ -82,12 +87,15 @@ Resources: Type: "AWS::Serverless::LayerVersion" Properties: ContentUri: "../../adf-build/shared/python" + CompatibleArchitectures: + - arm64 CompatibleRuntimes: - python3.12 Description: "Shared Lambda Layer between management and deployment account" LayerName: adf_shared_layer Metadata: BuildMethod: python3.12 + BuildArchitecture: arm64 KMSKey: Type: AWS::KMS::Key @@ -132,7 +140,8 @@ Resources: - kms:DescribeKey - kms:Encrypt - kms:GenerateDataKey* - - kms:ReEncrypt* + - kms:ReEncryptFrom + - kms:ReEncryptTo Resource: "*" Condition: StringEquals: @@ -144,9 +153,22 @@ Resources: Principal: Service: - sns.amazonaws.com - - events.amazonaws.com - codecommit.amazonaws.com Resource: "*" + Condition: + StringEquals: + "aws:SourceAccount": !Ref AWS::AccountId + - Action: + - kms:Decrypt + - 
kms:GenerateDataKey* + Effect: Allow + Principal: + Service: + - events.amazonaws.com + Resource: "*" + Condition: + ArnLike: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:events:${AWS::Region}:${AWS::AccountId}:rule/*" KMSAlias: Type: AWS::KMS::Alias @@ -185,9 +207,9 @@ Resources: LambdaLayer: !Ref ADFSharedPythonLambdaLayerVersion ADFVersion: !Ref ADFVersion OrganizationId: !Ref OrganizationId - CrossAccountAccessRole: !Ref CrossAccountAccessRole PipelineBucket: !Ref PipelineBucket - RootAccountId: !Ref ManagementAccountId + PipelineBucketKmsKeyArn: !GetAtt KMSKey.Arn + ManagementAccountId: !Ref ManagementAccountId CodeBuildImage: !Ref Image CodeBuildComputeType: !Ref ComputeType SharedModulesBucket: !Ref SharedModulesBucket @@ -198,6 +220,7 @@ Resources: CodeCommitRole: Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-codecommit-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -210,11 +233,12 @@ Resources: - Effect: Allow Principal: Service: - - events.amazonaws.com - codepipeline.amazonaws.com Action: - sts:AssumeRole - Path: / + Condition: + StringEqualsIfExists: + "aws:SourceAccount": !Ref AWS::AccountId CodeCommitPolicy: Type: AWS::IAM::Policy @@ -225,28 +249,27 @@ Resources: Statement: - Effect: Allow Action: - - "codecommit:BatchGetRepositories" - "codecommit:CancelUploadArchive" - - "codecommit:Get*" + - "codecommit:GetBranch" + - "codecommit:GetCommit" + - "codecommit:GetUploadArchiveStatus" - "codecommit:GitPull" - - "codecommit:List*" - "codecommit:UploadArchive" - "codepipeline:StartPipelineExecution" - "events:PutEvents" - - "s3:Get*" - - "s3:List*" - - "s3:Put*" Resource: "*" + - Effect: Allow + Action: + - "s3:PutObject" + Resource: + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* - Effect: Allow Action: - "kms:Decrypt" - - "kms:Describe*" - - "kms:DescribeKey" - "kms:Encrypt" - - "kms:GenerateDataKey*" - - "kms:Get*" - - "kms:List*" - - "kms:ReEncrypt*" + - "kms:GenerateDataKey" + - "kms:ReEncryptFrom" + - "kms:ReEncryptTo" Resource: !GetAtt KMSKey.Arn Roles: - !Ref CodeCommitRole @@ -254,6 +277,7 @@ Resources: CodeBuildRole: Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-codebuild-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -264,6 +288,9 @@ Resources: - codebuild.amazonaws.com Action: - sts:AssumeRole + Condition: + ArnLike: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:codebuild:${AWS::Region}:${AWS::AccountId}:project/*" CodeBuildRolePolicy: Type: AWS::IAM::Policy @@ -272,45 +299,17 @@ Resources: PolicyDocument: Version: "2012-10-17" Statement: - - Effect: Allow - Sid: "DynamoDB" - Action: - - dynamodb:PutItem - - dynamodb:GetItem - - dynamodb:DeleteItem - - dynamodb:DescribeTable - Resource: - - !Sub "arn:${AWS::Partition}:dynamodb:*:${AWS::AccountId}:table/adf-tflocktable*" - - Effect: Allow - Sid: "S3" - Action: - - s3:Get* - - s3:GetBucketPolicy - - s3:List* - - s3:PutObject - Resource: - - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket} - - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* - Effect: Allow Sid: "S3ReadOnly" Action: - - s3:Get* + - s3:GetObject - s3:GetBucketPolicy - - s3:List* + - s3:ListBucket Resource: - !Sub arn:${AWS::Partition}:s3:::${SharedModulesBucket} - !Sub arn:${AWS::Partition}:s3:::${SharedModulesBucket}/* - !Sub arn:${AWS::Partition}:s3:::${PipelineManagementApplication.Outputs.DefinitionBucket} - !Sub arn:${AWS::Partition}:s3:::${PipelineManagementApplication.Outputs.DefinitionBucket}/* - - Effect: Allow - Sid: "KMS" - Action: - - kms:Decrypt - - kms:DescribeKey - - kms:Encrypt - - 
kms:GenerateDataKey* - - kms:ReEncrypt* - Resource: !GetAtt KMSKey.Arn - Effect: Allow Action: - "organizations:DescribeOrganization" @@ -320,7 +319,10 @@ Resources: Action: - "sts:AssumeRole" Resource: - - "*" + - !Sub arn:${AWS::Partition}:iam::*:role/adf-readonly-automation-role + Condition: + StringEquals: + aws:PrincipalOrgID: !Ref OrganizationId - Effect: Allow Action: - "ssm:GetParameter" @@ -377,10 +379,78 @@ Resources: Roles: - !Ref CodeBuildRole + CodeBuildDeployBucketRolePolicyS3: + Type: AWS::IAM::Policy + Properties: + PolicyName: "adf-codebuild-role-policy-s3" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Sid: "S3" + Action: + - s3:GetObject + - s3:GetBucketPolicy + - s3:ListBucket + - s3:PutObject + Resource: + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket} + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* + Roles: + - !Ref CodeBuildRole + + CodeBuildDeployBucketRolePolicyKMS: + Type: AWS::IAM::Policy + Properties: + PolicyName: "adf-codebuild-role-policy-kms" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Sid: "KMS" + Action: + - kms:Decrypt + - kms:DescribeKey + - kms:Encrypt + - kms:GenerateDataKey + - kms:ReEncryptFrom + - kms:ReEncryptTo + Resource: !GetAtt KMSKey.Arn + Roles: + - !Ref CodeBuildRole + + CodeBuildTerraformAssumeRolePolicy: + Condition: ADFTerraformExtensionEnabled + Type: AWS::IAM::Policy + Properties: + PolicyName: "adf-codebuild-tf-policy" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Sid: "DynamoDB" + Action: + - dynamodb:PutItem + - dynamodb:GetItem + - dynamodb:DeleteItem + - dynamodb:DescribeTable + Resource: + - !Sub "arn:${AWS::Partition}:dynamodb:*:${AWS::AccountId}:table/adf-tflocktable*" + - Effect: Allow + Action: + - "sts:AssumeRole" + Resource: + - !Sub arn:${AWS::Partition}:iam::*:role/adf-terraform-role + Condition: + StringEquals: + aws:PrincipalOrgID: !Ref OrganizationId + Roles: + - !Ref CodeBuildRole + PipelineGenerationProvisionerCodeBuildRole: Type: AWS::IAM::Role Properties: - Path: "/adf-automation/" + Path: /adf/bootstrap/ AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -390,6 +460,9 @@ Resources: - codebuild.amazonaws.com Action: - sts:AssumeRole + Condition: + ArnEquals: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:codebuild:${AWS::Region}:${AWS::AccountId}:project/aws-deployment-framework-base" PipelineGenerationProvisionerCodeBuildRolePolicy: Type: AWS::IAM::Policy @@ -401,9 +474,10 @@ Resources: - Effect: Allow Sid: "S3" Action: - - s3:Get* - s3:GetBucketPolicy - - s3:List* + - s3:GetObject + - s3:GetObjectAttributes + - s3:ListBucket - s3:PutObject - s3:DeleteObject - s3:DeleteObjectVersion @@ -413,12 +487,18 @@ Resources: - Effect: Allow Sid: "S3ReadOnly" Action: - - s3:Get* + - s3:GetObject - s3:GetBucketPolicy - - s3:List* + - s3:ListBucket Resource: - !Sub arn:${AWS::Partition}:s3:::${SharedModulesBucket} - !Sub arn:${AWS::Partition}:s3:::${SharedModulesBucket}/* + - Effect: Allow + Sid: "PipelineAssets" + Action: + - s3:PutObject + Resource: + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/adf-build/templates/* - Effect: Allow Sid: "KMS" Action: @@ -426,7 +506,8 @@ Resources: - kms:DescribeKey - kms:Encrypt - kms:GenerateDataKey* - - kms:ReEncrypt* + - kms:ReEncryptFrom + - kms:ReEncryptTo Resource: !GetAtt KMSKey.Arn - Effect: Allow Action: @@ -435,6 +516,12 @@ Resources: - "logs:PutLogEvents" Resource: - !Sub arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/* + - 
Effect: Allow + Sid: "KickOffDeletion" + Action: + - "states:StartExecution" + Resource: + - !Sub arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:adf-pipeline-management-delete-outdated - Effect: Allow Sid: "DescripePipelineTrigger" Action: @@ -447,6 +534,7 @@ Resources: CloudFormationRole: Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-cloudformation-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -459,11 +547,12 @@ Resources: - Effect: Allow Principal: Service: - - cloudformation.amazonaws.com - codepipeline.amazonaws.com Action: - sts:AssumeRole - Path: / + Condition: + StringEqualsIfExists: + "aws:SourceAccount": !Ref AWS::AccountId CloudFormationPolicy: Type: AWS::IAM::Policy @@ -475,12 +564,40 @@ Resources: - Effect: Allow Sid: "CloudFormation" Action: - - cloudformation:* - - iam:PassRole - - s3:Get* - - s3:List* - - s3:Put* + - cloudformation:ValidateTemplate + - cloudformation:CreateStack + - cloudformation:DeleteStack + - cloudformation:DescribeStackEvents + - cloudformation:DescribeStacks + - cloudformation:UpdateStack + - cloudformation:CreateChangeSet + - cloudformation:DeleteChangeSet + - cloudformation:DescribeChangeSet + - cloudformation:ExecuteChangeSet + - cloudformation:SetStackPolicy + - cloudformation:ValidateTemplate Resource: "*" + - Effect: Allow + Sid: "CloudFormationPassRole" + Action: + - iam:PassRole + Resource: + - !Sub arn:${AWS::Partition}:iam::*:role/adf-cloudformation-deployment-role + Condition: + StringEquals: + aws:PrincipalOrgID: !Ref OrganizationId + StringEqualsIfExists: + "iam:PassedToService": + - "cloudformation.amazonaws.com" + - Effect: Allow + Sid: "CloudFormationGetAssets" + Action: + - s3:GetObject + - s3:ListBucket + - s3:PutObject + Resource: + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket} + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* Roles: - !Ref CloudFormationRole CloudFormationPolicyKMS: @@ -497,7 +614,8 @@ Resources: - kms:DescribeKey - kms:Encrypt - kms:GenerateDataKey* - - kms:ReEncrypt* + - kms:ReEncryptFrom + - kms:ReEncryptTo Resource: !GetAtt KMSKey.Arn Roles: - !Ref CloudFormationRole @@ -505,6 +623,7 @@ Resources: CloudFormationDeploymentRole: Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-cloudformation-deployment-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -515,20 +634,18 @@ Resources: - cloudformation.amazonaws.com Action: - sts:AssumeRole - - Effect: Allow - Principal: - AWS: !GetAtt CodeBuildRole.Arn - Action: - - sts:AssumeRole - Path: / + Condition: + StringEqualsIfExists: + "aws:SourceAccount": !Ref AWS::AccountId AdfAutomationRole: # This role is used by CodeBuild on the Deployment Account when # creating new CodePipeline Pipelines. 
- # This role is not assumed # by CodeBuild in any other pipeline + # This role is not assumed by CodeBuild in any other pipeline # other than 'aws-deployment-framework-pipelines' Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-automation-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -544,7 +661,6 @@ Resources: AWS: !Sub arn:${AWS::Partition}:iam::${AWS::AccountId}:root Action: - sts:AssumeRole - Path: / CloudFormationDeploymentPolicy: Type: AWS::IAM::Policy @@ -560,7 +676,8 @@ Resources: - "kms:DescribeKey" - "kms:Encrypt" - "kms:GenerateDataKey*" - - "kms:ReEncrypt*" + - "kms:ReEncryptFrom" + - "kms:ReEncryptTo" Resource: !GetAtt KMSKey.Arn Roles: - !Ref CloudFormationDeploymentRole @@ -575,8 +692,7 @@ Resources: - Effect: Allow Sid: "S3" Action: - - s3:Get* - - s3:List* + - s3:GetObject Resource: - !Sub "arn:${AWS::Partition}:s3:::${PipelineBucket}/adf-build/templates/*" - Effect: Allow @@ -627,9 +743,16 @@ Resources: - "ssm:GetParameters" - "ssm:GetParameter" Resource: - - !Sub "arn:${AWS::Partition}:ssm:*:*:parameter/adf/bucket_name" - - !Sub "arn:${AWS::Partition}:ssm:*:*:parameter/adf/deployment_account_id" - - !Sub "arn:${AWS::Partition}:ssm:*:*:parameter/adf/kms_arn" + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/bucket_name" + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/deployment_account_id" + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/kms_arn" + - Effect: Allow + Sid: "KMS" + Action: + # These are required for cross account deployments via CodePipeline. + - "kms:Decrypt" + - "kms:DescribeKey" + Resource: !GetAtt KMSKey.Arn Roles: - !Ref AdfAutomationRole @@ -662,7 +785,11 @@ Resources: Value: !Ref AWS::AccountId - Name: SHARED_MODULES_BUCKET Value: !Ref SharedModulesBucket - - Name: ADF_PIPELINES_BUCKET + - Name: ADF_PIPELINE_ASSET_BUCKET + Value: !Ref PipelineBucket + - Name: ADF_PIPELINE_ASSET_KMS_ARN + Value: !GetAtt KMSKey.Arn + - Name: ADF_PIPELINES_MANAGEMENT_BUCKET Value: !GetAtt PipelineManagementApplication.Outputs.Bucket - Name: ADF_LOG_LEVEL Value: INFO @@ -678,9 +805,9 @@ Resources: install: runtime-versions: python: 3.12 - nodejs: 20 commands: - - aws s3 cp s3://$SHARED_MODULES_BUCKET/adf-build/ ./adf-build/ --recursive --quiet + - aws s3 cp s3://$SHARED_MODULES_BUCKET/adf-build/ ./adf-build/ --recursive --only-show-errors + - aws s3 cp --sse aws:kms --sse-kms-key-id $ADF_PIPELINE_ASSET_KMS_ARN ./adf-build/templates/ s3://$ADF_PIPELINE_ASSET_BUCKET/adf-build/templates/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -r adf-build/helpers/requirements.txt -q -t ./adf-build pre_build: commands: @@ -688,25 +815,100 @@ Resources: build: commands: - python adf-build/helpers/describe_codepipeline_trigger.py --should-match StartPipelineExecution aws-deployment-framework-pipelines ${!CODEPIPELINE_EXECUTION_ID} && EXTRA_OPTS="--force" || EXTRA_OPTS="" - - python adf-build/helpers/sync_to_s3.py ${!EXTRA_OPTS} --delete --metadata adf_version=${!ADF_VERSION} --upload-with-metadata execution_id=${!CODEPIPELINE_EXECUTION_ID} deployment_map.yml s3://$ADF_PIPELINES_BUCKET/deployment_map.yml - - python adf-build/helpers/sync_to_s3.py ${!EXTRA_OPTS} --delete --extension .yml --extension .yaml --metadata adf_version=${!ADF_VERSION} --upload-with-metadata execution_id=${!CODEPIPELINE_EXECUTION_ID} --recursive deployment_maps s3://$ADF_PIPELINES_BUCKET/deployment_maps + - python adf-build/helpers/sync_to_s3.py ${!EXTRA_OPTS} --delete --metadata 
adf_version=${!ADF_VERSION} --upload-with-metadata execution_id=${!CODEPIPELINE_EXECUTION_ID} deployment_map.yml s3://$ADF_PIPELINES_MANAGEMENT_BUCKET/deployment_map.yml + - python adf-build/helpers/sync_to_s3.py ${!EXTRA_OPTS} --delete --extension .yml --extension .yaml --metadata adf_version=${!ADF_VERSION} --upload-with-metadata execution_id=${!CODEPIPELINE_EXECUTION_ID} --recursive deployment_maps s3://$ADF_PIPELINES_MANAGEMENT_BUCKET/deployment_maps post_build: commands: - - echo "Pipelines are updated in the AWS Step Functions ADFPipelineManagementStateMachine." + - echo "Kick-off deletion of outdated pipelines:" + - aws stepfunctions start-execution --state-machine-arn "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:adf-pipeline-management-delete-outdated" + - echo "" + - echo "Pipelines are updated in the AWS Step Functions adf-pipeline-management." - echo "Please track their progress via:" - - echo "https://${AWS::Region}.console.aws.amazon.com/states/home?region=${AWS::Region}#/statemachines/view/arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:ADFPipelineManagementStateMachine" + - echo "https://${AWS::Region}.console.aws.amazon.com/states/home?region=${AWS::Region}#/statemachines/view/arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:adf-pipeline-management" ServiceRole: !GetAtt PipelineGenerationProvisionerCodeBuildRole.Arn Tags: - Key: "Name" Value: "aws-deployment-framework-base" + PipelineManagementCodePipelineRole: + Type: AWS::IAM::Role + Properties: + Path: /adf/pipeline-management/ + RoleName: "adf-pipeline-management-codepipeline" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Principal: + Service: + - codepipeline.amazonaws.com + Action: + - sts:AssumeRole + Condition: + StringEqualsIfExists: + "aws:SourceAccount": !Ref AWS::AccountId + + PipelineManagementCodePipelinePolicy: + Type: AWS::IAM::Policy + DependsOn: PipelineBucketPolicy + Properties: + PolicyName: "adf-pipeline-management-policy" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Sid: "CodeBuild" + Action: + - codebuild:BatchGetBuilds + - codebuild:StartBuild + Resource: + - !GetAtt CodeBuildProject.Arn + - Effect: Allow + Sid: "CodePipelineAssets" + Action: + - s3:GetObjectVersion + - s3:GetObjectVersionAcl + - s3:GetObjectVersionTagging + - s3:GetReplicationConfiguration + - s3:ListBucket + - s3:PutObject + - s3:ReplicateDelete + - s3:ReplicateObject + - s3:ReplicateTags + Resource: + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket} + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* + - Effect: Allow + Sid: "CodeCommit" + Action: + - codecommit:GetBranch + - codecommit:GetCommit + - codecommit:UploadArchive + - codecommit:GetUploadArchiveStatus + - codecommit:CancelUploadArchive + Resource: + - !GetAtt CodeCommitRepository.Arn + - Effect: Allow + Action: + - kms:Decrypt + - kms:Encrypt + - kms:GenerateDataKey + - kms:ReEncryptFrom + - kms:ReEncryptTo + Resource: !GetAtt KMSKey.Arn + Roles: + - !Ref PipelineManagementCodePipelineRole + CodePipeline: Type: AWS::CodePipeline::Pipeline Properties: ArtifactStore: + EncryptionKey: + Id: !GetAtt KMSKey.Arn + Type: KMS Type: S3 Location: !Ref PipelineBucket - RoleArn: !GetAtt CodePipelineRole.Arn + RoleArn: !GetAtt PipelineManagementCodePipelineRole.Arn RestartExecutionOnUpdate: true Name: "aws-deployment-framework-pipelines" Stages: @@ -791,20 +993,34 @@ Resources: Id: !Sub "${AWS::StackName}" Version: 
"2012-10-17" Statement: - - Effect: Allow + - Sid: "AllowCodeCommitAndEvents" + Effect: Allow Principal: Service: - codecommit.amazonaws.com - events.amazonaws.com + Action: sns:Publish + Resource: "*" + Condition: + StringEquals: + "aws:SourceAccount": !Ref AWS::AccountId + - Sid: "AllowStateMachine" + Effect: Allow + Principal: + Service: - states.amazonaws.com Action: sns:Publish Resource: "*" + Condition: + ArnLike: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:*" Topics: - !Ref PipelineSNSTopic CodePipelineRole: Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-codepipeline-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -812,17 +1028,12 @@ Resources: - Effect: Allow Principal: Service: - - cloudformation.amazonaws.com - - codedeploy.amazonaws.com - codepipeline.amazonaws.com - - s3.amazonaws.com - Action: - - sts:AssumeRole - - Effect: Allow - Principal: - AWS: !Ref AWS::AccountId Action: - sts:AssumeRole + Condition: + StringEqualsIfExists: + "aws:SourceAccount": !Ref AWS::AccountId CodePipelineRolePolicy: # See https://docs.aws.amazon.com/codepipeline/latest/userguide/how-to-custom-role.html#how-to-update-role-new-services @@ -848,8 +1059,6 @@ Resources: - cloudformation:ValidateTemplate - codebuild:BatchGetBuilds - codebuild:StartBuild - - codebuild:BatchGetBuilds - - codebuild:StartBuild - ecr:DescribeImages - ecs:DescribeServices - ecs:DescribeTaskDefinition @@ -864,14 +1073,6 @@ Resources: - codedeploy:RegisterApplicationRevision - lambda:InvokeFunction - lambda:ListFunctions - - s3:GetObjectVersion - - s3:GetObjectVersionAcl - - s3:GetObjectVersionTagging - - s3:GetReplicationConfiguration - - s3:ListBucket - - s3:ReplicateDelete - - s3:ReplicateObject - - s3:ReplicateTags - servicecatalog:CreateProvisioningArtifact - servicecatalog:DeleteProvisioningArtifact - servicecatalog:DescribeProvisioningArtifact @@ -890,18 +1091,6 @@ Resources: - codecommit:CancelUploadArchive Resource: - "*" - - Effect: Allow - Sid: "PassRole" - Action: - - "iam:PassRole" - Resource: "*" - Condition: - StringEqualsIfExists: - "iam:PassedToService": - - cloudformation.amazonaws.com - - elasticbeanstalk.amazonaws.com - - ec2.amazonaws.com - - ecs-tasks.amazonaws.com - Effect: Allow Sid: "AllowCodeConnections" Action: @@ -986,14 +1175,69 @@ Resources: PolicyDocument: Statement: - Action: - - "s3:Get*" - - "s3:List*" - - "s3:PutObject*" - - "s3:PutReplicationConfiguration" + - "s3:GetObject" + Effect: Allow + Condition: + StringEquals: + aws:PrincipalOrgID: !Ref OrganizationId + Resource: + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* + Principal: + AWS: "*" + - Sid: "AllowCodeCommitFromOrgSources" + Action: + - "s3:PutObject" Effect: Allow Condition: StringEquals: aws:PrincipalOrgID: !Ref OrganizationId + ArnLike: + aws:PrincipalArn: 'arn:aws:iam::*:role/adf-codecommit-role' + Resource: + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* + Principal: + AWS: "*" + - Sid: "DenyUnencryptedObjects" + Action: + - "s3:PutObject" + Effect: Deny + Condition: + StringNotEquals: + "s3:x-amz-server-side-encryption": "aws:kms" + Resource: + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* + Principal: + AWS: "*" + - Sid: "DenyDifferentKMSKey" + Action: + - "s3:PutObject" + Effect: Deny + Condition: + ArnNotEqualsIfExists: + "s3:x-amz-server-side-encryption-aws-kms-key-id": !GetAtt KMSKey.Arn + Resource: + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* + Principal: + AWS: "*" + - Sid: "DenyInsecureConnections" + Action: + 
- "s3:*" + Effect: Deny + Condition: + Bool: + aws:SecureTransport: "false" + Resource: + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket} + - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* + Principal: + AWS: "*" + - Sid: "DenyInsecureTLS" + Action: + - "s3:*" + Effect: Deny + Condition: + NumericLessThan: + "s3:TlsVersion": "1.2" Resource: - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket} - !Sub arn:${AWS::Partition}:s3:::${PipelineBucket}/* @@ -1058,7 +1302,8 @@ Resources: SendSlackNotificationLambdaRole: Type: "AWS::IAM::Role" Properties: - RoleName: "adf-send-slack-notification-lambda-role" + Path: /adf/bootstrap/ + RoleName: "adf-pipeline-send-slack-notification-lambda" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -1068,7 +1313,6 @@ Resources: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" - Path: "/" Policies: - PolicyName: "adf-send-slack-notification" PolicyDocument: @@ -1089,7 +1333,8 @@ Resources: CheckPipelineStatusLambdaRole: Type: "AWS::IAM::Role" Properties: - RoleName: "adf-check-pipeline-status-lambda-role" + Path: /adf/bootstrap/ + RoleName: "adf-pipeline-check-pipeline-status-lambda" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -1099,7 +1344,6 @@ Resources: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" - Path: "/" Policies: - PolicyName: "adf-check-pipeline-status" PolicyDocument: @@ -1115,7 +1359,8 @@ Resources: EnableCrossAccountAccessLambdaRole: Type: "AWS::IAM::Role" Properties: - RoleName: "adf-enable-cross-account-access-lambda-role" + Path: /adf/bootstrap/ + RoleName: "adf-bootstrap-pipeline-enable-cross-account-access-role" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -1125,7 +1370,6 @@ Resources: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" - Path: "/" Policies: - PolicyName: "adf-enable-cross-account-access" PolicyDocument: @@ -1134,7 +1378,7 @@ Resources: - Effect: "Allow" Action: "sts:AssumeRole" Resource: - - !Sub "arn:${AWS::Partition}:iam::*:role/adf-update-cross-account-access-role" + - !Sub "arn:${AWS::Partition}:iam::*:role/adf/bootstrap/adf-update-cross-account-access" Condition: StringEquals: aws:PrincipalOrgID: !Ref OrganizationId @@ -1155,6 +1399,7 @@ Resources: - "iam:GetRolePolicy" - "iam:PutRolePolicy" Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codebuild-role" - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codepipeline-role" - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-deployment-role" - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-role" @@ -1182,10 +1427,11 @@ Resources: - !Ref CheckPipelineStatusLambdaRole - !Ref EnableCrossAccountAccessLambdaRole - StatesExecutionRole: + EnableCrossAccountAccessStatesExecutionRole: Type: "AWS::IAM::Role" Properties: - RoleName: "adf-state-machine-role" + Path: "/adf/bootstrap/" + RoleName: "adf-bootstrap-enable-cross-account-state-machine" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -1194,19 +1440,25 @@ Resources: Service: - states.amazonaws.com Action: "sts:AssumeRole" - Path: "/" + Condition: + ArnLike: + "aws:SourceArn": + - !Sub "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:*" Policies: - - PolicyName: "adf-state-machine-role" + - PolicyName: "adf-state-machine-invoke" PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - "lambda:InvokeFunction" - - "sns:Publish" Resource: - !GetAtt EnableCrossAccountAccess.Arn - !GetAtt CheckPipelineStatus.Arn + - 
Effect: Allow + Action: + - "sns:Publish" + Resource: - !GetAtt PipelineSNSTopic.TopicArn LambdaInvokePermission: @@ -1216,12 +1468,13 @@ Resources: Principal: sns.amazonaws.com SourceArn: !Ref PipelineSNSTopic FunctionName: !Ref SendSlackNotification + SourceAccount: !Ref AWS::AccountId - StateMachine: + EnableCrossAccountAccessStateMachine: Type: "AWS::StepFunctions::StateMachine" Properties: - StateMachineName: "EnableCrossAccountAccess" - RoleArn: !GetAtt StatesExecutionRole.Arn + StateMachineName: "adf-bootstrap-enable-cross-account" + RoleArn: !GetAtt EnableCrossAccountAccessStatesExecutionRole.Arn TracingConfiguration: Enabled: true DefinitionString: !Sub |- @@ -1427,6 +1680,7 @@ Resources: PipelineCloudWatchEventRole: Type: AWS::IAM::Role Properties: + Path: /adf/bootstrap/ AssumeRolePolicyDocument: Version: 2012-10-17 Statement: @@ -1435,7 +1689,9 @@ Resources: Service: - events.amazonaws.com Action: sts:AssumeRole - Path: / + Condition: + ArnLike: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:events:${AWS::Region}:${AWS::AccountId}:rule/*" Policies: - PolicyName: adf-pipelines-execute-cwe PolicyDocument: @@ -1483,6 +1739,318 @@ Resources: BillingMode: PAY_PER_REQUEST TableName: adf-tflocktable + BootstrapTestRole: + # This role is used to test whether the AWS Account is bootstrapped or not. + # Do not attach any policies to this role. + Type: AWS::IAM::Role + Properties: + Path: /adf/bootstrap/ + RoleName: "adf-bootstrap-test-role" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Condition: + ArnEquals: + "aws:PrincipalArn": !Sub "arn:${AWS::Partition}:iam::${ManagementAccountId}:role/adf/account-bootstrapping/jump-manager/adf-bootstrapping-jump-manager-role" + Principal: + AWS: !Sub "arn:${AWS::Partition}:iam::${ManagementAccountId}:root" + Action: + - sts:AssumeRole + Policies: + - PolicyName: "lock-down-for-assumerole-test-only" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Deny + Action: "*" + Resource: "*" + + BootstrapUpdateDeploymentRole: + # This role is used to update the bootstrap stacks in the deployment + # account. + Type: AWS::IAM::Role + Properties: + Path: /adf/bootstrap/ + RoleName: "adf-bootstrap-update-deployment-role" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Condition: + ArnEquals: + "aws:PrincipalArn": !Sub "arn:${AWS::Partition}:iam::${ManagementAccountId}:role/adf/account-bootstrapping/jump/adf-bootstrapping-cross-account-jump-role" + Principal: + AWS: !Sub "arn:${AWS::Partition}:iam::${ManagementAccountId}:root" + Action: + - sts:AssumeRole + Policies: + - PolicyName: "limited-update-permissions-only" + PolicyDocument: + # Please note, that some of the resources are intentionally + # left out of scope for the update deployment role. + # The idea is to update the most common parts of ADF only. + # + # If it gets refactored, or some privileged resources need to + # get an update, the privileged cross-account access role should + # be used to update ADF instead. 
+ # + # --- + # + # Resources that can only be updated via the permissive roles: + # IAM Roles: + # - /adf/bootstrap/adf-bootstrap-pipeline-enable-cross-account-access-role + # - /adf/bootstrap/adf-bootstrap-enable-cross-account-state-machine + # - /adf/bootstrap/* (roles that do not have a Name set) + # - /adf/bootstrap/adf-bootstrap-test-role + # - /adf/bootstrap/adf-bootstrap-update-deployment-role + # + # KMS: + # - !Ref KMSKey + # + # KMSAlias: + # - !Sub "alias/codepipeline-${AWS::AccountId}" + # + # S3 Buckets: + # - !Ref PipelineBucket + # + # CodeCommit Repositories: + # - aws-deployment-framework-pipelines + # + # CodePipeline: + # - aws-deployment-framework-pipelines + # + # SNS Topics: + # - !Ref PipelineSNSTopic + # + # SNS Topic Policies: + # - !Ref PipelineSNSTopicPolicy + # + # Event Rules: + # - !Ref PipelineEventRule + # - !Ref PipelineCloudWatchEventRule + # + # Step Function State Machines: + # - adf-bootstrap-enable-cross-account + # + # DynamoDB Tables: + # - adf-tflocktable + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Action: + - "lambda:InvokeAsync" + - "lambda:InvokeFunction" + Resource: + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:ADFPipelinesDetermineDefaultBranchName" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:ADFPipelinesDetermineDefaultBranchName:*" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:PipelinesCreateInitialCommitFunction" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:PipelinesCreateInitialCommitFunction:*" + - Effect: "Allow" + Action: + - "lambda:GetFunction" + - "lambda:GetFunctionConfiguration" + - "lambda:GetFunctionEventInvokeConfig" + - "lambda:GetRuntimeManagementConfig" + - "lambda:ListFunctionEventInvokeConfigs" + - "lambda:ListTags" + - "lambda:ListVersionsByFunction" + - "lambda:PublishVersion" + - "lambda:PutFunctionConcurrency" + - "lambda:PutFunctionEventInvokeConfig" + - "lambda:PutRuntimeManagementConfig" + - "lambda:UpdateFunctionCode" + - "lambda:UpdateFunctionConfiguration" + Resource: + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:ADFPipelinesDetermineDefaultBranchName" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:ADFPipelinesDetermineDefaultBranchName:*" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:CheckPipelineStatus" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:CheckPipelineStatus:*" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:PipelinesCreateInitialCommitFunction" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:PipelinesCreateInitialCommitFunction:*" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:SendSlackNotification" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:SendSlackNotification:*" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:UpdateCrossAccountIAM" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:UpdateCrossAccountIAM:*" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-create-repository" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-create-repository:*" + - !Sub 
"arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-create-update-rule" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-create-update-rule:*" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-deployment-map-processor" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-deployment-map-processor:*" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-generate-pipeline-inputs" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-generate-pipeline-inputs:*" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-identify-out-of-date-pipelines" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-identify-out-of-date-pipelines:*" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-store-pipeline-definition" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:adf-pipeline-management-store-pipeline-definition:*" + - Effect: "Allow" + Action: + - "lambda:DeleteLayerVersion" + - "lambda:GetLayerVersion" + - "lambda:PublishLayerVersion" + Resource: + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:layer:adf_shared_layer" + - !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:layer:adf_shared_layer:*" + - Sid: "CodeBuildUpdate" + Effect: "Allow" + Action: + - "codebuild:UpdateProject" + Resource: + - !GetAtt CodeBuildProject.Arn + - Effect: "Allow" + Action: + - "cloudformation:CancelUpdateStack" + - "cloudformation:ContinueUpdateRollback" + - "cloudformation:DeleteChangeSet" + - "cloudformation:DeleteStack" + - "cloudformation:DescribeChangeSet" + - "cloudformation:DescribeStacks" + - "cloudformation:SetStackPolicy" + - "cloudformation:SignalResource" + - "cloudformation:UpdateTerminationProtection" + Resource: + # Across all regions, as it needs to be able to find and + # cleanup global stacks in non-global regions: + - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/adf-global-base-*/*" + - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/adf-regional-base-*/*" + - Sid: "PreventDeletingBootstrapStack" + Effect: "Deny" + Action: + - "cloudformation:DeleteStack" + - "cloudformation:UpdateTerminationProtection" + Resource: + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/adf-global-base-deployment-*" + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/adf-global-base-deployment" + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/adf-global-base-deployment/*" + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/adf-global-base-iam" + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/adf-global-base-iam/*" + - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/adf-regional-base-deployment" + - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/adf-regional-base-deployment/*" + - Effect: "Allow" + Action: + - "cloudformation:CreateChangeSet" + - "cloudformation:CreateStack" + - "cloudformation:CreateUploadBucket" + - 
"cloudformation:ExecuteChangeSet" + - "cloudformation:TagResource" + - "cloudformation:UntagResource" + - "cloudformation:UpdateStack" + Resource: + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:aws:transform/Serverless-2016-10-31" + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/adf-global-base-deployment-*/*" + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/adf-global-base-deployment/*" + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/adf-global-base-iam/*" + - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/adf-regional-base-deployment/*" + - Effect: "Allow" + Action: + - "cloudformation:ListStacks" + - "cloudformation:ValidateTemplate" + - "codecommit:ListRepositories" + - "ec2:DeleteInternetGateway" + - "ec2:DeleteNetworkInterface" + - "ec2:DeleteRouteTable" + - "ec2:DeleteSubnet" + - "ec2:DeleteVpc" + - "ec2:DescribeInternetGateways" + - "ec2:DescribeNetworkInterfaces" + - "ec2:DescribeRegions" + - "ec2:DescribeRouteTables" + - "ec2:DescribeSubnets" + - "ec2:DescribeVpcs" + - "iam:CreateAccountAlias" + - "iam:DeleteAccountAlias" + - "iam:ListAccountAliases" + Resource: + - "*" + - Effect: "Allow" + Action: + - "ssm:GetParameters" + - "ssm:GetParameter" + - "ssm:PutParameter" + Resource: + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/*" + - Sid: "IAMFullPathOnlyCreateDelete" + Effect: "Allow" + Action: + - "iam:CreateRole" + - "iam:DeleteRole" + Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-terraform-role" + - Sid: "IAMFullPathOnlyTag" + Effect: "Allow" + Action: + - "iam:TagRole" + - "iam:UntagRole" + Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-deployment-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codebuild-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codecommit-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codepipeline-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-readonly-automation-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-terraform-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/pipeline-management/adf-pipeline-management-codepipeline" + - Sid: "IAMFullPathAndNameOnly" + Effect: "Allow" + Action: + - "iam:DeleteRolePolicy" + - "iam:GetRole" + - "iam:GetRolePolicy" + - "iam:PutRolePolicy" + - "iam:UpdateAssumeRolePolicy" + Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-deployment-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codebuild-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codecommit-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codepipeline-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-pipeline-check-pipeline-status-lambda" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-pipeline-management-codepipeline" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-pipeline-send-slack-notification-lambda" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-readonly-automation-role" + - !Sub 
"arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-terraform-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/bootstrap/adf-pipeline-check-pipeline-status-lambda" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/bootstrap/adf-pipeline-send-slack-notification-lambda" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/pipeline-management/adf-pipeline-management-codepipeline" + - Sid: "IAMGetOnly" + Effect: "Allow" + Action: + - "iam:GetRole" + - "iam:GetRolePolicy" + Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-*" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/*" + - Effect: "Allow" + Action: + - "s3:GetObject" + Resource: + - !Sub "arn:${AWS::Partition}:s3:::${BootstrapTemplatesBucketName}/adf-bootstrap/*" + - !Sub "arn:${AWS::Partition}:s3:::${SharedModulesBucket}/adf-bootstrap/*" + - Effect: "Allow" + Action: + - "codecommit:GetRepository" + Resource: + - !GetAtt CodeCommitRepository.Arn + - Effect: "Allow" + Action: + - "codebuild:BatchGetProjects" + Resource: + - !GetAtt CodeBuildProject.Arn + - Effect: "Allow" + Action: + - "sns:GetTopicAttributes" + Resource: + - !Ref PipelineSNSTopic + - Effect: Allow + Sid: "KickOffPipelineManagement" + Action: + - "states:DescribeExecution" + - "states:StartExecution" + Resource: + - !Sub arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:adf-bootstrap-enable-cross-account + - !Sub arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:execution:adf-bootstrap-enable-cross-account:* + Outputs: ADFVersionNumber: Value: !Ref ADFVersion diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/determine_default_branch/handler.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/determine_default_branch/handler.py index 1f5001733..a1940833d 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/determine_default_branch/handler.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/determine_default_branch/handler.py @@ -29,6 +29,8 @@ def lambda_handler(event, _context, prior_error=err): "StackId": event["StackId"], "Reason": str(prior_error), } + if not event["ResponseURL"].lower().startswith('http'): + raise ValueError('ResponseURL is forbidden') from None with urlopen( Request( event["ResponseURL"], diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/enable_cross_account_access.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/enable_cross_account_access.py index b44cca42b..0cbda6b84 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/enable_cross_account_access.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/enable_cross_account_access.py @@ -40,8 +40,11 @@ # Role Policies are updated in the deployment account. 
 DEPLOYMENT_ROLE_POLICIES = {
+    "adf-codebuild-role": [
+        "adf-codebuild-role-policy-s3",
+        "adf-codebuild-role-policy-kms",
+    ],
     "adf-codepipeline-role": [
-        "adf-codepipeline-role-policy",
         "adf-codepipeline-role-policy-s3",
         "adf-codepipeline-role-policy-kms",
     ],
@@ -63,7 +66,7 @@ def _assume_role_if_required(account_id: str):
     try:
         role_arn_to_assume = (
             f'arn:{partition}:iam::{account_id}:'
-            f'role/adf-update-cross-account-access-role'
+            f'role/adf/bootstrap/adf-update-cross-account-access'
         )
         target_role = sts.assume_cross_account_role(
             role_arn_to_assume,
diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/iam_cfn_deploy_role_policy.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/iam_cfn_deploy_role_policy.py
index 464f3a750..c5c2f3a2c 100644
--- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/iam_cfn_deploy_role_policy.py
+++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/iam_cfn_deploy_role_policy.py
@@ -47,7 +47,7 @@ def __init__(self, client, role_name, policy_name):

     def _get_statement(self, statement_id):
         s3_statements = list(filter(
-            lambda stmt: stmt['Sid'] == statement_id,
+            lambda stmt: stmt.get('Sid') == statement_id,
             self.policy_document.get('Statement', {})
         ))
         if len(s3_statements) == 1:
diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/initial_commit/handler.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/initial_commit/handler.py
index 531b27467..04e09f349 100644
--- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/initial_commit/handler.py
+++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/initial_commit/handler.py
@@ -29,6 +29,8 @@ def lambda_handler(event, _context, prior_error=err):
             "StackId": event["StackId"],
             "Reason": str(prior_error),
         }
+        if not event["ResponseURL"].lower().startswith('http'):
+            raise ValueError('ResponseURL is forbidden') from None
         with urlopen(
             Request(
                 event["ResponseURL"],
diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/generate_pipeline_inputs.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/generate_pipeline_inputs.py
index 7f25c79b1..847c48fab 100644
--- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/generate_pipeline_inputs.py
+++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/generate_pipeline_inputs.py
@@ -21,7 +21,9 @@
 LOGGER = configure_logger(__name__)
 DEPLOYMENT_ACCOUNT_REGION = os.environ["AWS_REGION"]
 DEPLOYMENT_ACCOUNT_ID = os.environ["ACCOUNT_ID"]
-ROOT_ACCOUNT_ID = os.environ["ROOT_ACCOUNT_ID"]
+MANAGEMENT_ACCOUNT_ID = os.environ["MANAGEMENT_ACCOUNT_ID"]
+
+ORGANIZATIONS_READONLY_ROLE = "adf/organizations/adf-organizations-readonly"


 def store_regional_parameter_config(
@@ -70,7 +72,7 @@ def fetch_required_ssm_params(pipeline_input, regions):
         }
         if region == DEPLOYMENT_ACCOUNT_REGION:
             output[region]["modules"] = parameter_store.fetch_parameter(
-                "deployment_account_bucket"
+                "shared_modules_bucket"
             )
    output["default_scm_branch"] = parameter_store.fetch_parameter(
        "scm/default_scm_branch",
@@ -209,13 +211,10 @@ def lambda_handler(event, _):
     """
     parameter_store = ParameterStore(DEPLOYMENT_ACCOUNT_REGION, boto3)
     sts = STS()
-    cross_account_role_name = parameter_store.fetch_parameter(
-        "cross_account_access_role",
-    )
     role = sts.assume_cross_account_role(
         (
             f"arn:{get_partition(DEPLOYMENT_ACCOUNT_REGION)}:iam::"
-            f"{ROOT_ACCOUNT_ID}:role/{cross_account_role_name}-readonly"
+            f"{MANAGEMENT_ACCOUNT_ID}:role/{ORGANIZATIONS_READONLY_ROLE}"
         ),
         "pipeline",
     )
diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/identify_out_of_date_pipelines.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/identify_out_of_date_pipelines.py
index e17a79005..af1140437 100644
--- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/identify_out_of_date_pipelines.py
+++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/identify_out_of_date_pipelines.py
@@ -152,7 +152,8 @@ def lambda_handler(event, _):

     output["pipelines_to_be_deleted"] = out_of_date_pipelines
     data_md5 = hashlib.md5(
-        json.dumps(output, sort_keys=True).encode("utf-8")
+        json.dumps(output, sort_keys=True).encode("utf-8"),
+        usedforsecurity=False,
     ).hexdigest()
     root_trace_id = os.getenv("_X_AMZN_TRACE_ID", "na=na;na=na").split(";")[0]
     output["traceroot"] = root_trace_id
diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/process_deployment_map.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/process_deployment_map.py
index 2f4014bdc..6e9ca39ee 100644
--- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/process_deployment_map.py
+++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/process_deployment_map.py
@@ -187,7 +187,10 @@ def start_executions(
         if len(sfn_execution_name) > 80:
             truncated_pipeline_name = full_pipeline_name[:60]
             name_bytes_to_hash = bytes(full_pipeline_name, 'utf-8')
-            execution_unique_hash = hashlib.md5(name_bytes_to_hash).hexdigest()[:5]
+            execution_unique_hash = hashlib.md5(
+                name_bytes_to_hash,
+                usedforsecurity=False,
+            ).hexdigest()[:5]
             sfn_execution_name = f"{truncated_pipeline_name}-{execution_unique_hash}-{run_id}"[:80]
         sfn_client.start_execution(
             stateMachineArn=PIPELINE_MANAGEMENT_STATEMACHINE,
diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/templates/codecommit.yml b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/templates/codecommit.yml
deleted file mode 100644
index 6842f6968..000000000
--- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/templates/codecommit.yml
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright Amazon.com Inc. or its affiliates.
-# SPDX-License-Identifier: Apache-2.0 - -Parameters: - RepoName: - Type: String - Description: - Type: String - Default: Created by ADF -Resources: - Repo: - Type: AWS::CodeCommit::Repository - DeletionPolicy: Retain - UpdateReplacePolicy: Retain - Properties: - RepositoryName: !Ref RepoName - RepositoryDescription: !Ref Description diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/templates/events.yml b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/templates/events.yml deleted file mode 100644 index a03fde62b..000000000 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/pipeline_management/templates/events.yml +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright Amazon.com Inc. or its affiliates. -# SPDX-License-Identifier: Apache-2.0 - -Parameters: - DeploymentAccountId: - Type: "AWS::SSM::Parameter::Value" - Description: Deployment Account ID - Default: /adf/deployment_account_id - -Resources: - EventRole: - Type: AWS::IAM::Role - Properties: - AssumeRolePolicyDocument: - Version: 2012-10-17 - Statement: - - Effect: Allow - Principal: - Service: - - events.amazonaws.com - Action: sts:AssumeRole - Path: / - Policies: - - PolicyName: !Sub events-to-${DeploymentAccountId} - PolicyDocument: - Version: 2012-10-17 - Statement: - - Effect: Allow - Action: events:PutEvents - Resource: '*' - - EventRule: - Type: AWS::Events::Rule - Properties: - Name: !Sub adf-cc-event-from-${AWS::AccountId}-to-${DeploymentAccountId} - EventPattern: - source: - - aws.codecommit - detail-type: - - 'CodeCommit Repository State Change' - detail: - event: - - referenceCreated - - referenceUpdated - referenceType: - - branch - Targets: - - Arn: !Sub arn:${AWS::Partition}:events:${AWS::Region}:${DeploymentAccountId}:event-bus/default - RoleArn: !GetAtt EventRole.Arn - Id: codecommit-push-event diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/slack.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/slack.py index bb5563fb0..8523008d7 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/slack.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/slack.py @@ -165,6 +165,8 @@ def send_message(url, payload): Sends the message to the designated slack webhook """ params = json.dumps(payload).encode('utf8') + if not url.lower().startswith('http'): + raise ValueError('URL to send message to is forbidden') from None req = urllib.request.Request( url, data=params, diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/pipeline_management.yml b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/pipeline_management.yml index 0cc3ea694..c523fe9f6 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/pipeline_management.yml +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/pipeline_management.yml @@ -18,15 +18,15 @@ Parameters: Type: String MinLength: "1" - CrossAccountAccessRole: + PipelineBucket: Type: String MinLength: "1" - PipelineBucket: + PipelineBucketKmsKeyArn: Type: String MinLength: "1" - RootAccountId: + ManagementAccountId: Type: String MinLength: "1" @@ -85,16 +85,12 
@@ Resources: - Effect: "Allow" Action: "lambda:GetLayerVersion" Resource: !Ref LambdaLayer - Roles: - - !Ref DeploymentMapProcessingLambdaRole - - !Ref CreateOrUpdateRuleLambdaRole - - !Ref CreateRepositoryLambdaRole - - !Ref GeneratePipelineInputsLambdaRole - - !Ref PipelineManagementCodeBuildProjectRole - - !Ref StoreDefinitionLambdaRole - - !Ref IdentifyOutOfDatePipelinesLambdaRole DeploymentMapProcessingLambdaRolePolicy: + # Should remain a ManagedPolicy that is not waited for via DependsOn + # By the time this function is called, the policies are in place. + # Otherwise we have a circular dependency due to the Pipeline Management + # Bucket reference and event handler depending on each other. Type: "AWS::IAM::ManagedPolicy" Properties: Description: "Policy to allow the deployment map processing Lambda to perform actions" @@ -103,42 +99,20 @@ Resources: Statement: - Effect: "Allow" Action: "s3:ListBucket" - Resource: !GetAtt ADFPipelineBucket.Arn + Resource: !GetAtt PipelineManagementBucket.Arn - Effect: "Allow" Action: "states:StartExecution" - Resource: !Sub "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:ADFPipelineManagementStateMachine" + Resource: !Sub "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:adf-pipeline-management" - Effect: "Allow" Action: "s3:GetObject" - Resource: !Sub "${ADFPipelineBucket.Arn}/*" + Resource: !Sub "${PipelineManagementBucket.Arn}/*" Roles: - !Ref DeploymentMapProcessingLambdaRole - CrossAccountCloudFormationPolicy: - Type: "AWS::IAM::ManagedPolicy" - Properties: - Description: "Policy to allow a lambda to upload a template to s3 and validate a cloudformation template" - PolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: Allow - Action: - - "s3:PutObject" - - "s3:GetObject" - Resource: - - !Sub "arn:${AWS::Partition}:s3:::${PipelineBucket}/*" - - Effect: Allow - Action: - - "cloudformation:ValidateTemplate" - Resource: - - "*" - Roles: - - !Ref CreateOrUpdateRuleLambdaRole - - !Ref CreateRepositoryLambdaRole - DeploymentMapProcessingLambdaRole: Type: "AWS::IAM::Role" Properties: - Path: "/adf-automation/" + Path: "/adf/pipeline-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -148,12 +122,14 @@ Resources: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" + ManagedPolicyArns: + - !Ref ADFPipelineManagementLambdaBasePolicy CreateOrUpdateRuleLambdaRole: Type: "AWS::IAM::Role" Properties: - Path: "/adf-automation/" - RoleName: "adf-pipeline-create-update-rule" + Path: "/adf/pipeline-management/" + RoleName: "adf-pipeline-management-create-update-rule" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -163,6 +139,9 @@ Resources: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" + ManagedPolicyArns: + - !Ref ADFPipelineManagementLambdaBasePolicy + - !Ref ADFAutomationRoleCrossAccountAccessRolePolicy Policies: - PolicyName: "adf-pipeline-create-update-rule-policy" PolicyDocument: @@ -174,13 +153,13 @@ Resources: - "ssm:GetParameters" - "ssm:GetParametersByPath" Resource: - - !Sub arn:${AWS::Partition}:ssm:*:*:parameter/adf/* + - !Sub arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/* CreateRepositoryLambdaRole: Type: "AWS::IAM::Role" Properties: - Path: "/adf-automation/" - RoleName: "adf-pipeline-create-repository" + Path: "/adf/pipeline-management/" + RoleName: "adf-pipeline-management-create-repository" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -190,6 +169,9 @@ Resources: - "lambda.amazonaws.com" Action: - 
"sts:AssumeRole" + ManagedPolicyArns: + - !Ref ADFPipelineManagementLambdaBasePolicy + - !Ref ADFAutomationRoleCrossAccountAccessRolePolicy Policies: - PolicyName: "adf-create-repo-function-policy" PolicyDocument: @@ -204,8 +186,8 @@ Resources: GeneratePipelineInputsLambdaRole: Type: "AWS::IAM::Role" Properties: - Path: "/adf-automation/" - RoleName: "adf-pipeline-provisioner-generate-inputs" + Path: "/adf/pipeline-management/" + RoleName: "adf-pipeline-management-generate-inputs" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -215,6 +197,8 @@ Resources: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" + ManagedPolicyArns: + - !Ref ADFPipelineManagementLambdaBasePolicy Policies: - PolicyName: "adf-generate-pipeline-input-function-policy" PolicyDocument: @@ -224,7 +208,7 @@ Resources: Action: - "sts:AssumeRole" Resource: - - !Sub "arn:${AWS::Partition}:iam::${RootAccountId}:role/${CrossAccountAccessRole}-readonly" + - !Sub "arn:${AWS::Partition}:iam::${ManagementAccountId}:role/adf/organizations/adf-organizations-readonly" - Effect: Allow Action: - "ssm:GetParameter" @@ -241,7 +225,7 @@ Resources: StoreDefinitionLambdaRole: Type: "AWS::IAM::Role" Properties: - Path: "/adf-automation/" + Path: "/adf/pipeline-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -251,6 +235,8 @@ Resources: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" + ManagedPolicyArns: + - !Ref ADFPipelineManagementLambdaBasePolicy Policies: - PolicyName: "adf-store-pipeline-definitions" PolicyDocument: @@ -260,12 +246,12 @@ Resources: Action: - "s3:PutObject" Resource: - - !Sub "${ADFDefinitionBucket.Arn}/*" + - !Sub "${PipelineDefinitionBucket.Arn}/*" IdentifyOutOfDatePipelinesLambdaRole: Type: "AWS::IAM::Role" Properties: - Path: "/adf-automation/" + Path: "/adf/pipeline-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -275,6 +261,8 @@ Resources: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" + ManagedPolicyArns: + - !Ref ADFPipelineManagementLambdaBasePolicy Policies: - PolicyName: "adf-get-deployment-maps" PolicyDocument: @@ -285,8 +273,8 @@ Resources: - "s3:ListBucket" - "s3:GetObject" Resource: - - !Sub "${ADFPipelineBucket.Arn}/*" - - !Sub "${ADFPipelineBucket.Arn}" + - !Sub "${PipelineManagementBucket.Arn}/*" + - !Sub "${PipelineManagementBucket.Arn}" - Effect: Allow Action: - "ssm:GetParametersByPath" @@ -296,6 +284,7 @@ Resources: StateMachineExecutionRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/pipeline-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -304,7 +293,10 @@ Resources: Service: - states.amazonaws.com Action: "sts:AssumeRole" - Path: "/" + Condition: + ArnLike: + "aws:SourceArn": + - !Sub "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:*" Policies: - PolicyName: "adf-state-machine-role-policy" PolicyDocument: @@ -325,17 +317,11 @@ Resources: - !GetAtt CreateRepositoryFunction.Arn - !GetAtt GeneratePipelineInputsFunction.Arn - !GetAtt StoreDefinitionFunction.Arn - - !GetAtt IdentifyOutOfDatePipelinesFunction.Arn - Effect: Allow Action: - "codebuild:StartBuild" Resource: - !GetAtt PipelineManagementCodeBuildProject.Arn - - Effect: Allow - Action: - - states:StartExecution - Resource: - - !Ref PipelineDeletionStateMachine - Effect: Allow Action: - events:PutTargets @@ -347,6 +333,7 @@ Resources: DeletionStateMachineExecutionRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/pipeline-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -355,7 
+342,9 @@ Resources: Service: - states.amazonaws.com Action: "sts:AssumeRole" - Path: "/" + Condition: + ArnLike: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:*" Policies: - PolicyName: "adf-state-machine-role-policy" PolicyDocument: @@ -367,6 +356,11 @@ Resources: - "xray:PutTraceSegments" - "cloudwatch:PutMetricData" Resource: "*" + - Effect: Allow + Action: + - "lambda:InvokeFunction" + Resource: + - !GetAtt IdentifyOutOfDatePipelinesFunction.Arn - PolicyName: "adf-deploy-cloudformation-delete" PolicyDocument: Version: "2012-10-17" @@ -388,7 +382,10 @@ Resources: PipelineManagementStateMachine: Type: "AWS::StepFunctions::StateMachine" Properties: - StateMachineName: "ADFPipelineManagementStateMachine" + RoleArn: !GetAtt StateMachineExecutionRole.Arn + StateMachineName: "adf-pipeline-management" + TracingConfiguration: + Enabled: true DefinitionString: !Sub |- { "Comment": "ADF Pipeline Management State Machine", @@ -523,8 +520,26 @@ Resources: "MaxAttempts": 12 } ], - "Next": "IdentifyOutOfDatePipelines" + "Next": "Success" }, + "Success": { + "Type": "Succeed" + } + } + } + + PipelineDeletionStateMachine: + Type: "AWS::StepFunctions::StateMachine" + Properties: + StateMachineName: "adf-pipeline-management-delete-outdated" + RoleArn: !GetAtt DeletionStateMachineExecutionRole.Arn + TracingConfiguration: + Enabled: true + DefinitionString: !Sub |- + { + "Comment": "Check if there are any outdated pipelines, if so, clean them up", + "StartAt": "IdentifyOutOfDatePipelines", + "States": { "IdentifyOutOfDatePipelines": { "Type": "Task", "Resource": "${IdentifyOutOfDatePipelinesFunction.Arn}", @@ -557,52 +572,15 @@ Resources: { "Variable": "$.pipelines_to_be_deleted", "IsPresent": true, - "Next": "InvokeDeleteStateMachine" + "Next": "Map" } ], "Default": "Success" }, - "InvokeDeleteStateMachine": { - "Type": "Task", - "Resource": "arn:${AWS::Partition}:states:::aws-sdk:sfn:startExecution", - "Parameters": { - "StateMachineArn": "${PipelineDeletionStateMachine}", - "Input.$": "$.pipelines_to_be_deleted", - "Name.$": "$.hash", - "TraceHeader.$": "$.traceroot" - }, - "Catch": [ - { - "ErrorEquals": [ - "Sfn.ExecutionAlreadyExistsException" - ], - "Next": "Success" - } - ], - "Next": "Success" - }, - "Success": { - "Type": "Succeed" - } - } - } - RoleArn: !GetAtt StateMachineExecutionRole.Arn - TracingConfiguration: - Enabled: true - - PipelineDeletionStateMachine: - Type: "AWS::StepFunctions::StateMachine" - Properties: - RoleArn: !GetAtt DeletionStateMachineExecutionRole.Arn - TracingConfiguration: - Enabled: true - DefinitionString: !Sub |- - { - "Comment": "Delete Stacks", - "StartAt": "Map", - "States": { "Map": { "Type": "Map", + "MaxConcurrency": 10, + "ItemsPath": "$.pipelines_to_be_deleted", "Iterator": { "StartAt": "DeleteStack", "States": { @@ -637,9 +615,7 @@ Resources: } } }, - "MaxConcurrency": 10, - "Next": "Success", - "ItemsPath": "$" + "Next": "Success" }, "Success": { "Type": "Succeed" @@ -661,9 +637,11 @@ Resources: - Name: ACCOUNT_ID Value: !Ref AWS::AccountId - Name: MANAGEMENT_ACCOUNT_ID - Value: !Ref RootAccountId + Value: !Ref ManagementAccountId - Name: S3_BUCKET_NAME Value: !Ref PipelineBucket + - Name: S3_BUCKET_KMS_KEY_ARN + Value: !Ref PipelineBucketKmsKeyArn - Name: SHARED_MODULES_BUCKET Value: !Ref SharedModulesBucket - Name: ADF_PIPELINE_PREFIX @@ -690,7 +668,7 @@ Resources: nodejs: 20 commands: - npm install aws-cdk@2.136.0 -g -y --quiet --no-progress - - aws s3 cp s3://$SHARED_MODULES_BUCKET/adf-build/ 
./adf-build/ --recursive --quiet + - aws s3 cp s3://$SHARED_MODULES_BUCKET/adf-build/ ./adf-build/ --recursive --only-show-errors - pip install -r adf-build/requirements.txt -q -t ./adf-build - chmod 755 adf-build/cdk/execute_pipeline_stacks.py adf-build/cdk/generate_pipeline_stacks.py build: @@ -700,12 +678,13 @@ Resources: - cp definition.json cdk_inputs/definition.json - cdk synth --app adf-build/cdk/generate_pipeline_stacks.py -vv - python adf-build/cdk/execute_pipeline_stacks.py + Name: "adf-pipeline-management-deploy" ServiceRole: !GetAtt PipelineManagementCodeBuildProjectRole.Arn PipelineManagementCodeBuildProjectRole: Type: AWS::IAM::Role Properties: - Path: "/adf-automation/" + Path: "/adf/pipeline-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -715,86 +694,109 @@ Resources: - codebuild.amazonaws.com Action: - sts:AssumeRole + Condition: + ArnEquals: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:codebuild:${AWS::Region}:${AWS::AccountId}:project/adf-pipeline-management-deploy" + ManagedPolicyArns: + - !Ref ADFPipelineManagementLambdaBasePolicy Policies: - - PolicyName: "adf-retrieve-pipeline-definitions" + - PolicyName: "adf-pipeline-files" PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - - "s3:GetObject" - - "s3:GetObjectVersion" + - "s3:PutObject" + Resource: + - !Sub "arn:${AWS::Partition}:s3:::${PipelineBucket}/pipelines/*" + - Effect: Allow + Action: - "s3:ListBucket" Resource: - - !Sub "${ADFDefinitionBucket.Arn}/*" - - !Sub "${ADFDefinitionBucket.Arn}" - - PolicyName: "adf-retrieve-shared-modules" - PolicyDocument: - Version: "2012-10-17" - Statement: + - !Sub "${PipelineDefinitionBucket.Arn}" + Condition: + StringLike: + "s3:prefix": "pipelines/*" - Effect: Allow Action: - - "s3:GetObject" - - "s3:GetObjectVersion" - "s3:ListBucket" Resource: - - !Sub "arn:${AWS::Partition}:s3:::${SharedModulesBucket}/*" - !Sub "arn:${AWS::Partition}:s3:::${SharedModulesBucket}" - - PolicyName: "adf-deploy-cloudformation-createupdate" + Condition: + StringLike: + "s3:prefix": "adf-build/*" + - Effect: Allow + Action: + - "s3:GetObject" + Resource: + - !Sub "${PipelineDefinitionBucket.Arn}/pipelines/*" + - !Sub "arn:${AWS::Partition}:s3:::${SharedModulesBucket}/adf-build/*" + - Effect: Allow + Action: + - "kms:Decrypt" + - "kms:GenerateDataKey" + Resource: + - !Ref PipelineBucketKmsKeyArn + - PolicyName: "adf-deploy-cloudformation" PolicyDocument: Version: "2012-10-17" Statement: - - Effect: Allow + - Sid: "CloudFormationCreateUpdate" + Effect: Allow Action: - - cloudformation:CreateStack - - cloudformation:UpdateStack + - "cloudformation:CreateStack" + - "cloudformation:UpdateStack" Resource: - - "*" + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${PipelinePrefix}*" Condition: StringEquals: 'aws:RequestTag/createdBy': "ADF" - - PolicyName: "adf-deploy-cloudformation-delete" - PolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: Allow + - Sid: "CloudFormationDelete" + Effect: Allow Action: - - cloudformation:DeleteStack - - cloudformation:UpdateTerminationProtection + - "cloudformation:DeleteStack" + - "cloudformation:UpdateTerminationProtection" Resource: - - "*" + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${PipelinePrefix}*" Condition: StringEquals: 'aws:ResourceTag/createdBy': "ADF" - - PolicyName: "adf-deploy-cloudformation-template" - PolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: Allow + - Sid: 
"CloudFormationStackActions" + Effect: Allow Action: - - cloudformation:DescribeStacks - - cloudformation:CreateChangeSet - - cloudformation:DeleteChangeSet - - cloudformation:DescribeChangeSet - - cloudformation:ExecuteChangeSet - - cloudformation:SetStackPolicy - - cloudformation:ValidateTemplate + - "cloudformation:DescribeStacks" + - "cloudformation:CreateChangeSet" + - "cloudformation:DeleteChangeSet" + - "cloudformation:DescribeChangeSet" + - "cloudformation:ExecuteChangeSet" + - "cloudformation:SetStackPolicy" + Resource: + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${PipelinePrefix}*" + - Sid: "CloudFormationValidate" + Effect: Allow + Action: + - "cloudformation:ValidateTemplate" Resource: - "*" - - Effect: Allow - Sid: "PassRole" + - Sid: "CloudFormationPassRole" + Effect: Allow Action: - 'iam:PassRole' Resource: - !GetAtt ADFPipelineManagementCloudFormationRole.Arn Condition: - StringEqualsIfExists: + StringEquals: 'iam:PassedToService': - - cloudformation.amazonaws.com + - "cloudformation.amazonaws.com" + ArnLike: + 'iam:AssociatedResourceArn': + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${PipelinePrefix}*" ADFPipelineManagementCloudFormationRole: Type: AWS::IAM::Role Properties: + Path: "/adf/pipeline-management/" + RoleName: "adf-pipeline-deployment" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -804,7 +806,9 @@ Resources: - cloudformation.amazonaws.com Action: - sts:AssumeRole - Path: / + Condition: + ArnLike: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${PipelinePrefix}*" Policies: - PolicyName: "adf-codepipeline-creation" PolicyDocument: @@ -857,14 +861,87 @@ Resources: - "iam:GetRole" - "iam:GetRolePolicy" - "iam:PutRolePolicy" + - "iam:TagRole" + - "iam:UntagRole" Resource: - - !Sub arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-pipeline-* - - Effect: Allow - Sid: "AllowPassRole" + - !Sub arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${PipelinePrefix}* + Condition: + StringEquals: + 'aws:ResourceTag/createdBy': "ADF" + - Sid: "PassRoleToCodeBuild" + Effect: Allow + Action: + - 'iam:PassRole' + Resource: + - !Sub arn:${AWS::Partition}:iam::*:role/adf-codebuild-role + Condition: + StringEquals: + "aws:ResourceOrgID": !Ref OrganizationId + StringEqualsIfExists: + 'iam:PassedToService': + - "codebuild.amazonaws.com" + - Sid: "PassRoleToCodeCommit" + Effect: Allow Action: - - "iam:PassRole" + - 'iam:PassRole' Resource: - - !Sub arn:${AWS::Partition}:iam::*:role/* + - !Sub arn:${AWS::Partition}:iam::*:role/adf-codecommit-role + Condition: + StringEquals: + "aws:ResourceOrgID": !Ref OrganizationId + StringEqualsIfExists: + 'iam:PassedToService': + - "codecommit.amazonaws.com" + - "codepipeline.amazonaws.com" + - Sid: "PassRoleToCodePipeline" + Effect: Allow + Action: + - 'iam:PassRole' + Resource: + - !Sub arn:${AWS::Partition}:iam::*:role/adf-codepipeline-role + - !Sub arn:${AWS::Partition}:iam::*:role/adf-cloudformation-role + Condition: + StringEquals: + "aws:ResourceOrgID": !Ref OrganizationId + StringEqualsIfExists: + 'iam:PassedToService': + - "codepipeline.amazonaws.com" + - Sid: "PassRoleToOthers" + Effect: Allow + Action: + - 'iam:PassRole' + Resource: + - !Sub arn:${AWS::Partition}:iam::*:role/adf-cloudformation-role + Condition: + StringEquals: + "aws:ResourceOrgID": !Ref OrganizationId + StringEqualsIfExists: + 'iam:PassedToService': + - "codedeploy.amazonaws.com" + - Sid: "PassRoleToCloudFormation" + 
Effect: Allow + Action: + - 'iam:PassRole' + Resource: + - !Sub arn:${AWS::Partition}:iam::*:role/adf-cloudformation-deployment-role + Condition: + StringEquals: + "aws:ResourceOrgID": !Ref OrganizationId + StringEqualsIfExists: + 'iam:PassedToService': + - "cloudformation.amazonaws.com" + - Sid: "PassPipelineRoles" + Effect: Allow + Action: + - 'iam:PassRole' + Resource: + - !Sub arn:${AWS::Partition}:iam::*:role/${PipelinePrefix}* + Condition: + StringEquals: + "aws:ResourceOrgID": !Ref OrganizationId + StringEqualsIfExists: + 'iam:PassedToService': + - "events.amazonaws.com" - Effect: Allow Sid: "CodeBuildVPC" Action: @@ -925,42 +1002,46 @@ Resources: - Effect: Allow Action: - "lambda:CreateEventSourceMapping" - - "lambda:AddPermission" - "lambda:CreateFunction" - "lambda:DeleteFunction" - "lambda:GetFunction" - "lambda:GetFunctionConfiguration" - - "lambda:RemovePermission" - "lambda:UpdateFunctionCode" - "lambda:UpdateFunctionConfiguration" Resource: "*" - Effect: Allow Action: - - "iam:TagPolicy" - - "iam:TagRole" + - "lambda:AddPermission" + - "lambda:RemovePermission" Resource: "*" + Condition: + StringEquals: + "lambda:Principal": + - "codepipeline.amazonaws.com" + - "events.amazonaws.com" + - "sns.amazonaws.com" + - "states.amazonaws.com" DeploymentMapProcessingFunction: Type: 'AWS::Serverless::Function' Properties: Handler: process_deployment_map.lambda_handler - Description: "ADF Lambda Function - Deployment Map Processing" + Description: "ADF - Pipeline Management - Deployment Map Processing" Environment: Variables: ACCOUNT_ID: !Ref AWS::AccountId ORGANIZATION_ID: !Ref OrganizationId ADF_VERSION: !Ref ADFVersion ADF_LOG_LEVEL: !Ref ADFLogLevel - PIPELINE_MANAGEMENT_STATE_MACHINE: !Sub "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:ADFPipelineManagementStateMachine" - ADF_ROLE_NAME: !Ref CrossAccountAccessRole - FunctionName: DeploymentMapProcessorFunction + PIPELINE_MANAGEMENT_STATE_MACHINE: !Sub "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:adf-pipeline-management" + FunctionName: adf-pipeline-management-deployment-map-processor Role: !GetAtt DeploymentMapProcessingLambdaRole.Arn Events: S3Event: Type: S3 Properties: Bucket: - Ref: ADFPipelineBucket + Ref: PipelineManagementBucket Events: s3:ObjectCreated:* Metadata: BuildMethod: python3.12 @@ -976,24 +1057,23 @@ Resources: Action: - "sts:AssumeRole" Resource: !Sub "arn:${AWS::Partition}:iam::*:role/adf-automation-role" - Roles: - - !Ref CreateOrUpdateRuleLambdaRole - - !Ref CreateRepositoryLambdaRole + Condition: + StringEquals: + "aws:ResourceOrgID": !Ref OrganizationId CreateOrUpdateRuleFunction: Type: 'AWS::Serverless::Function' Properties: Handler: create_or_update_rule.lambda_handler - Description: "ADF Lambda Function - Create or Update rule" + Description: "ADF - Pipeline Management - Create or Update Rule" Environment: Variables: ACCOUNT_ID: !Ref AWS::AccountId ORGANIZATION_ID: !Ref OrganizationId ADF_VERSION: !Ref ADFVersion ADF_LOG_LEVEL: !Ref ADFLogLevel - ADF_ROLE_NAME: !Ref CrossAccountAccessRole S3_BUCKET_NAME: !Ref PipelineBucket - FunctionName: ADFPipelineCreateOrUpdateRuleFunction + FunctionName: adf-pipeline-management-create-update-rule Role: !GetAtt CreateOrUpdateRuleLambdaRole.Arn Metadata: BuildMethod: python3.12 @@ -1002,16 +1082,15 @@ Resources: Type: 'AWS::Serverless::Function' Properties: Handler: create_repository.lambda_handler - Description: "ADF Lambda Function - Create Repository" + Description: "ADF - Pipeline Management - Create 
Repository" Environment: Variables: ACCOUNT_ID: !Ref AWS::AccountId ORGANIZATION_ID: !Ref OrganizationId ADF_VERSION: !Ref ADFVersion ADF_LOG_LEVEL: !Ref ADFLogLevel - ADF_ROLE_NAME: !Ref CrossAccountAccessRole S3_BUCKET_NAME: !Ref PipelineBucket - FunctionName: ADFPipelineCreateRepositoryFunction + FunctionName: adf-pipeline-management-create-repository Role: !GetAtt CreateRepositoryLambdaRole.Arn Metadata: BuildMethod: python3.12 @@ -1020,17 +1099,16 @@ Resources: Type: 'AWS::Serverless::Function' Properties: Handler: generate_pipeline_inputs.lambda_handler - Description: "ADF Lambda Function - Generate Pipeline Inputs" + Description: "ADF - Pipeline Management - Generate Pipeline Inputs" Environment: Variables: ACCOUNT_ID: !Ref AWS::AccountId ORGANIZATION_ID: !Ref OrganizationId ADF_VERSION: !Ref ADFVersion ADF_LOG_LEVEL: !Ref ADFLogLevel - ADF_ROLE_NAME: !Ref CrossAccountAccessRole S3_BUCKET_NAME: !Ref PipelineBucket - ROOT_ACCOUNT_ID: !Ref RootAccountId - FunctionName: ADFPipelineGenerateInputsFunction + MANAGEMENT_ACCOUNT_ID: !Ref ManagementAccountId + FunctionName: adf-pipeline-management-generate-pipeline-inputs Role: !GetAtt GeneratePipelineInputsLambdaRole.Arn Metadata: BuildMethod: python3.12 @@ -1039,17 +1117,16 @@ Resources: Type: 'AWS::Serverless::Function' Properties: Handler: store_pipeline_definition.lambda_handler - Description: "ADF Lambda Function - Store Pipeline Definition" + Description: "ADF - Pipeline Management - Store Pipeline Definition" Environment: Variables: ACCOUNT_ID: !Ref AWS::AccountId ORGANIZATION_ID: !Ref OrganizationId ADF_VERSION: !Ref ADFVersion ADF_LOG_LEVEL: !Ref ADFLogLevel - ADF_ROLE_NAME: !Ref CrossAccountAccessRole - S3_BUCKET_NAME: !Ref ADFDefinitionBucket - ROOT_ACCOUNT_ID: !Ref RootAccountId - FunctionName: ADFPipelineStoreDefinitionFunction + S3_BUCKET_NAME: !Ref PipelineDefinitionBucket + MANAGEMENT_ACCOUNT_ID: !Ref ManagementAccountId + FunctionName: adf-pipeline-management-store-pipeline-definition Role: !GetAtt StoreDefinitionLambdaRole.Arn Metadata: BuildMethod: python3.12 @@ -1058,23 +1135,22 @@ Resources: Type: 'AWS::Serverless::Function' Properties: Handler: identify_out_of_date_pipelines.lambda_handler - Description: "ADF Lambda Function - Identify Out Of Date Pipelines" + Description: "ADF - Pipeline Management - Identify Out Of Date Pipelines" Environment: Variables: ACCOUNT_ID: !Ref AWS::AccountId ORGANIZATION_ID: !Ref OrganizationId ADF_VERSION: !Ref ADFVersion ADF_LOG_LEVEL: !Ref ADFLogLevel - ADF_ROLE_NAME: !Ref CrossAccountAccessRole - ROOT_ACCOUNT_ID: !Ref RootAccountId - S3_BUCKET_NAME: !Ref ADFPipelineBucket + MANAGEMENT_ACCOUNT_ID: !Ref ManagementAccountId + S3_BUCKET_NAME: !Ref PipelineManagementBucket ADF_PIPELINE_PREFIX: !Ref PipelinePrefix - FunctionName: ADFPipelineIdentifyOutOfDatePipelinesFunction + FunctionName: adf-pipeline-management-identify-out-of-date-pipelines Role: !GetAtt IdentifyOutOfDatePipelinesLambdaRole.Arn Metadata: BuildMethod: python3.12 - ADFDefinitionBucket: + PipelineDefinitionBucket: Type: "AWS::S3::Bucket" DeletionPolicy: Retain UpdateReplacePolicy: Retain @@ -1100,9 +1176,9 @@ Resources: Properties: Name: "/adf/pipeline_definition_bucket" Type: "String" - Value: !Ref ADFDefinitionBucket + Value: !Ref PipelineDefinitionBucket - ADFPipelineBucket: + PipelineManagementBucket: Type: "AWS::S3::Bucket" DeletionPolicy: Retain UpdateReplacePolicy: Retain @@ -1141,10 +1217,10 @@ Resources: Outputs: Bucket: - Value: !Ref ADFPipelineBucket + Value: !Ref PipelineManagementBucket DefinitionBucket: - 
Value: !Ref ADFDefinitionBucket + Value: !Ref PipelineDefinitionBucket CreateOrUpdateRuleLambdaRoleArn: Value: !GetAtt CreateOrUpdateRuleLambdaRole.Arn diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/regional.yml b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/regional.yml index e6542a62f..f7d8eaf13 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/regional.yml +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/regional.yml @@ -23,7 +23,6 @@ Resources: DeletionPolicy: Retain UpdateReplacePolicy: Retain Properties: - AccessControl: BucketOwnerFullControl OwnershipControls: Rules: - ObjectOwnership: BucketOwnerEnforced @@ -46,10 +45,7 @@ Resources: PolicyDocument: Statement: - Action: - - s3:Get* - - s3:List* - - s3:PutObject* - - s3:PutReplicationConfiguration + - s3:GetObject Effect: Allow Condition: StringEquals: @@ -59,6 +55,62 @@ Resources: - !Sub arn:${AWS::Partition}:s3:::${DeploymentFrameworkRegionalS3Bucket}/* Principal: AWS: "*" + - Action: + - s3:GetObject + - s3:ListBucket + - s3:PutObject + Effect: Allow + Resource: + - !Sub arn:${AWS::Partition}:s3:::${DeploymentFrameworkRegionalS3Bucket} + - !Sub arn:${AWS::Partition}:s3:::${DeploymentFrameworkRegionalS3Bucket}/* + Principal: + AWS: !Sub arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-role + - Sid: "DenyUnencryptedObjects" + Action: + - "s3:PutObject" + Effect: Deny + Condition: + StringNotEquals: + "s3:x-amz-server-side-encryption": "aws:kms" + Resource: + - !Sub arn:${AWS::Partition}:s3:::${DeploymentFrameworkRegionalS3Bucket}/* + Principal: + AWS: "*" + - Sid: "DenyDifferentKMSKey" + Action: + - "s3:PutObject" + Effect: Deny + Condition: + ArnNotEqualsIfExists: + "s3:x-amz-server-side-encryption-aws-kms-key-id": !GetAtt DeploymentFrameworkRegionalKMSKey.Arn + Resource: + - !Sub arn:${AWS::Partition}:s3:::${DeploymentFrameworkRegionalS3Bucket}/* + Principal: + AWS: "*" + - Sid: "DenyInsecureConnections" + Action: + - "s3:*" + Effect: Deny + Condition: + Bool: + aws:SecureTransport: "false" + Resource: + - !Sub arn:${AWS::Partition}:s3:::${DeploymentFrameworkRegionalS3Bucket} + - !Sub arn:${AWS::Partition}:s3:::${DeploymentFrameworkRegionalS3Bucket}/* + Principal: + AWS: "*" + - Sid: "DenyInsecureTLS" + Action: + - "s3:*" + Effect: Deny + Condition: + NumericLessThan: + "s3:TlsVersion": "1.2" + Resource: + - !Sub arn:${AWS::Partition}:s3:::${DeploymentFrameworkRegionalS3Bucket} + - !Sub arn:${AWS::Partition}:s3:::${DeploymentFrameworkRegionalS3Bucket}/* + Principal: + AWS: "*" DeploymentFrameworkRegionalKMSKey: Type: AWS::KMS::Key @@ -101,9 +153,6 @@ Resources: Action: - kms:Decrypt - kms:DescribeKey - - kms:Encrypt - - kms:GenerateDataKey* - - kms:ReEncrypt* Resource: "*" Condition: StringEquals: @@ -126,13 +175,13 @@ Resources: Outputs: DeploymentFrameworkRegionalS3Bucket: - Description: The S3 Bucket used for cross region codepipeline deployments + Description: The S3 Bucket used for cross-region CodePipeline deployments Value: !Ref DeploymentFrameworkRegionalS3Bucket Export: Name: !Sub "S3Bucket-${AWS::Region}" DeploymentFrameworkRegionalKMSKey: - Description: The KMSKey used for cross region codepipeline deployments + Description: The KMS Key used for cross-region CodePipeline deployments Value: !GetAtt DeploymentFrameworkRegionalKMSKey.Arn Export: Name: !Sub "KMSArn-${AWS::Region}" diff --git 
a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/example-global-iam.yml b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/example-global-iam.yml index f252b240f..9e7f624fe 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/example-global-iam.yml +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/example-global-iam.yml @@ -48,14 +48,9 @@ Resources: - Effect: Allow Sid: "CloudFormation" Action: - # These below actions are examples, change these to your requirements.. - - "apigateway:*" - - "cloudformation:*" # You will need CloudFormation actions in order to work with CloudFormation - - "logs:*" - - "codedeploy:*" - - "autoscaling:*" - - "cloudwatch:*" - - "elasticloadbalancing:*" + # These are example actions, please update these to the least privilege policy required: + - "cloudwatch:PutMetricAlarm" + - "logs:CreateLogGroup" Resource: - "*" Roles: @@ -71,6 +66,7 @@ Resources: # # Uncomment this line if you want to enable the terraform extensions # Type: AWS::IAM::Role # Properties: +# Path: / # RoleName: "adf-terraform-role" # AssumeRolePolicyDocument: # Version: "2012-10-17" @@ -86,7 +82,6 @@ Resources: # AWS: !Sub arn:${AWS::Partition}:iam::${DeploymentAccountId}:root # Action: # - sts:AssumeRole -# Path: / # # ADFTerraformPolicy: # Type: AWS::IAM::Policy @@ -118,6 +113,7 @@ Resources: # # Am example custom role that you would need to create in order to deploy custom resources in other AWS Accounts within the organization. # Type: AWS::IAM::Role # Properties: +# Path: / # RoleName: "adf-custom-deploy-role" # AssumeRolePolicyDocument: # Version: "2012-10-17" @@ -141,7 +137,7 @@ Resources: # MyExampleCustomRolePolicy: # Type: AWS::IAM::Policy # Properties: -# PolicyName: "adf-custom-deploy-role-policy" +# PolicyName: "adf-pipeline-custom-deploy-policy" # PolicyDocument: # Version: "2012-10-17" # Statement: diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/global.yml b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/global.yml index d94667439..12a9bcee3 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/global.yml +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/global.yml @@ -22,6 +22,16 @@ Parameters: Description: Deployment Bucket Name Default: /adf/bucket_name + ManagementAccountId: + Type: "AWS::SSM::Parameter::Value" + Description: Management Account ID + Default: /adf/management_account_id + + BootstrapTemplatesBucketName: + Type: "AWS::SSM::Parameter::Value" + Description: Bootstrap Templates Bucket Name + Default: /adf/bootstrap_templates_bucket + Resources: CodeCommitRole: # This role is used to connect the Pipeline in the deployment account to CodeCommit in @@ -29,6 +39,7 @@ Resources: # OU you can target this more specifically and remove it from the global.yml Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-codecommit-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -41,13 +52,6 @@ Resources: AWS: !Sub arn:${AWS::Partition}:iam::${DeploymentAccountId}:root Action: - sts:AssumeRole - - Effect: Allow - Principal: - Service: - - events.amazonaws.com - Action: - - sts:AssumeRole - Path: / CodeCommitPolicy: Type: AWS::IAM::Policy @@ -69,22 +73,16 @@ Resources: Resource: "*" - Effect: Allow Action: - - "s3:Get*" - - "s3:List*" - - "s3:Put*" + - "s3:PutObject" Resource: - - !Sub arn:${AWS::Partition}:s3:::${DeploymentAccountBucketName} - !Sub 
arn:${AWS::Partition}:s3:::${DeploymentAccountBucketName}/* - Effect: Allow Action: - "kms:Decrypt" - - "kms:Describe*" - - "kms:DescribeKey" - "kms:Encrypt" - - "kms:GenerateDataKey*" - - "kms:Get*" - - "kms:List*" - - "kms:ReEncrypt*" + - "kms:GenerateDataKey" + - "kms:ReEncryptFrom" + - "kms:ReEncryptTo" Resource: !Ref KMSArn Roles: - !Ref CodeCommitRole @@ -99,17 +97,42 @@ Resources: - Effect: Allow Sid: "CloudFormation" Action: - - cloudformation:* - - codedeploy:* - - iam:PassRole + - cloudformation:ValidateTemplate + - cloudformation:CreateStack + - cloudformation:DeleteStack + - cloudformation:DescribeStackEvents + - cloudformation:DescribeStacks + - cloudformation:UpdateStack + - cloudformation:CreateChangeSet + - cloudformation:DeleteChangeSet + - cloudformation:DescribeChangeSet + - cloudformation:ExecuteChangeSet + - cloudformation:SetStackPolicy + - cloudformation:ValidateTemplate + - codedeploy:CreateDeployment + - codedeploy:GetApplicationRevision + - codedeploy:GetDeployment + - codedeploy:GetDeploymentConfig + - codedeploy:RegisterApplicationRevision - servicecatalog:CreateProvisioningArtifact - servicecatalog:DeleteProvisioningArtifact - servicecatalog:DescribeProvisioningArtifact - servicecatalog:ListProvisioningArtifacts - servicecatalog:UpdateProduct Resource: "*" + - Effect: Allow + Sid: "PassRole" + Action: + - "iam:PassRole" + Resource: + - !GetAtt CloudFormationDeploymentRole.Arn + Condition: + StringEqualsIfExists: + "iam:PassedToService": + - "cloudformation.amazonaws.com" Roles: - !Ref CloudFormationRole + CloudFormationKMSPolicy: Type: AWS::IAM::Policy Properties: @@ -124,7 +147,8 @@ Resources: - kms:DescribeKey - kms:Encrypt - kms:GenerateDataKey* - - kms:ReEncrypt* + - kms:ReEncryptFrom + - kms:ReEncryptTo Resource: !Ref KMSArn Roles: - !Ref CloudFormationRole @@ -139,9 +163,9 @@ Resources: - Effect: Allow Sid: "S3" Action: - - s3:Get* - - s3:List* - - s3:Put* + - s3:GetObject* + - s3:ListBucket + - s3:PutObject* Resource: - !Sub arn:${AWS::Partition}:s3:::${DeploymentAccountBucketName} - !Sub arn:${AWS::Partition}:s3:::${DeploymentAccountBucketName}/* @@ -151,6 +175,7 @@ Resources: CloudFormationRole: Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-cloudformation-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -165,7 +190,6 @@ Resources: - !Sub arn:${AWS::Partition}:iam::${DeploymentAccountId}:role/adf-cloudformation-role Action: - sts:AssumeRole - Path: / CloudFormationDeploymentPolicy: # This is the policy that will be used to deploy CloudFormation resources from @@ -186,7 +210,8 @@ Resources: - "kms:DescribeKey" - "kms:Encrypt" - "kms:GenerateDataKey*" - - "kms:ReEncrypt*" + - "kms:ReEncryptFrom" + - "kms:ReEncryptTo" Resource: !Ref "KMSArn" Roles: - !Ref CloudFormationDeploymentRole @@ -194,6 +219,7 @@ Resources: CloudFormationDeploymentRole: Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-cloudformation-deployment-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -205,23 +231,17 @@ Resources: - cloudformation.amazonaws.com Action: - sts:AssumeRole - - Effect: Allow - Sid: "AssumeRole" - Principal: - AWS: - - !Sub arn:${AWS::Partition}:iam::${DeploymentAccountId}:root - Action: - - sts:AssumeRole Condition: - ArnEquals: - "aws:SourceArn": !Sub "arn:${AWS::Partition}:codepipeline:${AWS::Region}:${DeploymentAccountId}:*" - "aws:PrincipalArn": !Sub "arn:${AWS::Partition}:iam::${DeploymentAccountId}:role/adf-codepipeline-role" - Path: / + StringEqualsIfExists: + "aws:SourceAccount": + - !Ref AWS::AccountId + - !Ref 
DeploymentAccountId UpdateCrossAccountAccessByDeploymentAccountRole: Type: AWS::IAM::Role Properties: - RoleName: "adf-update-cross-account-access-role" + Path: /adf/bootstrap/ + RoleName: "adf-update-cross-account-access" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -229,14 +249,13 @@ Resources: Sid: "AssumeRoleByEnableCrossAccountLambda" Condition: ArnEquals: - "aws:PrincipalArn": !Sub arn:${AWS::Partition}:iam::${DeploymentAccountId}:role/adf-enable-cross-account-access-lambda-role + "aws:PrincipalArn": !Sub arn:${AWS::Partition}:iam::${DeploymentAccountId}:role/adf/bootstrap/adf-bootstrap-pipeline-enable-cross-account-access-role Principal: AWS: !Sub arn:${AWS::Partition}:iam::${DeploymentAccountId}:root Action: - sts:AssumeRole - Path: / Policies: - - PolicyName: "adf-allow-updating-cross-account-roles" + - PolicyName: "adf-pipeline-allow-updating-cross-accounts" PolicyDocument: Version: "2012-10-17" Statement: @@ -255,6 +274,7 @@ Resources: # than 'aws-deployment-framework-pipelines' Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-automation-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -264,14 +284,13 @@ Resources: Condition: ArnEquals: "aws:PrincipalArn": - - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccountId}:role/adf-automation/adf-pipeline-create-update-rule" - - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccountId}:role/adf-automation/adf-pipeline-create-repository" + - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccountId}:role/adf/pipeline-management/adf-pipeline-management-create-update-rule" + - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccountId}:role/adf/pipeline-management/adf-pipeline-management-create-repository" Principal: AWS: - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccountId}:root" Action: - sts:AssumeRole - Path: / AdfAutomationRolePolicy: Type: AWS::IAM::Policy @@ -339,18 +358,51 @@ Resources: - !Sub "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/adf/deployment_account_id" - !Sub "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/adf/kms_arn" - Effect: Allow - Sid: "IAM" + Sid: "IAMCleanupV3LegacyRoles" + Action: + - "iam:DeleteRole" + - "iam:DeleteRolePolicy" + Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-event-rule-${AWS::AccountId}-${DeploymentAccountId}-EventRole-*" + - Effect: Allow + Sid: "IAMFullPathOnly" Action: - - "iam:AttachRolePolicy" - "iam:CreateRole" - "iam:DeleteRole" + - "iam:TagRole" + - "iam:UntagRole" + Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/cross-account-events/adf-cc-event-from-${AWS::AccountId}-to-${DeploymentAccountId}" + - Effect: Allow + Sid: "IAMFullPathAndNameOnly" + Action: - "iam:DeleteRolePolicy" - "iam:GetRole" - "iam:GetRolePolicy" - - "iam:PassRole" - "iam:PutRolePolicy" Resource: - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-event-rule-${AWS::AccountId}-${DeploymentAccountId}-EventRole-*" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/cross-account-events/adf-cc-event-from-${AWS::AccountId}-to-${DeploymentAccountId}" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cc-event-from-${AWS::AccountId}-to-${DeploymentAccountId}" + - Effect: Allow + Sid: "IAMPassRole" + Action: + - "iam:PassRole" + Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/cross-account-events/adf-cc-event-from-${AWS::AccountId}-to-${DeploymentAccountId}" + Condition: + StringEquals: + 'iam:PassedToService': + - 
"events.amazonaws.com" + ArnEquals: + 'iam:AssociatedResourceArn': + - !Sub "arn:${AWS::Partition}:events:${AWS::Region}:${AWS::AccountId}:rule/adf-cc-event-from-${AWS::AccountId}-to-${DeploymentAccountId}" + - Effect: Allow + Sid: "KMS" + Action: + # These are required for cross account deployments via CodePipeline. + - "kms:Decrypt" + - "kms:DescribeKey" + Resource: !Ref KMSArn Roles: - !Ref AdfAutomationRole @@ -363,6 +415,7 @@ Resources: # in order to facilitate this scenario. Type: AWS::IAM::Role Properties: + Path: / RoleName: "adf-readonly-automation-role" AssumeRolePolicyDocument: Version: "2012-10-17" @@ -377,7 +430,6 @@ Resources: - !Sub arn:${AWS::Partition}:iam::${DeploymentAccountId}:root Action: - sts:AssumeRole - Path: / ReadOnlyAutomationRolePolicy: Type: AWS::IAM::Policy @@ -397,3 +449,149 @@ Resources: - "*" Roles: - !Ref ReadOnlyAutomationRole + + BootstrapTestRole: + # This role is used to test whether the AWS Account is bootstrapped or not. + # Do not attach any policies to this role. + Type: AWS::IAM::Role + Properties: + Path: /adf/bootstrap/ + RoleName: "adf-bootstrap-test-role" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Condition: + ArnEquals: + "aws:PrincipalArn": !Sub "arn:${AWS::Partition}:iam::${ManagementAccountId}:role/adf/account-bootstrapping/jump-manager/adf-bootstrapping-jump-manager-role" + Principal: + AWS: !Sub "arn:${AWS::Partition}:iam::${ManagementAccountId}:root" + Action: + - sts:AssumeRole + Policies: + - PolicyName: "lock-down-for-assumerole-test-only" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Deny + Action: "*" + Resource: "*" + + BootstrapUpdateDeploymentRole: + # This role is used to test whether the AWS Account is bootstrapped or not. + # Do not attach any policies to this role. 
+ Type: AWS::IAM::Role + Properties: + Path: /adf/bootstrap/ + RoleName: "adf-bootstrap-update-deployment-role" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Condition: + ArnEquals: + "aws:PrincipalArn": !Sub "arn:${AWS::Partition}:iam::${ManagementAccountId}:role/adf/account-bootstrapping/jump/adf-bootstrapping-cross-account-jump-role" + Principal: + AWS: !Sub "arn:${AWS::Partition}:iam::${ManagementAccountId}:root" + Action: + - sts:AssumeRole + Policies: + - PolicyName: "allow-updates-to-bootstrap-stacks" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Action: + - "cloudformation:CancelUpdateStack" + - "cloudformation:ContinueUpdateRollback" + - "cloudformation:DeleteChangeSet" + - "cloudformation:DeleteStack" + - "cloudformation:DescribeChangeSet" + - "cloudformation:DescribeStacks" + - "cloudformation:SetStackPolicy" + - "cloudformation:SignalResource" + - "cloudformation:UpdateTerminationProtection" + Resource: + # Across all regions, as it needs to be able to find and + # cleanup global stacks in non-global regions: + - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/adf-global-base-*/*" + - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/adf-regional-base-*/*" + - Effect: "Allow" + Action: + - "cloudformation:CreateChangeSet" + - "cloudformation:CreateStack" + - "cloudformation:CreateUploadBucket" + - "cloudformation:ExecuteChangeSet" + - "cloudformation:TagResource" + - "cloudformation:UntagResource" + - "cloudformation:UpdateStack" + Resource: + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/adf-global-base-bootstrap/*" + - !Sub "arn:${AWS::Partition}:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/adf-global-base-iam/*" + - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/adf-regional-base-bootstrap/*" + - Effect: "Allow" + Action: + - "cloudformation:ListStacks" + - "cloudformation:ValidateTemplate" + - "ec2:DeleteInternetGateway" + - "ec2:DeleteNetworkInterface" + - "ec2:DeleteRouteTable" + - "ec2:DeleteSubnet" + - "ec2:DeleteVpc" + - "ec2:DescribeInternetGateways" + - "ec2:DescribeNetworkInterfaces" + - "ec2:DescribeRegions" + - "ec2:DescribeRouteTables" + - "ec2:DescribeSubnets" + - "ec2:DescribeVpcs" + - "iam:CreateAccountAlias" + - "iam:DeleteAccountAlias" + - "iam:ListAccountAliases" + Resource: + - "*" + - Effect: "Allow" + Action: + - "ssm:GetParameters" + - "ssm:GetParameter" + - "ssm:PutParameter" + Resource: + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/*" + - Effect: "Allow" + Action: + - "iam:CreateRole" + - "iam:DeleteRole" + - "iam:TagRole" + - "iam:UntagRole" + Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-deployment-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codecommit-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-readonly-automation-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-terraform-role" + - Effect: "Allow" + Action: + - "iam:DeleteRolePolicy" + - "iam:GetRole" + - "iam:GetRolePolicy" + - "iam:PutRolePolicy" + - "iam:UpdateAssumeRolePolicy" + Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-deployment-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-role" + - !Sub 
"arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codecommit-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-readonly-automation-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-terraform-role" + - Sid: "IAMGetOnly" + Effect: "Allow" + Action: + - "iam:GetRole" + - "iam:GetRolePolicy" + Resource: + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-automation-role" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-bootstrap-*" + - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/bootstrap/*" + - Effect: "Allow" + Action: + - "s3:GetObject" + Resource: + - !Sub "arn:${AWS::Partition}:s3:::${BootstrapTemplatesBucketName}/adf-bootstrap/*" diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/config.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/config.py index 9af699045..0b9b99d44 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/config.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/config.py @@ -43,6 +43,18 @@ def __init__(self, parameter_store=None, config_path=None): self.extensions = None self._load_config_file() + def sorted_regions(self): + target_regions_except_deploy = sorted(list( + set(self.target_regions) + - set([self.deployment_account_region]) + )) + return [ + # Make sure we start with the main deployment region + self.deployment_account_region, + # Followed by all other target regions configured + *target_regions_except_deploy, + ] + def store_config(self): self._store_config() self._store_cross_region_config() @@ -98,11 +110,11 @@ def _load_config_file(self): if os.path.exists(org_config_path): with open(org_config_path, encoding="utf-8") as org_config_file: LOGGER.info("Using organization specific ADF config: %s", org_config_path) - self.config_contents = yaml.load(org_config_file, Loader=yaml.FullLoader) + self.config_contents = yaml.safe_load(org_config_file) else: LOGGER.info("Using default ADF config: %s", self.config_path) with open(self.config_path, encoding="utf-8") as config: - self.config_contents = yaml.load(config, Loader=yaml.FullLoader) + self.config_contents = yaml.safe_load(config) self._parse_config() def _parse_config(self): @@ -184,6 +196,14 @@ def _store_config(self): ): self.parameters_client.put_parameter(key, str(value)) + for move in self.config.get('moves', []): + move_param_name = move.get('name', '').replace('-', '_') + if move_param_name and move.get('action'): + self.parameters_client.put_parameter( + f"moves/{move_param_name}/action", + str(move.get('action')), + ) + for extension, attributes in self.extensions.items(): for attribute in attributes: self.parameters_client.put_parameter( diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/global.yml b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/global.yml deleted file mode 100644 index 210b3b935..000000000 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/global.yml +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright Amazon.com Inc. or its affiliates. 
-# SPDX-License-Identifier: Apache-2.0 - -AWSTemplateFormatVersion: "2010-09-09" -Description: >- - ADF CloudFormation Template - Role to be assumed by CodePipeline in Deployment Account - -Parameters: - DeploymentAccountId: - Type: "AWS::SSM::Parameter::Value" - Description: Deployment Account ID - Default: /adf/deployment_account_id - - CrossAccountAccessRole: - Type: "AWS::SSM::Parameter::Value" - Description: The role used to allow cross account access - Default: /adf/cross_account_access_role - -Resources: - OrganizationsReadOnlyRole: - Type: AWS::IAM::Role - Properties: - RoleName: !Sub "${CrossAccountAccessRole}-readonly" - AssumeRolePolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: Allow - Principal: - AWS: - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codebuild-role" - - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccountId}:role/adf-codebuild-role" - - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccountId}:role/adf-automation/adf-pipeline-provisioner-generate-inputs" - Action: - - sts:AssumeRole - Path: / - - OrganizationsReadOnlyPolicy: - Type: AWS::IAM::Policy - Properties: - PolicyName: "adf-organizations-readonly-policy" - PolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: Allow - Action: - - organizations:ListAccounts - - organizations:ListAccountsForParent - - organizations:DescribeAccount - - organizations:ListOrganizationalUnitsForParent - - organizations:ListRoots - - organizations:ListChildren - - tag:GetResources - Resource: "*" - Roles: - - !Ref OrganizationsReadOnlyRole - - OrganizationsRole: - # Only required if you intend to bootstrap the management account. - Type: AWS::IAM::Role - Properties: - RoleName: !Ref CrossAccountAccessRole - AssumeRolePolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: Allow - Principal: - AWS: - # To update the management account: - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:root" - Action: - - sts:AssumeRole - Path: / - - OrganizationsPolicy: - # Only required if you intend to bootstrap the management account. 
- Type: AWS::IAM::Policy - Properties: - PolicyName: "adf-management-account-bootstrap-policy" - PolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: Allow - Action: - - cloudformation:CancelUpdateStack - - cloudformation:ContinueUpdateRollback - - cloudformation:CreateChangeSet - - cloudformation:CreateStack - - cloudformation:CreateUploadBucket - - cloudformation:DeleteChangeSet - - cloudformation:DeleteStack - - cloudformation:DescribeChangeSet - - cloudformation:DescribeStacks - - cloudformation:ExecuteChangeSet - - cloudformation:ListStacks - - cloudformation:SetStackPolicy - - cloudformation:SignalResource - - cloudformation:UpdateStack - - cloudformation:UpdateTerminationProtection - Resource: - - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/*" - - Effect: Allow - Action: - - cloudformation:ValidateTemplate - - ec2:DeleteInternetGateway - - ec2:DeleteNetworkInterface - - ec2:DeleteRouteTable - - ec2:DeleteSubnet - - ec2:DeleteVpc - - ec2:DescribeInternetGateways - - ec2:DescribeNetworkInterfaces - - ec2:DescribeRegions - - ec2:DescribeRouteTables - - ec2:DescribeSubnets - - ec2:DescribeVpcs - - iam:CreateAccountAlias - - iam:DeleteAccountAlias - - iam:ListAccountAliases - Resource: - - "*" - - Effect: Allow - Action: - - ssm:PutParameter - - ssm:GetParameters - - ssm:GetParameter - Resource: - - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/*" - - Effect: Allow - Action: - - iam:CreatePolicy - - iam:CreateRole - - iam:DeleteRole - - iam:DeleteRolePolicy - - iam:GetRole - - iam:GetRolePolicy - - iam:PutRolePolicy - - iam:TagRole - - iam:UntagRole - - iam:UpdateAssumeRolePolicy - Resource: - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-deployment-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codecommit-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-automation-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-readonly-automation-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-update-cross-account-access-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-terraform-role" - - Effect: "Allow" - Action: - - iam:DeleteRole - - iam:DeleteRolePolicy - - iam:UntagRole - Resource: - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${CrossAccountAccessRole}" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${CrossAccountAccessRole}-readonly" - Roles: - - !Ref OrganizationsRole diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/main.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/main.py index 06e78b928..a22c9375f 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/main.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/main.py @@ -15,14 +15,13 @@ import boto3 -from botocore.exceptions import ClientError from logger import configure_logger from cache import Cache from cloudformation import CloudFormation from parameter_store import ParameterStore from organizations import Organizations from stepfunctions import StepFunctions -from errors import GenericAccountConfigureError, ParameterNotFoundError +from errors import GenericAccountConfigureError, ParameterNotFoundError, Error from sts import STS from s3 import S3 from partition import get_partition @@ -33,10 +32,10 @@ S3_BUCKET_NAME = 
os.environ["S3_BUCKET"] REGION_DEFAULT = os.environ["AWS_REGION"] PARTITION = get_partition(REGION_DEFAULT) -ACCOUNT_ID = os.environ["MANAGEMENT_ACCOUNT_ID"] +MANAGEMENT_ACCOUNT_ID = os.environ["MANAGEMENT_ACCOUNT_ID"] ADF_VERSION = os.environ["ADF_VERSION"] ADF_LOG_LEVEL = os.environ["ADF_LOG_LEVEL"] -DEPLOYMENT_ACCOUNT_S3_BUCKET_NAME = os.environ["DEPLOYMENT_ACCOUNT_BUCKET"] +SHARED_MODULES_BUCKET_NAME = os.environ["SHARED_MODULES_BUCKET"] CODEPIPELINE_EXECUTION_ID = os.environ.get("CODEPIPELINE_EXECUTION_ID") CODEBUILD_START_TIME_UNIXTS = floor( int( @@ -65,14 +64,13 @@ def ensure_generic_account_can_be_setup(sts, config, account_id): """ If the target account has been configured returns the role to assume """ - try: - return sts.assume_cross_account_role( - f'arn:{PARTITION}:iam::{account_id}:role/' - f'{config.cross_account_access_role}', - 'base_update' - ) - except ClientError as error: - raise GenericAccountConfigureError from error + return sts.assume_bootstrap_deployment_role( + PARTITION, + MANAGEMENT_ACCOUNT_ID, + account_id, + config.cross_account_access_role, + 'base_update', + ) def update_deployment_account_output_parameters( @@ -117,13 +115,14 @@ def prepare_deployment_account(sts, deployment_account_id, config): and returns the role that can be assumed by the management account to access the deployment account """ - deployment_account_role = sts.assume_cross_account_role( - f'arn:{PARTITION}:iam::{deployment_account_id}:role/' - f'{config.cross_account_access_role}', - 'management' + deployment_account_role = sts.assume_bootstrap_deployment_role( + PARTITION, + MANAGEMENT_ACCOUNT_ID, + deployment_account_id, + config.cross_account_access_role, + 'management', ) - for region in sorted(list( - set([config.deployment_account_region] + config.target_regions))): + for region in config.sorted_regions(): deployment_account_parameter_store = ParameterStore( region, deployment_account_role @@ -141,8 +140,12 @@ def prepare_deployment_account(sts, deployment_account_id, config): config.cross_account_access_role, ) deployment_account_parameter_store.put_parameter( - 'deployment_account_bucket', - DEPLOYMENT_ACCOUNT_S3_BUCKET_NAME, + 'shared_modules_bucket', + SHARED_MODULES_BUCKET_NAME, + ) + deployment_account_parameter_store.put_parameter( + 'bootstrap_templates_bucket', + S3_BUCKET_NAME, ) deployment_account_parameter_store.put_parameter( 'deployment_account_id', @@ -150,7 +153,7 @@ def prepare_deployment_account(sts, deployment_account_id, config): ) deployment_account_parameter_store.put_parameter( 'management_account_id', - ACCOUNT_ID, + MANAGEMENT_ACCOUNT_ID, ) deployment_account_parameter_store.put_parameter( 'organization_id', @@ -270,13 +273,9 @@ def worker_thread( ) # Regional base stacks can be updated after global - all_regions = list(set( - [config.deployment_account_region] - + config.target_regions - )) - for region in all_regions: - # Ensuring the kms_arn and bucket_name on the target account is - # up-to-date + for region in config.sorted_regions(): + # Ensuring the kms_arn, bucket_name, and other important properties + # are available on the target account. 
parameter_store = ParameterStore(region, role) parameter_store.put_parameter( 'deployment_account_id', @@ -290,6 +289,15 @@ 'bucket_name', updated_kms_bucket_dict[region]['s3_regional_bucket'], ) + if region == config.deployment_account_region: + parameter_store.put_parameter( + 'management_account_id', + MANAGEMENT_ACCOUNT_ID, + ) + parameter_store.put_parameter( + 'bootstrap_templates_bucket', + S3_BUCKET_NAME, + ) # Ensuring the stage parameter on the target account is up-to-date parameter_store.put_parameter( @@ -326,9 +334,11 @@ ) raise LookupError from error - except GenericAccountConfigureError as generic_account_error: - LOGGER.info(generic_account_error) - return + except Error as error: + LOGGER.exception("%s - worker thread failed: %s", account_id, error) + raise + + LOGGER.debug("%s - worker thread finished successfully", account_id) def await_sfn_executions(sfn_client): @@ -360,11 +370,20 @@ "timed out, or aborted execution. Please look into this problem " "before retrying the bootstrap pipeline. You can navigate to: " "https://%s.console.aws.amazon.com/states/home" - "?region=%s#/statemachines/view/%s", + "?region=%s#/statemachines/view/%s ", REGION_DEFAULT, REGION_DEFAULT, ACCOUNT_MANAGEMENT_STATE_MACHINE_ARN, ) + LOGGER.warning( + "Please note: If you resolved the error, but still run into this " + "warning, make sure you release a change on the pipeline (by " + "clicking the orange \"Release Change\" button). " + "The pipeline checks for failed executions of the state machine " + "that were triggered by this pipeline execution. Only a new " + "pipeline execution updates the identifier that it uses to track " + "the state machine's progress.", + ) sys.exit(1) if _sfn_execution_exists_with( sfn_client, @@ -482,13 +501,7 @@ def main(): # pylint: disable=R0915 kms_and_bucket_dict = {} # First Setup/Update the Deployment Account in all regions (KMS Key and # S3 Bucket + Parameter Store values) - regions_to_enable = list( - set( - [config.deployment_account_region] - + config.target_regions - ) - ) - for region in regions_to_enable: + for region in config.sorted_regions(): cloudformation = CloudFormation( region=region, deployment_account_region=config.deployment_account_region, @@ -511,19 +524,6 @@ def main(): # pylint: disable=R0915 if region == config.deployment_account_region: cloudformation.create_iam_stack() - # Updating the stack on the management account in deployment region - cloudformation = CloudFormation( - region=config.deployment_account_region, - deployment_account_region=config.deployment_account_region, - role=boto3, - wait=True, - stack_name=None, - s3=s3, - s3_key_path='adf-build', - account_id=ACCOUNT_ID - ) - cloudformation.delete_deprecated_base_stacks() - cloudformation.create_stack() threads = [] account_ids = [ account_id["Id"] @@ -532,10 +532,10 @@ def main(): # pylint: disable=R0915 include_root=False, ) ] - non_deployment_account_ids = [ + non_deployment_account_ids = sorted([ account for account in account_ids if account != deployment_account_id - ] + ]) for account_id in non_deployment_account_ids: thread = PropagatingThread(target=worker_thread, args=( account_id, diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/adf_codepipeline.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/adf_codepipeline.py index aa2c53cbc..61a77553e 100644 ---
a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/adf_codepipeline.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/adf_codepipeline.py @@ -68,7 +68,6 @@ def __init__(self, **kwargs): .get('properties', {}) .get("account_id") ) - self.role_arn = self._generate_role_arn() self.notification_endpoint = self.map_params.get("topic_arn") self.default_scm_branch = self.map_params.get( "default_scm_branch", @@ -81,26 +80,45 @@ self.configuration = self._generate_configuration() self.config = self.generate() - def _generate_role_arn(self): - if self.category not in ['Build', 'Deploy']: - return None + def _get_role_account_id(self): + if self.provider in ['CodeBuild', 'CodeStarSourceConnection']: + return ADF_DEPLOYMENT_ACCOUNT_ID + + if self.category == 'Source': + return ( + self.map_params["default_providers"]["source"] + .get('properties', {}) + .get( + 'account_id', + self.default_scm_codecommit_account_id, + ) + ) + + if self.target and self.target.get('id'): + return self.target['id'] + + return None + + def _generate_role_arn(self, default_role_name=None): default_provider = ( self.map_params['default_providers'][self.category.lower()] ) - specific_role = ( - self.target + default_provider_role_name = ( + default_provider .get('properties', {}) - .get('role', default_provider.get('properties', {}).get('role')) + .get('role', default_role_name) ) - if specific_role: - account_id = ( - self.account_id - if self.provider == 'CodeBuild' - else self.target['id'] - ) + specific_role_name = ( + self.target + .get('properties', {}) + .get('role', default_provider_role_name) + ) if self.target else default_provider_role_name + + account_id = self._get_role_account_id() + if specific_role_name and account_id: return ( f'arn:{ADF_DEPLOYMENT_PARTITION}:iam::{account_id}:' - f'role/{specific_role}' + f'role/{specific_role_name}' ) return None @@ -328,9 +346,8 @@ def _generate_configuration(self): f"{input_artifact}::{path_prefix}params/{param_filename}" ), "Capabilities": "CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND", - "RoleArn": self.role_arn if self.role_arn else ( - f"arn:{ADF_DEPLOYMENT_PARTITION}:iam::{self.target['id']}:" - f"role/adf-cloudformation-deployment-role" + "RoleArn": self._generate_role_arn( + "adf-cloudformation-deployment-role", ) } contains_transform = ( @@ -494,62 +511,25 @@ def _generate_configuration(self): raise ValueError(f"{self.provider} is not a valid provider") def _generate_codepipeline_access_role(self): # pylint: disable=R0911 - account_id = ( - self.map_params['default_providers']['source'] - .get('properties', {}) - .get('account_id', '') - ) - - if self.provider == "CodeStarSourceConnection": - return None - if self.provider == "CodeBuild": + requires_no_access_role = [ + "CodeBuild", + "CodeStarSourceConnection", + "Lambda", + "Manual", + ] + if self.provider in requires_no_access_role: return None if self.provider == "CodeCommit": - return ( - f"arn:{ADF_DEPLOYMENT_PARTITION}:iam::{account_id}:" - "role/adf-codecommit-role" - ) + return self._generate_role_arn('adf-codecommit-role') if self.provider == "S3" and self.category == "Source": - return ( - f"arn:{ADF_DEPLOYMENT_PARTITION}:iam::{account_id}:" - "role/adf-codecommit-role" - ) + return self._generate_role_arn('adf-codecommit-role') if self.provider == "S3" and self.category == "Deploy": # This could be changed to use a new role that is bootstrapped, # ideally we rename
adf-cloudformation-role to a # generic deployment role name - return ( - f"arn:{ADF_DEPLOYMENT_PARTITION}:iam::{self.target['id']}:" - "role/adf-cloudformation-role" - ) - if self.provider == "ServiceCatalog": - # This could be changed to use a new role that is bootstrapped, - # ideally we rename adf-cloudformation-role to a - # generic deployment role name - return ( - f"arn:{ADF_DEPLOYMENT_PARTITION}:iam::{self.target['id']}:" - "role/adf-cloudformation-role" - ) - if self.provider == "CodeDeploy": - # This could be changed to use a new role that is bootstrapped, - # ideally we rename adf-cloudformation-role to a - # generic deployment role name - return ( - f"arn:{ADF_DEPLOYMENT_PARTITION}:iam::{self.target['id']}:" - "role/adf-cloudformation-role" - ) - if self.provider == "Lambda": - # This could be changed to use a new role that is bootstrapped, - # ideally we rename adf-cloudformation-role to a - # generic deployment role name - return None - if self.provider == "CloudFormation": - return ( - f"arn:{ADF_DEPLOYMENT_PARTITION}:iam::{self.target['id']}:" - "role/adf-cloudformation-role" - ) - if self.provider == "Manual": - return None + return self._generate_role_arn('adf-cloudformation-role') + if self.provider in ["ServiceCatalog", "CodeDeploy", "CloudFormation"]: + return self._generate_role_arn('adf-cloudformation-role') raise ValueError(f'Invalid Provider {self.provider}') def generate(self): diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/adf_notifications.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/adf_notifications.py index 405295349..2c6f791db 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/adf_notifications.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/cdk_constructs/adf_notifications.py @@ -51,6 +51,11 @@ def __init__( _iam.ServicePrincipal("events.amazonaws.com"), ], resources=["*"], + conditions={ + "StringEquals": { + "aws:SourceAccount": ADF_DEPLOYMENT_ACCOUNT_ID, + }, + }, ) _topic.add_to_resource_policy(_statement) _endpoint = map_params.get("params", {}).get("notification_endpoint", "") diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/execute_pipeline_stacks.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/execute_pipeline_stacks.py index 0a3d7e5a6..d60fde278 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/execute_pipeline_stacks.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/cdk/execute_pipeline_stacks.py @@ -26,6 +26,7 @@ MANAGEMENT_ACCOUNT_ID = os.environ["MANAGEMENT_ACCOUNT_ID"] ORGANIZATION_ID = os.environ["ORGANIZATION_ID"] S3_BUCKET_NAME = os.environ["S3_BUCKET_NAME"] +KMS_KEY_ARN = os.environ["S3_BUCKET_KMS_KEY_ARN"] ADF_PIPELINE_PREFIX = os.environ["ADF_PIPELINE_PREFIX"] ADF_VERSION = os.environ["ADF_VERSION"] ADF_LOG_LEVEL = os.environ["ADF_LOG_LEVEL"] @@ -68,8 +69,9 @@ def main(): LOGGER.info('ADF Version %s', ADF_VERSION) LOGGER.info("ADF Log Level is %s", ADF_LOG_LEVEL) s3 = S3( - DEPLOYMENT_ACCOUNT_REGION, - S3_BUCKET_NAME + region=DEPLOYMENT_ACCOUNT_REGION, + bucket=S3_BUCKET_NAME, + kms_key_arn=KMS_KEY_ARN, ) threads = [] template_paths = glob.glob("cdk.out/*.template.json") diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/generate_params.py 
b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/generate_params.py index b0ac5e220..878fdfc85 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/generate_params.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/generate_params.py @@ -374,7 +374,7 @@ def _parse( except FileNotFoundError: try: with open(f"{file_path}.yml", encoding='utf-8') as file: - yaml_content = yaml.load(file, Loader=yaml.FullLoader) + yaml_content = yaml.safe_load(file) LOGGER.debug( "Read %s.yml: %s", file_path, diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/package_transform.sh b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/package_transform.sh index 320fdccbb..0fbc2f21c 100755 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/package_transform.sh +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/package_transform.sh @@ -45,7 +45,9 @@ for region in $regions; do echo "Packaging templates for region $region" ssm_bucket_name="/adf/cross_region/s3_regional_bucket/$region" bucket=$(aws ssm get-parameters --names $ssm_bucket_name --with-decryption --output=text --query='Parameters[0].Value') - sam package --s3-bucket $bucket --output-template-file $CODEBUILD_SRC_DIR/template_$region.yml --region $region + ssm_kms_arn="/adf/cross_region/kms_arn/$region" + kms_arn=$(aws ssm get-parameters --names $ssm_kms_arn --with-decryption --output=text --query='Parameters[0].Value') + sam package --s3-bucket $bucket --kms-key-id $kms_arn --output-template-file $CODEBUILD_SRC_DIR/template_$region.yml --region $region else # If package is not needed, just copy the file for each region echo "Copying template for region $region" diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/retrieve_organization_accounts.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/retrieve_organization_accounts.py index 4c9db8780..048d60327 100755 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/retrieve_organization_accounts.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/retrieve_organization_accounts.py @@ -51,7 +51,8 @@ -r , --role-name The name of the role to assume into to get read access to list and describe the member accounts in the - organization [default: OrganizationAccountAccessRole-readonly]. + organization [default: + adf/organizations/adf-organizations-readonly]. 
-s , --session-name The session name to use when assuming into the billing account diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/sts.sh b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/sts.sh index 15431fa9c..25b9887b0 100755 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/sts.sh +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/sts.sh @@ -7,7 +7,7 @@ if [ -z "$AWS_PARTITION" ]; then AWS_PARTITION="aws" fi -# Example usage sts 123456789012 adf-terraform-deployment-role +# Example usage sts 123456789012 adf-pipeline-terraform-deployment export ROLE=arn:$AWS_PARTITION:iam::$1:role/$2 temp_role=$(aws sts assume-role --role-arn $ROLE --role-session-name $2-$ADF_PROJECT_NAME) export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq -r .Credentials.AccessKeyId) diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/terraform/adf_terraform.sh b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/terraform/adf_terraform.sh index d0cc920eb..111731c84 100755 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/terraform/adf_terraform.sh +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/terraform/adf_terraform.sh @@ -12,6 +12,7 @@ echo "Terraform stage: $TF_STAGE" tfinit() { # retrieve regional S3 bucket name from parameter store S3_BUCKET_REGION_NAME=$(aws ssm get-parameter --name "/adf/cross_region/s3_regional_bucket/$AWS_REGION" --region "$AWS_DEFAULT_REGION" | jq .Parameter.Value | sed s/\"//g) + KMS_KEY_ARN=$(aws ssm get-parameter --name "/adf/cross_region/kms_arn/$AWS_REGION" --region "$AWS_DEFAULT_REGION" | jq .Parameter.Value | sed s/\"//g) mkdir -p "${CURRENT}/tmp/${TF_VAR_TARGET_ACCOUNT_ID}-${AWS_REGION}" cd "${CURRENT}/tmp/${TF_VAR_TARGET_ACCOUNT_ID}-${AWS_REGION}" || exit cp -R "${CURRENT}"/tf/. 
"${CURRENT}/tmp/${TF_VAR_TARGET_ACCOUNT_ID}-${AWS_REGION}" @@ -27,11 +28,13 @@ tfinit() { fi terraform init \ -backend-config "bucket=$S3_BUCKET_REGION_NAME" \ + -backend-config "kms_key_id=$KMS_KEY_ARN" \ -backend-config "region=$AWS_REGION" \ -backend-config "key=$ADF_PROJECT_NAME/$ACCOUNT_ID.tfstate" \ -backend-config "dynamodb_table=adf-tflocktable" echo "Bucket: $S3_BUCKET_REGION_NAME" + echo "KMS Key ARN: $KMS_KEY_ARN" echo "Region: $AWS_REGION" echo "Key: $ADF_PROJECT_NAME/$ACCOUNT_ID.tfstate" echo "DynamoDB table: adf-tflocktable" @@ -44,7 +47,10 @@ tfplan() { terraform plan -out "${ADF_PROJECT_NAME}-${TF_VAR_TARGET_ACCOUNT_ID}" 2>&1 | tee -a "${ADF_PROJECT_NAME}-${TF_VAR_TARGET_ACCOUNT_ID}-${TS}.log" set +o pipefail # Save Terraform plan results to the S3 bucket - aws s3 cp "${ADF_PROJECT_NAME}-${TF_VAR_TARGET_ACCOUNT_ID}-${TS}.log" "s3://${S3_BUCKET_REGION_NAME}/${ADF_PROJECT_NAME}/tf-plan/${DATE}/${TF_VAR_TARGET_ACCOUNT_ID}/${ADF_PROJECT_NAME}-${TF_VAR_TARGET_ACCOUNT_ID}-${TS}.log" + aws s3 cp \ + "${ADF_PROJECT_NAME}-${TF_VAR_TARGET_ACCOUNT_ID}-${TS}.log" \ + "s3://${S3_BUCKET_REGION_NAME}/${ADF_PROJECT_NAME}/tf-plan/${DATE}/${TF_VAR_TARGET_ACCOUNT_ID}/${ADF_PROJECT_NAME}-${TF_VAR_TARGET_ACCOUNT_ID}-${TS}.log" \ + --sse-kms-key-id $KMS_KEY_ARN echo "Path to terraform plan s3://$S3_BUCKET_REGION_NAME/$ADF_PROJECT_NAME/tf-plan/$DATE/$TF_VAR_TARGET_ACCOUNT_ID/$ADF_PROJECT_NAME-$TF_VAR_TARGET_ACCOUNT_ID-$TS.log" } tfapply() { diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/terraform/get_accounts.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/terraform/get_accounts.py index 369253aca..9f3ca3386 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/terraform/get_accounts.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/helpers/terraform/get_accounts.py @@ -24,8 +24,7 @@ PARTITION = get_partition(REGION_DEFAULT) sts = boto3.client('sts') ssm = boto3.client('ssm') -response = ssm.get_parameter(Name='/adf/cross_account_access_role') -CROSS_ACCOUNT_ACCESS_ROLE = response['Parameter']['Value'] +ORGANIZATIONS_READONLY_ROLE = "adf/organizations/adf-organizations-readonly" def main(): @@ -43,8 +42,8 @@ def list_organizational_units_for_parent(parent_ou): organizations = get_boto3_client( 'organizations', ( - f'arn:{PARTITION}:sts::{MANAGEMENT_ACCOUNT_ID}:role/' - f'{CROSS_ACCOUNT_ACCESS_ROLE}-readonly' + f'arn:{PARTITION}:sts::{MANAGEMENT_ACCOUNT_ID}:' + f'role/{ORGANIZATIONS_READONLY_ROLE}' ), 'getOrganizationUnits', ) @@ -71,8 +70,8 @@ def get_accounts(): organizations = get_boto3_client( 'organizations', ( - f'arn:{PARTITION}:sts::{MANAGEMENT_ACCOUNT_ID}:role/' - f'{CROSS_ACCOUNT_ACCESS_ROLE}-readonly' + f'arn:{PARTITION}:sts::{MANAGEMENT_ACCOUNT_ID}:' + f'role/{ORGANIZATIONS_READONLY_ROLE}' ), 'getaccountIDs', ) @@ -96,8 +95,8 @@ def get_accounts_from_ous(): organizations = get_boto3_client( 'organizations', ( - f'arn:{PARTITION}:sts::{MANAGEMENT_ACCOUNT_ID}:role/' - f'{CROSS_ACCOUNT_ACCESS_ROLE}-readonly' + f'arn:{PARTITION}:sts::{MANAGEMENT_ACCOUNT_ID}:' + f'role/{ORGANIZATIONS_READONLY_ROLE}' ), 'getRootAccountIDs', ) diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/cloudformation.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/cloudformation.py index e045c66fa..4111ebb99 100644 --- 
a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/cloudformation.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/cloudformation.py @@ -29,7 +29,6 @@ CFN_UNACCEPTED_CHARS = re.compile(r"[^-a-zA-Z0-9]") ADF_GLOBAL_IAM_STACK_NAME = 'adf-global-base-iam' ADF_GLOBAL_BOOTSTRAP_STACK_NAME = 'adf-global-base-bootstrap' -ADF_GLOBAL_ADF_BUILD_STACK_NAME = 'adf-global-base-adf-build' class StackProperties: @@ -148,7 +147,6 @@ def _get_valid_stack_names(self): if self.region == self.deployment_account_region: valid_stack_names.append(ADF_GLOBAL_IAM_STACK_NAME) valid_stack_names.append(ADF_GLOBAL_BOOTSTRAP_STACK_NAME) - valid_stack_names.append(ADF_GLOBAL_ADF_BUILD_STACK_NAME) return valid_stack_names @@ -405,9 +403,9 @@ def _create_change_set(self): raise GenericAccountConfigureError(error) from error except WaiterError as error: err = error.last_response - if CloudFormation._change_set_failed_due_to_empty( - err["Status"], - err["StatusReason"], + if err and CloudFormation._change_set_failed_due_to_empty( + err.get("Status", ""), + err.get("StatusReason", ""), ): LOGGER.debug( "%s in %s - CloudFormation ChangeSet %s does not contain " @@ -707,21 +705,7 @@ def get_stack_output(self, value): return None # Return None if describe stack call fails def get_stack_status(self): - try: - stack = self.client.describe_stacks( - StackName=self.stack_name - ) - return stack['Stacks'][0]['StackStatus'] - except BaseException as error: - LOGGER.debug( - "%s in %s - Attempted to get stack status from %s but it " - "failed with: %s", - self.account_id, - self.region, - self.stack_name, - error, - ) - return None # Return None if the stack does not exist + return self._get_stack_status(self.stack_name) def delete_stack(self, stack_name, wait_override=False): try: diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/deployment_map.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/deployment_map.py index 72c147f04..db3dbacde 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/deployment_map.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/deployment_map.py @@ -72,7 +72,7 @@ def _read(self, file_path=None): try: LOGGER.info('Loading deployment_map file %s', file_path) with open(file_path, mode='r', encoding='utf-8') as stream: - _input = yaml.load(stream, Loader=yaml.FullLoader) + _input = yaml.safe_load(stream) return SchemaValidation(_input).validated except FileNotFoundError: LOGGER.warning('No default map file found at %s, continuing', file_path) diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/parameter_store.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/parameter_store.py index e3fee9fc3..7397a06a2 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/parameter_store.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/parameter_store.py @@ -111,6 +111,25 @@ def fetch_parameter(self, name, with_decryption=False, adf_only=True): ) return response['Parameter']['Value'] except self.client.exceptions.ParameterNotFound as error: + LOGGER.debug('Parameter %s not found', param_name) raise ParameterNotFoundError( f'Parameter {param_name} Not Found', ) from error + + def fetch_parameter_accept_not_found( + self, + name, + 
with_decryption=False, + adf_only=True, + default_value=None, + ): + """ + Performs the fetch_parameter action, while catching the + ParameterNotFoundError and returning the configured default_value + instead if this happens. + """ + try: + return self.fetch_parameter(name, with_decryption, adf_only) + except ParameterNotFoundError: + LOGGER.debug('Using default instead: %s', default_value) + return default_value diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/repo.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/repo.py index f2ce6cbec..d40384f14 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/repo.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/repo.py @@ -36,7 +36,10 @@ def __init__(self, account_id, name, description=''): self.account_id = account_id self.partition = get_partition(DEPLOYMENT_ACCOUNT_REGION) self.session = sts.assume_cross_account_role( - f'arn:{self.partition}:iam::{account_id}:role/adf-automation-role', + ( + f'arn:{self.partition}:iam::{account_id}:' + 'role/adf-automation-role' + ), f'create_repo_{account_id}' ) @@ -70,9 +73,9 @@ def define_repo_parameters(self): }] def create_update(self): - s3_object_path = s3.put_object( - "adf-build/templates/codecommit.yml", - "templates/codecommit.yml" + s3_object_path = s3.build_pathing_style( + style="path", + key="adf-build/templates/codecommit.yml", ) cloudformation = CloudFormation( region=DEPLOYMENT_ACCOUNT_REGION, diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/rule.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/rule.py index 92f616f02..0a7c01b6d 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/rule.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/rule.py @@ -16,7 +16,6 @@ from sts import STS LOGGER = configure_logger(__name__) -TARGET_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '..')) DEPLOYMENT_ACCOUNT_ID = os.environ["ACCOUNT_ID"] DEPLOYMENT_ACCOUNT_REGION = os.environ["AWS_REGION"] SOURCE_ACCOUNT_REGION = os.environ["AWS_REGION"] @@ -35,14 +34,17 @@ def __init__(self, source_account_id): self.partition = get_partition(DEPLOYMENT_ACCOUNT_REGION) # Requirement adf-automation-role to exist on target self.role = sts.assume_cross_account_role( - f'arn:{self.partition}:iam::{source_account_id}:role/adf-automation-role', + ( + f'arn:{self.partition}:iam::{source_account_id}:' + 'role/adf-automation-role' + ), f'create_rule_{source_account_id}' ) def create_update(self): - s3_object_path = s3.put_object( - "adf-build/templates/events.yml", - f"{TARGET_DIR}/templates/events.yml" + s3_object_path = s3.build_pathing_style( + style="path", + key="adf-build/templates/events.yml", ) cloudformation = CloudFormation( region=SOURCE_ACCOUNT_REGION, diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/s3.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/s3.py index 805501a3a..2ebd6fbfd 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/s3.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/s3.py @@ -18,11 +18,12 @@ class S3: Class used for modeling S3 """ - def __init__(self, region, bucket): + def __init__(self, region, bucket, 
kms_key_arn=None): self.region = region self.client = boto3.client('s3', region_name=region) self.resource = boto3.resource('s3', region_name=region) self.bucket = bucket + self.kms_key_arn = kms_key_arn @staticmethod def supported_path_styles(): @@ -159,10 +160,14 @@ def _perform_put_object(self, key, file_path, object_acl="private"): self.region, ) with open(file_path, mode='rb') as file_handler: - self.resource.Object(self.bucket, key).put( - ACL=object_acl, - Body=file_handler, - ) + props = { + "ACL": object_acl, + "Body": file_handler, + } + if self.kms_key_arn: + props['ServerSideEncryption'] = 'aws:kms' + props['SSEKMSKeyId'] = self.kms_key_arn + self.resource.Object(self.bucket, key).put(**props) LOGGER.debug("Upload of %s was successful.", key) except BaseException: LOGGER.error("Failed to upload %s", key, exc_info=True) diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/stepfunctions.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/stepfunctions.py index 45d23f1ca..1356b4c98 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/stepfunctions.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/stepfunctions.py @@ -63,7 +63,7 @@ def _start_statemachine(self): stateMachineArn=( f"arn:{partition}:states:{self.deployment_account_region}:" f"{self.deployment_account_id}:stateMachine:" - "EnableCrossAccountAccess" + "adf-bootstrap-enable-cross-account" ), input=json.dumps({ "deployment_account_region": self.deployment_account_region, @@ -112,7 +112,7 @@ def _wait_state_machine_execution(self): if self.execution_status in ('FAILED', 'ABORTED', 'TIMED_OUT'): raise AssertionError( - "State Machine on Deployment account" + "State Machine on Deployment account " f"{self.deployment_account_id} has " f"status: {self.execution_status}, see logs" ) diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/sts.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/sts.py index 03ce5f4f8..ade132668 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/sts.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/sts.py @@ -5,28 +5,123 @@ """ import boto3 +import botocore from logger import configure_logger LOGGER = configure_logger(__name__) +ACCESS_DENIED_ERROR_CODE = "AccessDenied" +ADF_JUMP_ROLE_NAME = ( + "adf/account-bootstrapping/jump/" + "adf-bootstrapping-cross-account-jump-role" +) +ADF_BOOTSTRAP_UPDATE_DEPLOYMENT_ROLE_NAME = ( + "adf/bootstrap/" + "adf-bootstrap-update-deployment-role" +) class STS: """Class used for modeling STS """ - def __init__(self): - self.client = boto3.client('sts') + def __init__(self, client=None): + self.client = client or boto3.client('sts') def assume_cross_account_role(self, role_arn, role_session_name): """Assumes a role in another account and returns the temporary credentials """ + LOGGER.debug( + "Assuming into %s with session name: %s", + role_arn, + role_session_name, + ) sts_response = self.client.assume_role( RoleArn=role_arn, RoleSessionName=role_session_name ) + LOGGER.info( + "Assumed into %s with session name: %s", + role_arn, + role_session_name, + ) return boto3.Session( aws_access_key_id=sts_response['Credentials']['AccessKeyId'], aws_secret_access_key=sts_response['Credentials']['SecretAccessKey'], 
aws_session_token=sts_response['Credentials']['SessionToken'], ) + + @staticmethod + def _build_role_arn( + partition, + account_id, + role_name, + ): + return f"arn:{partition}:iam::{account_id}:role/{role_name}" + + def assume_bootstrap_deployment_role( + self, + partition, + management_account_id, + account_id, + privileged_role_name, + role_session_name, + ): + """ + Assuming into the JumpRole first, while using the role credentials + it will attempt to assume into the privileged access role first. + + If access to the privileged cross-account access role is denied, + the Access Denied error is caught. In this case, it will attempt to + assume into the ADF Bootstrap Update Deployment role instead. + + The privileged cross-account access role is only granted access to if + the account is not bootstrapped by ADF yet. Or when ADF is configured + with a GrantOrgWidePrivilegedBootstrapAccessUntil date/time that is in + the future. + """ + LOGGER.info( + "Using ADF Account-Bootstrapping Jump Role to assume " + "into account %s", + account_id, + ) + jump_role_session = self.assume_cross_account_role( + STS._build_role_arn( + partition, + management_account_id, + ADF_JUMP_ROLE_NAME, + ), + role_session_name, + ) + + jump_role_sts = STS(jump_role_session.client('sts')) + try: + session = jump_role_sts.assume_cross_account_role( + STS._build_role_arn( + partition, + account_id, + privileged_role_name, + ), + role_session_name, + ) + LOGGER.warning( + "Using the privileged cross-account access role: %s, " + "as access to this role was granted for account %s", + privileged_role_name, + account_id, + ) + return session + except botocore.exceptions.ClientError as error: + if error.response["Error"]["Code"] == ACCESS_DENIED_ERROR_CODE: + # The access denied error most likely implies that the + # account is already bootstrapped by ADF. Hence the ADF + # Bootstrap Update Deployment role should be used instead. + return jump_role_sts.assume_cross_account_role( + STS._build_role_arn( + partition, + account_id, + ADF_BOOTSTRAP_UPDATE_DEPLOYMENT_ROLE_NAME, + ), + role_session_name, + ) + raise diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/stubs/stub_cloudformation.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/stubs/stub_cloudformation.py index 68f849a38..4965e4de5 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/stubs/stub_cloudformation.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/stubs/stub_cloudformation.py @@ -68,9 +68,7 @@ 'ParentId': 'Unique-Stack-Id', }, { - # Should be filtered out when deleting deprecated base stacks - # This is current, but should only exist in the global management - # account. 
+ # Should be deprecated when deleting deprecated base stacks 'StackName': 'adf-global-base-adf-build', 'StackStatus': 'CREATE_COMPLETE', }, diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_cloudformation.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_cloudformation.py index fd1203aa7..4c99a721a 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_cloudformation.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_cloudformation.py @@ -62,21 +62,6 @@ def test_global_get_stack_name(global_cls): assert global_cls.stack_name == 'adf-global-base-bootstrap' -def test_global_build_get_stack_name(): - cfn = CloudFormation( - region='us-east-1', - deployment_account_region='us-east-1', - role=boto3, - wait=False, - stack_name=None, - template_url='https://some/path/global.yml', - s3=None, - s3_key_path='adf-build', - account_id=123 - ) - assert cfn.stack_name == 'adf-global-base-adf-build' - - def test_global_deployment_get_stack_name(): cfn = CloudFormation( region='us-east-1', @@ -323,72 +308,13 @@ def test_delete_deprecated_base_stacks_some_deletions(paginator_mock, logger, gl call(StackName='adf-global-base-deployment'), # ^ We are not in the deployment OU with this CloudFormation instance call(StackName='adf-global-base-deployment-SomeOtherStack'), + call(StackName='adf-global-base-adf-build'), call(StackName='adf-global-base-dev'), call(StackName='adf-global-base-test'), call(StackName='adf-global-base-acceptance'), call(StackName='adf-global-base-prod'), ]) - assert global_cls.client.delete_stack.call_count == 8 - logger.warning.assert_has_calls([ - call('Removing stack: %s', 'adf-global-base-iam'), - # ^ As we delete a bootstrap stack we need to recreate the IAM stack, - # hence deleting it. 
- call('Removing stack: %s', 'adf-regional-base-bootstrap'), - # ^ We are deploying in a global region, not regional - call('Removing stack: %s', 'adf-global-base-deployment'), - # ^ We are not in the deployment OU with this CloudFormation instance - call('Removing stack: %s', 'adf-global-base-deployment-SomeOtherStack'), - call('Removing stack: %s', 'adf-global-base-dev'), - call('Removing stack: %s', 'adf-global-base-test'), - call('Removing stack: %s', 'adf-global-base-acceptance'), - call('Removing stack: %s', 'adf-global-base-prod'), - call( - 'Please remove stack %s manually, state %s implies that it ' - 'cannot be deleted automatically', - 'adf-global-base-some-ou', - 'CREATE_IN_PROGRESS', - ), - ]) - - -@patch('cloudformation.LOGGER') -@patch("cloudformation.paginator") -def test_delete_deprecated_base_stacks_management_account_adf_build(paginator_mock, logger): - global_cls = CloudFormation( - region='us-east-1', - deployment_account_region='us-east-1', - role=boto3, - wait=False, - stack_name=None, - template_url='https://some/path/global.yml', - s3=None, - s3_key_path='adf-build', - account_id=123 - ) - global_cls.client = Mock() - paginator_mock.return_value = stub_cloudformation.list_stacks.get('StackSummaries') - global_cls.client.describe_stacks.return_value = { - "Stacks": [ - { - 'StackName': 'adf-global-base-iam', - 'StackStatus': 'CREATE_COMPLETE', - }, - ], - } - global_cls.delete_deprecated_base_stacks() - global_cls.client.delete_stack.assert_has_calls([ - call(StackName='adf-global-base-iam'), - call(StackName='adf-regional-base-bootstrap'), - # ^ We are deploying in a global region, not regional - call(StackName='adf-global-base-deployment'), - # ^ We are not in the deployment OU with this CloudFormation instance - call(StackName='adf-global-base-deployment-SomeOtherStack'), - call(StackName='adf-global-base-dev'), - call(StackName='adf-global-base-test'), - call(StackName='adf-global-base-acceptance'), - call(StackName='adf-global-base-prod'), - ]) - assert global_cls.client.delete_stack.call_count == 8 + assert global_cls.client.delete_stack.call_count == 9 logger.warning.assert_has_calls([ call('Removing stack: %s', 'adf-global-base-iam'), # ^ As we delete a bootstrap stack we need to recreate the IAM stack, @@ -398,6 +324,7 @@ def test_delete_deprecated_base_stacks_management_account_adf_build(paginator_mo call('Removing stack: %s', 'adf-global-base-deployment'), # ^ We are not in the deployment OU with this CloudFormation instance call('Removing stack: %s', 'adf-global-base-deployment-SomeOtherStack'), + call('Removing stack: %s', 'adf-global-base-adf-build'), call('Removing stack: %s', 'adf-global-base-dev'), call('Removing stack: %s', 'adf-global-base-test'), call('Removing stack: %s', 'adf-global-base-acceptance'), @@ -429,18 +356,20 @@ def test_delete_deprecated_base_stacks_no_iam(paginator_mock, logger, global_cls call(StackName='adf-global-base-deployment'), # ^ We are not in the deployment OU with this CloudFormation instance call(StackName='adf-global-base-deployment-SomeOtherStack'), + call(StackName='adf-global-base-adf-build'), call(StackName='adf-global-base-dev'), call(StackName='adf-global-base-test'), call(StackName='adf-global-base-acceptance'), call(StackName='adf-global-base-prod'), ]) - assert global_cls.client.delete_stack.call_count == 7 + assert global_cls.client.delete_stack.call_count == 8 logger.warning.assert_has_calls([ call('Removing stack: %s', 'adf-regional-base-bootstrap'), # ^ We are deploying in a global region, not regional 
call('Removing stack: %s', 'adf-global-base-deployment'), # ^ We are not in the deployment OU with this CloudFormation instance call('Removing stack: %s', 'adf-global-base-deployment-SomeOtherStack'), + call('Removing stack: %s', 'adf-global-base-adf-build'), call('Removing stack: %s', 'adf-global-base-dev'), call('Removing stack: %s', 'adf-global-base-test'), call('Removing stack: %s', 'adf-global-base-acceptance'), diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_partition.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_partition.py index f2a42cf80..34af6b5e3 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_partition.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_partition.py @@ -1,11 +1,7 @@ # Copyright Amazon.com Inc. or its affiliates. # SPDX-License-Identifier: MIT-0 -"""Tests for partition.py - -Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. -SPDX-License-Identifier: MIT-0 -""" +"""Tests for partition.py""" import pytest diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_sts.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_sts.py new file mode 100644 index 000000000..a782115ae --- /dev/null +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/python/tests/test_sts.py @@ -0,0 +1,534 @@ +# Copyright Amazon.com Inc. or its affiliates. +# SPDX-License-Identifier: MIT-0 + +"""Tests for sts.py""" + +# pylint: skip-file + +import pytest +import boto3 +from botocore.exceptions import ClientError + +from unittest.mock import Mock, patch, call +from sts import ( + ADF_JUMP_ROLE_NAME, + ADF_BOOTSTRAP_UPDATE_DEPLOYMENT_ROLE_NAME, + STS, +) + + +def build_mocked_sts_client_success(identifier=""): + sts_client = Mock() + sts_client.assume_role.return_value = build_success_assume_role_response( + identifier, + ) + return sts_client + + +def build_success_assume_role_response(identifier): + return { + "Credentials": { + "AccessKeyId": f"ak{identifier}", + "SecretAccessKey": f"sak{identifier}", + "SessionToken": f"st{identifier}", + }, + } + + +@pytest.fixture +def sts_client(): + return Mock() + + +@patch("sts.LOGGER") +def test_assume_cross_account_role(logger): + sts_client = build_mocked_sts_client_success() + sts = STS(sts_client) + role_arn = "arn:aws:iam::123456789012:role/test-role" + role_session_name = "test-session" + + session = sts.assume_cross_account_role(role_arn, role_session_name) + + assert isinstance(session, boto3.Session) + assert session.get_credentials().access_key == "ak" + assert session.get_credentials().secret_key == "sak" + assert session.get_credentials().token == "st" + + logger.debug.assert_called_once_with( + "Assuming into %s with session name: %s", + role_arn, + role_session_name, + ) + logger.info.assert_called_once_with( + "Assumed into %s with session name: %s", + role_arn, + role_session_name, + ) + + sts_client.assume_role.assert_called_once_with( + RoleArn=role_arn, + RoleSessionName=role_session_name, + ) +# --------------------------------------------------------- + + +def test_build_role_arn(): + role_arn = STS._build_role_arn( + partition="aws", + account_id="123456789012", + role_name="test-role", + ) + assert role_arn == "arn:aws:iam::123456789012:role/test-role" +# 
--------------------------------------------------------- + + +@patch("sts.boto3") +@patch("sts.LOGGER") +def test_assume_bootstrap_deployment_role_privileged_allowed(logger, boto_mock): + root_sts_client = build_mocked_sts_client_success('-jump') + jump_session_mock = Mock() + deploy_session_mock = Mock() + boto_mock.Session.side_effect = [ + jump_session_mock, + deploy_session_mock, + ] + + jump_session_sts_client = build_mocked_sts_client_success('-privileged') + jump_session_mock.client.return_value = jump_session_sts_client + + sts = STS(root_sts_client) + partition = "aws" + management_account_id = '999999999999' + account_id = "123456789012" + privileged_role_name = "test-privileged-role" + role_session_name = "test-session" + + session = sts.assume_bootstrap_deployment_role( + partition, + management_account_id, + account_id, + privileged_role_name, + role_session_name, + ) + + assert session == deploy_session_mock + + boto_mock.Session.assert_has_calls([ + call( + aws_access_key_id="ak-jump", + aws_secret_access_key="sak-jump", + aws_session_token="st-jump", + ), + call( + aws_access_key_id="ak-privileged", + aws_secret_access_key="sak-privileged", + aws_session_token="st-privileged", + ), + ]) + assert boto_mock.Session.call_count == 2 + + jump_role_arn = STS._build_role_arn( + partition, + management_account_id, + ADF_JUMP_ROLE_NAME, + ) + privileged_role_arn = STS._build_role_arn( + partition, + account_id, + privileged_role_name, + ) + logger.debug.assert_has_calls([ + call( + "Assuming into %s with session name: %s", + jump_role_arn, + role_session_name, + ), + call( + "Assuming into %s with session name: %s", + privileged_role_arn, + role_session_name, + ), + ]) + assert logger.debug.call_count == 2 + logger.info.assert_has_calls([ + call( + "Using ADF Account-Bootstrapping Jump Role to assume into " + "account %s", + account_id, + ), + call( + "Assumed into %s with session name: %s", + jump_role_arn, + role_session_name, + ), + call( + "Assumed into %s with session name: %s", + privileged_role_arn, + role_session_name, + ), + ]) + assert logger.info.call_count == 3 + logger.warning.assert_called_once_with( + "Using the privileged cross-account access role: %s, " + "as access to this role was granted for account %s", + privileged_role_name, + account_id, + ) + + root_sts_client.assume_role.assert_called_once_with( + RoleArn=jump_role_arn, + RoleSessionName=role_session_name, + ) + + jump_session_sts_client.assume_role.assert_called_once_with( + RoleArn=privileged_role_arn, + RoleSessionName=role_session_name, + ) + + +@patch("sts.boto3") +@patch("sts.LOGGER") +def test_assume_bootstrap_deployment_other_error(logger, boto_mock): + root_sts_client = build_mocked_sts_client_success('-jump') + jump_session_mock = Mock() + deploy_session_mock = Mock() + boto_mock.Session.side_effect = [ + jump_session_mock, + deploy_session_mock, + ] + + jump_session_sts_client = Mock() + # Throw an Unknown error when it tried to access the privileged + # cross-account access role. 
+ error = ClientError( + error_response={'Error': {'Code': 'Unknown'}}, + operation_name='AssumeRole' + ) + jump_session_sts_client.assume_role.side_effect = error + jump_session_mock.client.return_value = jump_session_sts_client + + sts = STS(root_sts_client) + partition = "aws" + management_account_id = '999999999999' + account_id = "123456789012" + privileged_role_name = "test-privileged-role" + role_session_name = "test-session" + + with pytest.raises(ClientError): + sts.assume_bootstrap_deployment_role( + partition, + management_account_id, + account_id, + privileged_role_name, + role_session_name, + ) + + boto_mock.Session.assert_has_calls([ + call( + aws_access_key_id="ak-jump", + aws_secret_access_key="sak-jump", + aws_session_token="st-jump", + ), + ]) + assert boto_mock.Session.call_count == 1 + + jump_role_arn = STS._build_role_arn( + partition, + management_account_id, + ADF_JUMP_ROLE_NAME, + ) + privileged_role_arn = STS._build_role_arn( + partition, + account_id, + privileged_role_name, + ) + logger.debug.assert_has_calls([ + call( + "Assuming into %s with session name: %s", + jump_role_arn, + role_session_name, + ), + call( + "Assuming into %s with session name: %s", + privileged_role_arn, + role_session_name, + ), + ]) + assert logger.debug.call_count == 2 + logger.info.assert_has_calls([ + call( + "Using ADF Account-Bootstrapping Jump Role to assume into " + "account %s", + account_id, + ), + call( + "Assumed into %s with session name: %s", + jump_role_arn, + role_session_name, + ), + ]) + assert logger.info.call_count == 2 + logger.warning.assert_not_called() + + root_sts_client.assume_role.assert_called_once_with( + RoleArn=jump_role_arn, + RoleSessionName=role_session_name, + ) + + jump_session_sts_client.assume_role.assert_called_once_with( + RoleArn=privileged_role_arn, + RoleSessionName=role_session_name, + ) + + +@patch("sts.boto3") +@patch("sts.LOGGER") +def test_assume_bootstrap_deployment_role_privileged_access_denied( + logger, + boto_mock, +): + root_sts_client = build_mocked_sts_client_success('-jump') + jump_session_mock = Mock() + deploy_session_mock = Mock() + boto_mock.Session.side_effect = [ + jump_session_mock, + deploy_session_mock, + ] + + jump_session_sts_client = Mock() + jump_session_sts_client.assume_role.side_effect = [ + # Throw an Access Denied error when it tried to access the + # privileged cross-account access role. + ClientError( + error_response={'Error': {'Code': 'AccessDenied'}}, + operation_name='AssumeRole' + ), + # Accept the request for the ADF Bootstrap Update Deployment Role. 
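+ # The AccessDenied above should make the STS helper fall back to this + # role; that fallback behaviour is what this test verifies.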
+ build_success_assume_role_response( + '-deploy', + ), + ] + jump_session_mock.client.return_value = jump_session_sts_client + + sts = STS(root_sts_client) + partition = "aws" + management_account_id = '999999999999' + account_id = "123456789012" + privileged_role_name = "test-privileged-role" + role_session_name = "test-session" + + session = sts.assume_bootstrap_deployment_role( + partition, + management_account_id, + account_id, + privileged_role_name, + role_session_name, + ) + + assert session == deploy_session_mock + + boto_mock.Session.assert_has_calls([ + call( + aws_access_key_id="ak-jump", + aws_secret_access_key="sak-jump", + aws_session_token="st-jump", + ), + call( + aws_access_key_id="ak-deploy", + aws_secret_access_key="sak-deploy", + aws_session_token="st-deploy", + ), + ]) + assert boto_mock.Session.call_count == 2 + + jump_role_arn = STS._build_role_arn( + partition, + management_account_id, + ADF_JUMP_ROLE_NAME, + ) + privileged_role_arn = STS._build_role_arn( + partition, + account_id, + privileged_role_name, + ) + deploy_role_arn = STS._build_role_arn( + partition, + account_id, + ADF_BOOTSTRAP_UPDATE_DEPLOYMENT_ROLE_NAME, + ) + logger.debug.assert_has_calls([ + call( + "Assuming into %s with session name: %s", + jump_role_arn, + role_session_name, + ), + call( + "Assuming into %s with session name: %s", + privileged_role_arn, + role_session_name, + ), + call( + "Assuming into %s with session name: %s", + deploy_role_arn, + role_session_name, + ), + ]) + assert logger.debug.call_count == 3 + logger.info.assert_has_calls([ + call( + "Using ADF Account-Bootstrapping Jump Role to assume into " + "account %s", + account_id, + ), + call( + "Assumed into %s with session name: %s", + jump_role_arn, + role_session_name, + ), + call( + "Assumed into %s with session name: %s", + deploy_role_arn, + role_session_name, + ), + ]) + assert logger.info.call_count == 3 + logger.warning.assert_not_called() + + root_sts_client.assume_role.assert_called_once_with( + RoleArn=jump_role_arn, + RoleSessionName=role_session_name, + ) + + jump_session_sts_client.assume_role.assert_has_calls([ + call( + RoleArn=privileged_role_arn, + RoleSessionName=role_session_name, + ), + call( + RoleArn=deploy_role_arn, + RoleSessionName=role_session_name, + ), + ]) + assert jump_session_sts_client.assume_role.call_count == 2 + + +@patch("sts.boto3") +@patch("sts.LOGGER") +def test_assume_bootstrap_deployment_role_deployment_access_denied_too( + logger, + boto_mock, +): + root_sts_client = build_mocked_sts_client_success('-jump') + jump_session_mock = Mock() + deploy_session_mock = Mock() + boto_mock.Session.side_effect = [ + jump_session_mock, + deploy_session_mock, + ] + + jump_session_sts_client = Mock() + jump_session_sts_client.assume_role.side_effect = [ + # Throw an Access Denied error when it tried to access the + # privileged cross-account access role. 
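+ # In this scenario both the privileged role and the ADF Bootstrap Update + # Deployment role below deny access, so the error is expected to + # propagate to the caller rather than being swallowed.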
+ ClientError( + error_response={'Error': {'Code': 'AccessDenied'}}, + operation_name='AssumeRole' + ), + # Throw an Access Denied error when it tried to access the + # ADF Bootstrap Update Deployment Role + ClientError( + error_response={'Error': {'Code': 'AccessDenied'}}, + operation_name='AssumeRole' + ), + ] + jump_session_mock.client.return_value = jump_session_sts_client + + sts = STS(root_sts_client) + partition = "aws" + management_account_id = '999999999999' + account_id = "123456789012" + privileged_role_name = "test-privileged-role" + role_session_name = "test-session" + + with pytest.raises(ClientError): + sts.assume_bootstrap_deployment_role( + partition, + management_account_id, + account_id, + privileged_role_name, + role_session_name, + ) + + boto_mock.Session.assert_has_calls([ + call( + aws_access_key_id="ak-jump", + aws_secret_access_key="sak-jump", + aws_session_token="st-jump", + ), + ]) + assert boto_mock.Session.call_count == 1 + + jump_role_arn = STS._build_role_arn( + partition, + management_account_id, + ADF_JUMP_ROLE_NAME, + ) + privileged_role_arn = STS._build_role_arn( + partition, + account_id, + privileged_role_name, + ) + deploy_role_arn = STS._build_role_arn( + partition, + account_id, + ADF_BOOTSTRAP_UPDATE_DEPLOYMENT_ROLE_NAME, + ) + logger.debug.assert_has_calls([ + call( + "Assuming into %s with session name: %s", + jump_role_arn, + role_session_name, + ), + call( + "Assuming into %s with session name: %s", + privileged_role_arn, + role_session_name, + ), + call( + "Assuming into %s with session name: %s", + deploy_role_arn, + role_session_name, + ), + ]) + assert logger.debug.call_count == 3 + logger.info.assert_has_calls([ + call( + "Using ADF Account-Bootstrapping Jump Role to assume into " + "account %s", + account_id, + ), + call( + "Assumed into %s with session name: %s", + jump_role_arn, + role_session_name, + ), + ]) + assert logger.info.call_count == 2 + logger.warning.assert_not_called() + + root_sts_client.assume_role.assert_called_once_with( + RoleArn=jump_role_arn, + RoleSessionName=role_session_name, + ) + + jump_session_sts_client.assume_role.assert_has_calls([ + call( + RoleArn=privileged_role_arn, + RoleSessionName=role_session_name, + ), + call( + RoleArn=deploy_role_arn, + RoleSessionName=role_session_name, + ), + ]) + assert jump_session_sts_client.assume_role.call_count == 2 diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/resolver_upload.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/resolver_upload.py index 114c57754..1b8cd78f2 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/resolver_upload.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/resolver_upload.py @@ -70,12 +70,15 @@ def resolve(self, lookup_str: str, random_filename: str) -> str: bucket_name = self.parameter_store.fetch_parameter( f'cross_region/s3_regional_bucket/{region}' ) - s3_client = S3(region, bucket_name) + kms_key_arn = self.parameter_store.fetch_parameter( + f'cross_region/kms_arn/{region}' + ) + s3_client = S3(region, bucket_name, kms_key_arn=kms_key_arn) resolved_location = s3_client.put_object( - f"adf-upload/{object_key}/{random_filename}", - str(object_key), - style, - True # pre-check + key=f"adf-upload/{object_key}/{random_filename}", + file_path=str(object_key), + style=style, + pre_check=True, ) self.cache.add(lookup_str, resolved_location) return resolved_location diff --git 
a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/templates/events.yml b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/templates/events.yml index 06aa630e0..481706eef 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/templates/events.yml +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/shared/templates/events.yml @@ -11,6 +11,8 @@ Resources: EventRole: Type: AWS::IAM::Role Properties: + Path: /adf/cross-account-events/ + RoleName: !Sub adf-cc-event-from-${AWS::AccountId}-to-${DeploymentAccountId} AssumeRolePolicyDocument: Version: 2012-10-17 Statement: @@ -19,7 +21,9 @@ Resources: Service: - events.amazonaws.com Action: sts:AssumeRole - Path: / + Condition: + ArnEquals: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:events:${AWS::Region}:${AWS::AccountId}:rule/adf-cc-event-from-${AWS::AccountId}-to-${DeploymentAccountId}" Policies: - PolicyName: !Sub events-to-${DeploymentAccountId} PolicyDocument: @@ -27,7 +31,11 @@ Resources: Statement: - Effect: Allow Action: events:PutEvents - Resource: "*" + Resource: + - !Sub "arn:${AWS::Partition}:events:${AWS::Region}:${DeploymentAccountId}:event-bus/default" + Condition: + StringEquals: + "events:detail-type": "CodeCommit Repository State Change" EventRule: Type: AWS::Events::Rule diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/tests/test_config.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/tests/test_config.py index 58edb2bd1..3246130a1 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/tests/test_config.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/tests/test_config.py @@ -69,6 +69,23 @@ def test_raise_validation_length_deployment_target_region(cls): assert cls._parse_config() +def test_sorted_regions(cls): + cls.config_contents["regions"]["deployment-account"] = [ + "us-east-1", + ] + cls.config_contents["regions"]["targets"] = [ + "us-west-2", + "us-east-1", + "eu-west-3", + ] + cls._parse_config() + assert cls.sorted_regions() == [ + "us-east-1", + "eu-west-3", + "us-west-2", + ] + + def test_raise_validation_organizations_scp(cls): cls.config_contents["config"]["scp"]["keep-default-scp"] = "blah" with raises(InvalidConfigError): diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/tests/test_main.py b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/tests/test_main.py index 1e3743ade..8d66e5b08 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/tests/test_main.py +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/adf-build/tests/test_main.py @@ -45,6 +45,7 @@ def sts(): 'Arn': 'string' } sts.assume_cross_account_role.return_value = role_mock + sts.assume_bootstrap_deployment_role.return_value = role_mock return sts @@ -133,7 +134,7 @@ def test_prepare_deployment_account_defaults(param_store_cls, cls, sts): ) for param_store in parameter_store_list: assert param_store.put_parameter.call_count == ( - 14 if param_store == deploy_param_store else 8 + 15 if param_store == deploy_param_store else 9 ) param_store.put_parameter.assert_has_calls( [ @@ -141,9 +142,10 @@ def test_prepare_deployment_account_defaults(param_store_cls, cls, sts): call('adf_log_level', 'CRITICAL'), call('cross_account_access_role', 'some_role'), call( - 'deployment_account_bucket', - 'some_deployment_account_bucket', + 'shared_modules_bucket', + 
'some_shared_modules_bucket', ), + call('bootstrap_templates_bucket', 'some_bucket'), call('deployment_account_id', deployment_account_id), call('management_account_id', '123'), call('organization_id', 'o-123456789'), @@ -234,7 +236,7 @@ def test_prepare_deployment_account_specific_config(param_store_cls, cls, sts): ) for param_store in parameter_store_list: assert param_store.put_parameter.call_count == ( - 16 if param_store == deploy_param_store else 8 + 17 if param_store == deploy_param_store else 9 ) param_store.put_parameter.assert_has_calls( [ @@ -242,9 +244,10 @@ def test_prepare_deployment_account_specific_config(param_store_cls, cls, sts): call('adf_log_level', 'CRITICAL'), call('cross_account_access_role', 'some_role'), call( - 'deployment_account_bucket', - 'some_deployment_account_bucket', + 'shared_modules_bucket', + 'some_shared_modules_bucket', ), + call('bootstrap_templates_bucket', 'some_bucket'), call('deployment_account_id', deployment_account_id), call('management_account_id', '123'), call('organization_id', 'o-123456789'), diff --git a/src/lambda_codebase/initial_commit/bootstrap_repository/tox.ini b/src/lambda_codebase/initial_commit/bootstrap_repository/tox.ini index d3bb64611..df5b3f830 100644 --- a/src/lambda_codebase/initial_commit/bootstrap_repository/tox.ini +++ b/src/lambda_codebase/initial_commit/bootstrap_repository/tox.ini @@ -21,7 +21,7 @@ setenv= CODEBUILD_BUILD_ID=abcdef S3_BUCKET=some_bucket S3_BUCKET_NAME=some_bucket - DEPLOYMENT_ACCOUNT_BUCKET=some_deployment_account_bucket + SHARED_MODULES_BUCKET=some_shared_modules_bucket MANAGEMENT_ACCOUNT_ID=123 ADF_VERSION=1.0.0 ADF_LOG_LEVEL=CRITICAL diff --git a/src/lambda_codebase/initial_commit/handler.py b/src/lambda_codebase/initial_commit/handler.py index a292bb2f9..6b7904b3e 100644 --- a/src/lambda_codebase/initial_commit/handler.py +++ b/src/lambda_codebase/initial_commit/handler.py @@ -29,6 +29,8 @@ def lambda_handler(event, _context, prior_error=err): "StackId": event["StackId"], "Reason": str(prior_error), } + if not event["ResponseURL"].lower().startswith('http'): + raise ValueError('ResponseURL is forbidden') from None with urlopen( Request( event["ResponseURL"], diff --git a/src/lambda_codebase/initial_commit/initial_commit.py b/src/lambda_codebase/initial_commit/initial_commit.py index 00df901de..18aacde70 100644 --- a/src/lambda_codebase/initial_commit/initial_commit.py +++ b/src/lambda_codebase/initial_commit/initial_commit.py @@ -334,6 +334,11 @@ def generate_commits(event, repo_name, directory, parent_commit_id=None): "bootstrap_repository/adf-bootstrap/example-global-iam.yml", "/tmp/global-iam.yml", ) + initial_deploy_sample_global_iam = create_adf_config_file( + event.ResourceProperties, + "bootstrap_repository/adf-bootstrap/deployment/example-global-iam.yml", + "/tmp/global-deploy-iam.yml", + ) create_deployment_account = ( event.ResourceProperties.DeploymentAccountFullName @@ -348,6 +353,7 @@ def generate_commits(event, repo_name, directory, parent_commit_id=None): files_to_commit.append(adf_deployment_account_yml) files_to_commit.append(adf_config) files_to_commit.append(initial_sample_global_iam) + files_to_commit.append(initial_deploy_sample_global_iam) chunked_files = chunks([f.as_dict() for f in files_to_commit], 99) commit_id = parent_commit_id diff --git a/src/lambda_codebase/jump_role_manager/main.py b/src/lambda_codebase/jump_role_manager/main.py new file mode 100644 index 000000000..92937c9b4 --- /dev/null +++ b/src/lambda_codebase/jump_role_manager/main.py @@ -0,0 +1,542 @@ +# 
Copyright Amazon.com Inc. or its affiliates. +# SPDX-License-Identifier: MIT-0 + +""" +The Jump Role Manager main that is called when ADF is asked to bootstrap an +AWS Account that it has not bootstrapped yet. + +This manager is responsible for locking down access to accounts that were +bootstrapped before and for granting access to the privileged +CrossAccountAccessRole only when we have no other method to bootstrap/manage +the AWS account. + +Theory of operation: + It accesses AWS Organizations and walks through all the accounts that are + present. + + For each account, it will test whether the account was bootstrapped by + ADF before. It tests this by assuming the Test Bootstrap Role + (`adf/bootstrap/adf-bootstrap-test-role`) in the specific account. + If that works, we know that the bootstrap stack is + present and we should rely on the ADF Bootstrap Update Deployment role + (`adf/adf-bootstrap/adf-bootstrap-update-deployment-role`). + + If that is not present, we should rely on the CrossAccountAccessRole + instead. +""" + +import ast +import datetime +import json +import math +import os + +from aws_xray_sdk.core import patch_all +import boto3 +from botocore.exceptions import ClientError + +# ADF imports +from logger import configure_logger +from organizations import Organizations +from parameter_store import ParameterStore +from sts import STS + +patch_all() + +LOGGER = configure_logger(__name__) + +ADF_JUMP_MANAGED_POLICY_ARN = os.getenv("ADF_JUMP_MANAGED_POLICY_ARN") +AWS_PARTITION = os.getenv("AWS_PARTITION") +AWS_REGION = os.getenv("AWS_REGION") +CROSS_ACCOUNT_ACCESS_ROLE_NAME = os.getenv("CROSS_ACCOUNT_ACCESS_ROLE_NAME") +DEPLOYMENT_ACCOUNT_ID = os.getenv("DEPLOYMENT_ACCOUNT_ID") +MANAGEMENT_ACCOUNT_ID = os.getenv("MANAGEMENT_ACCOUNT_ID") + +# Special accounts are either not considered ever (the management account) +# or are on the priority list to get bootstrapped first (deployment account) +# +# The management account is excluded, as that is not permitted to +# assume with the Cross Account Access Role anyway. +# The deployment account is prioritized as first to bootstrap as all +# other accounts will depend on the resources in the deployment account. +SPECIAL_ACCOUNT_IDS = [ + DEPLOYMENT_ACCOUNT_ID, + MANAGEMENT_ACCOUNT_ID, +] + +ADF_TEST_BOOTSTRAP_ROLE_NAME = "adf/bootstrap/adf-bootstrap-test-role" +MAX_POLICY_VERSIONS = 4 +POLICY_VALID_DURATION_IN_HOURS = 2 +INCLUDE_NEW_ACCOUNTS_IF_JOINED_IN_LAST_HOURS = 2 + +MAX_MANAGED_POLICY_LENGTH = 6144 +ZERO_ACCOUNTS_POLICY_LENGTH = 265 +CHARS_PER_ACCOUNT_ID = 15 +MAX_NUMBER_OF_ACCOUNTS = math.floor( + ( + MAX_MANAGED_POLICY_LENGTH + - ZERO_ACCOUNTS_POLICY_LENGTH + ) + / CHARS_PER_ACCOUNT_ID, +) + +IAM_CLIENT = boto3.client("iam") +ORGANIZATIONS_CLIENT = boto3.client("organizations") +TAGGING_CLIENT = boto3.client("resourcegroupstaggingapi") +CODEPIPELINE_CLIENT = boto3.client("codepipeline") + + +def _verify_bootstrap_exists(sts, account_id): + try: + sts.assume_cross_account_role( + ( + f"arn:{AWS_PARTITION}:iam::{account_id}:" + f"role/{ADF_TEST_BOOTSTRAP_ROLE_NAME}" + ), + "jump_role_manager", + ) + return True + except ClientError as error: + LOGGER.debug( + "Could not assume into %s in %s due to %s", + ADF_TEST_BOOTSTRAP_ROLE_NAME, + account_id, + error, + ) + return False + + +def _get_filtered_non_special_root_ou_accounts( + organizations, + sts, + remove_base_in_root, +): + """ + Get the list of account ids of AWS Accounts in the root OU that were + bootstrapped by ADF before.
+ + If the bootstrap stacks need to be removed upon a move of an ADF Account + to the root of the AWS Organization, i.e. move/to_root/action equals + either 'remove-base' or 'remove_base', then we should be allowed to use + the privileged role in root accounts too to remove the bootstrap stacks + accordingly. As deleting the stacks would also delete the required + ADF Bootstrap Update Deployment role, hence we cannot perform the action + with that role. Privileged access is only required to remove the + bootstrap stacks from those accounts. Hence it should only allow + privileged access if it is bootstrapped. + """ + root_ou_accounts = organizations.get_accounts_for_parent( + organizations.get_ou_root_id(), + ) + verified_root_ou_accounts = list(map( + lambda account: { + **account, + "Bootstrapped": _verify_bootstrap_exists( + sts, + account.get('Id'), + ), + }, + filter( + lambda account: account.get('Id') not in SPECIAL_ACCOUNT_IDS, + root_ou_accounts, + ), + )) + + new_if_joined_since = ( + datetime.datetime.now(datetime.UTC) + - datetime.timedelta( + hours=INCLUDE_NEW_ACCOUNTS_IF_JOINED_IN_LAST_HOURS, + ) + ) + filtered_root_ou_accounts = list(filter( + lambda account: ( + ( + remove_base_in_root + # Only allow privileged access to accounts that were + # bootstrapped so we are allowed to delete the stacks + and account["Bootstrapped"] + ) or ( + not account["Bootstrapped"] + # If it joined recently, we need to be able to bootstrap + # the account with privileged access + and account.get('JoinedTimestamp') > new_if_joined_since + ) + ), + verified_root_ou_accounts, + )) + return filtered_root_ou_accounts + + +def _get_non_special_adf_accessible_accounts( + organizations, + sts, + protected_ou_ids, +): + """ + Get the account ids of all AWS Accounts in this AWS Organization, + with the exception of the accounts that are inactive or located in + a protected OU. + """ + adf_accessible_accounts = organizations.get_accounts( + protected_ou_ids=protected_ou_ids, + # Exclude accounts that are in the root of the AWS Organization, + # as these would be retrieved via the + # _get_adf_bootstrapped_accounts_in_root_ou method. 
+ include_root=False, + ) + filtered_adf_accessible_accounts = list(filter( + # Only allow privileged access to accounts that are NOT bootstrapped + lambda account: ( + account.get('Id') not in SPECIAL_ACCOUNT_IDS + and not _verify_bootstrap_exists(sts, account.get('Id')) + ), + adf_accessible_accounts, + )) + return filtered_adf_accessible_accounts + + +def _get_non_special_privileged_access_account_ids( + organizations, + sts, + protected_ou_ids, + include_root, +): + privileged_access_accounts = ( + _get_non_special_adf_accessible_accounts( + organizations, + sts, + protected_ou_ids, + ) + + _get_filtered_non_special_root_ou_accounts( + organizations, + sts, + include_root, + ) + ) + return [ + account.get("Id") for account in privileged_access_accounts + ] + + +def _get_non_bootstrapped_accounts( + organizations, + sts, + parameter_store, +): + protected_ou_ids = ast.literal_eval( + parameter_store.fetch_parameter_accept_not_found( + name='protected', + default_value='[]', + ), + ) + move_to_root_action = parameter_store.fetch_parameter_accept_not_found( + name='moves/to_root/action', + default_value='safe', + ) + include_root = move_to_root_action in ['remove-base', 'remove_base'] + + optional_deployment_account_first = ( + [] if _verify_bootstrap_exists(sts, DEPLOYMENT_ACCOUNT_ID) + else [DEPLOYMENT_ACCOUNT_ID] + ) + sorted_non_bootstrapped_account_ids = list( + optional_deployment_account_first + + # Sorted list, so we get to bootstrap the accounts in this order too + + sorted( + _get_non_special_privileged_access_account_ids( + organizations, + sts, + protected_ou_ids, + include_root, + ) + ) + ) + return sorted_non_bootstrapped_account_ids + + +def _delete_old_policy_versions(iam): + LOGGER.debug( + "Checking policy versions for %s", + ADF_JUMP_MANAGED_POLICY_ARN, + ) + response = iam.list_policy_versions( + PolicyArn=ADF_JUMP_MANAGED_POLICY_ARN, + ) + if len(response.get('Versions', [])) > MAX_POLICY_VERSIONS: + LOGGER.debug( + "Found %d policy versions, which is greater than the defined " + "maximum of %d. Hence going through the list to select one to " + "delete.", + len(response.get('Versions')), + MAX_POLICY_VERSIONS, + ) + + oldest_version_id = "z" + for version in response.get('Versions'): + if version.get('IsDefaultVersion'): + continue + oldest_version_id = min( + oldest_version_id, + version.get('VersionId', 'z'), + ) + + if oldest_version_id == "z": + raise RuntimeError( + "Failed to find the oldest policy in the " + f"list for {ADF_JUMP_MANAGED_POLICY_ARN}", + ) + + LOGGER.debug( + "Deleting policy version %s", + oldest_version_id, + ) + iam.delete_policy_version( + PolicyArn=ADF_JUMP_MANAGED_POLICY_ARN, + VersionId=oldest_version_id, + ) + + +def _get_valid_until(): + return ( + ( + datetime.datetime.now(datetime.UTC) + + datetime.timedelta(hours=POLICY_VALID_DURATION_IN_HOURS) + ) + .isoformat(timespec='seconds') + .replace('+00:00', 'Z') + ) + + +def _generate_empty_policy_document(): + return { + "Version": "2012-10-17", + "Statement": [ + # An empty list of statements is not allowed, hence creating + # a dummy statement that does not have any effect + { + "Sid": "EmptyClause", + "Effect": "Deny", + "Action": [ + # sts:AssumeRoleWithWebIdentity is not allowed by the + # inline policy of the jump role anyway. + # Hence blocking this would not cause any problems. 
+ # + # It should not deny sts:AssumeRole here, as it might + # be granted via the + # GrantOrgWidePrivilegedBootstrapAccessFallback + # statement + "sts:AssumeRoleWithWebIdentity" + ], + "Resource": "*", + } + ] + } + + +def _generate_policy_document(non_bootstrapped_account_ids): + if not non_bootstrapped_account_ids: + # If non_bootstrapped_account_ids is empty, it should switch to + # a meaningless statement instead of stating + # Condition/StringEquals/aws:ResourceAccount == [] + # + # If the value it matches against is empty, it will evaluate to True. + # So an empty list in the condition value evaluates as if the condition + # is not present. See: + # https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html#access-analyzer-reference-policy-checks-suggestion-empty-array-condition + return _generate_empty_policy_document() + return { + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "AllowNonBootstrappedAccounts", + "Effect": "Allow", + "Action": [ + "sts:AssumeRole" + ], + "Resource": [ + f"arn:aws:iam::*:role/{CROSS_ACCOUNT_ACCESS_ROLE_NAME}", + ], + "Condition": { + "DateLessThan": { + # Setting an end-time to this policy, as this function + # is invoked to bootstrap the account. Which hopefully + # turned out working. Hence, in the future, the newly + # bootstrapped accounts should use only the ADF + # Bootstrap Update Deployment role instead. + "aws:CurrentTime": _get_valid_until(), + }, + "StringEquals": { + "aws:ResourceAccount": non_bootstrapped_account_ids, + }, + } + } + ] + } + + +def _update_managed_policy(iam, non_bootstrapped_account_ids): + _delete_old_policy_versions(iam) + iam.create_policy_version( + PolicyArn=ADF_JUMP_MANAGED_POLICY_ARN, + PolicyDocument=json.dumps( + _generate_policy_document(non_bootstrapped_account_ids), + ), + SetAsDefault=True, + ) + + +def _process_update_request(iam, organizations, parameter_store, sts): + non_bootstrapped_account_ids = _get_non_bootstrapped_accounts( + organizations, + sts, + parameter_store, + ) + _update_managed_policy( + iam, + # Limit the list of account ids to add to the policy to the + # MAX_NUMBER_OF_ACCOUNTS as more accounts would not fit in + # a single managed policy. This limit would be 391 accounts. + # If more accounts need to be bootstrapped, it needs to be performed + # in multiple iterations. Once they are all bootstrapped, this list + # will be very small or empty even. + non_bootstrapped_account_ids[:MAX_NUMBER_OF_ACCOUNTS], + ) + return { + "granted_access_to": non_bootstrapped_account_ids[ + :MAX_NUMBER_OF_ACCOUNTS + ], + "of_total_non_bootstrapped": len(non_bootstrapped_account_ids), + "valid_until": ( + _get_valid_until() if non_bootstrapped_account_ids + else None + ), + } + + +def _build_summary(result): + number_of_accounts_granted = len(result.get('granted_access_to', [])) + if number_of_accounts_granted: + return ( + "Task completed. Granted ADF Account-Bootstrapping Jump Role " + "privileged cross-account access " + f"to: {number_of_accounts_granted} " + f"of total {result.get('of_total_non_bootstrapped', 0)} " + "non-bootstrapped AWS accounts." + f"Access granted via the {CROSS_ACCOUNT_ACCESS_ROLE_NAME} role " + f"until {result.get('valid_until')}." + ) + return ( + "Task completed. The ADF Account-Bootstrapping Jump Role does not " + "require privileged cross-account access. Access granted to the ADF " + "Bootstrap Update Deployment role only." 
+ ) + + +def _report_success_and_log( + result, + codepipeline, + codepipeline_job_id, + exec_id, +): + summary = _build_summary(result) + LOGGER.info(summary) + if result.get('granted_access_to', []): + LOGGER.info( + "Specific accounts that were granted access to: %s", + ", ".join(result.get('granted_access_to', [])), + ) + if codepipeline_job_id: + LOGGER.debug( + "Reporting success to CodePipeline %s", + codepipeline_job_id, + ) + codepipeline.put_job_success_result( + jobId=codepipeline_job_id, + executionDetails={ + "externalExecutionId": exec_id, + "summary": summary, + "percentComplete": 100, + } + ) + + +def _report_failure_and_log(error, codepipeline, codepipeline_job_id, exec_id): + LOGGER.exception(error) + summary = ( + "Task failed. Granting the ADF Account-Bootstrapping Jump Role " + f"privileged cross-account access failed due to an error: {error}." + ) + LOGGER.error(summary) + if codepipeline_job_id: + LOGGER.debug( + "Reporting failure to CodePipeline %s", + codepipeline_job_id, + ) + codepipeline.put_job_failure_result( + jobId=codepipeline_job_id, + failureDetails={ + "externalExecutionId": exec_id, + "type": "JobFailed", + "message": summary, + } + ) + return { + "error": summary, + } + + +def _handle_event( + iam, + organizations, + parameter_store, + sts, + codepipeline, + event, + exec_id, +): + codepipeline_job_id = event.get('CodePipeline.job', {}).get('id') + try: + result = _process_update_request( + iam, + organizations, + parameter_store, + sts, + ) + _report_success_and_log( + result, + codepipeline, + codepipeline_job_id, + exec_id, + ) + return { + **event, + "grant_access_result": result, + } + except ClientError as error: + return _report_failure_and_log( + error, + codepipeline, + codepipeline_job_id, + exec_id, + ) + + +def lambda_handler(event, context): + organizations = Organizations( + org_client=ORGANIZATIONS_CLIENT, + tagging_client=TAGGING_CLIENT, + ) + parameter_store = ParameterStore( + region=AWS_REGION, + role=boto3, + ) + sts = STS() + return _handle_event( + iam=IAM_CLIENT, + organizations=organizations, + parameter_store=parameter_store, + sts=sts, + codepipeline=CODEPIPELINE_CLIENT, + event=event, + exec_id=context.log_stream_name, + ) diff --git a/src/lambda_codebase/jump_role_manager/pytest.ini b/src/lambda_codebase/jump_role_manager/pytest.ini new file mode 100644 index 000000000..ac18618ea --- /dev/null +++ b/src/lambda_codebase/jump_role_manager/pytest.ini @@ -0,0 +1,5 @@ +# Copyright Amazon.com Inc. or its affiliates. +# SPDX-License-Identifier: MIT-0 + +[pytest] +testpaths = tests diff --git a/src/lambda_codebase/jump_role_manager/requirements.txt b/src/lambda_codebase/jump_role_manager/requirements.txt new file mode 100644 index 000000000..2542bd380 --- /dev/null +++ b/src/lambda_codebase/jump_role_manager/requirements.txt @@ -0,0 +1,2 @@ +aws-xray-sdk==2.13.0 +pyyaml~=6.0.1 diff --git a/src/lambda_codebase/jump_role_manager/tests/__init__.py b/src/lambda_codebase/jump_role_manager/tests/__init__.py new file mode 100644 index 000000000..014883ae9 --- /dev/null +++ b/src/lambda_codebase/jump_role_manager/tests/__init__.py @@ -0,0 +1,4 @@ +# Copyright Amazon.com Inc. or its affiliates. 
+# SPDX-License-Identifier: MIT-0 + +# pylint: skip-file diff --git a/src/lambda_codebase/jump_role_manager/tests/test_main.py b/src/lambda_codebase/jump_role_manager/tests/test_main.py new file mode 100644 index 000000000..4a87f4169 --- /dev/null +++ b/src/lambda_codebase/jump_role_manager/tests/test_main.py @@ -0,0 +1,1199 @@ +# Copyright Amazon.com Inc. or its affiliates. +# SPDX-License-Identifier: MIT-0 + +# pylint: skip-file + +import datetime +import json +from unittest.mock import Mock, patch, call + +import pytest +from botocore.exceptions import ClientError + +from aws_xray_sdk import global_sdk_config +from main import ( + ADF_JUMP_MANAGED_POLICY_ARN, + ADF_TEST_BOOTSTRAP_ROLE_NAME, + CROSS_ACCOUNT_ACCESS_ROLE_NAME, + INCLUDE_NEW_ACCOUNTS_IF_JOINED_IN_LAST_HOURS, + MAX_NUMBER_OF_ACCOUNTS, + MAX_POLICY_VERSIONS, + POLICY_VALID_DURATION_IN_HOURS, + _build_summary, + _delete_old_policy_versions, + _generate_policy_document, + _get_non_bootstrapped_accounts, + _get_valid_until, + _handle_event, + _process_update_request, + _report_failure_and_log, + _report_success_and_log, + _update_managed_policy, + _verify_bootstrap_exists, +) + + +@pytest.fixture +def mock_codepipeline(): + return Mock() + + +@pytest.fixture +def mock_iam(): + return Mock() + + +@pytest.fixture +def mock_sts(): + return Mock() + + +@pytest.fixture +def mock_parameter_store(): + mock_parameter_store = Mock() + mock_parameter_store.fetch_parameter_accept_not_found.side_effect = [ + "['ou1', 'ou2']", + "safe", + ] + return mock_parameter_store + + +@pytest.fixture +def mock_organizations(): + return Mock() +# --------------------------------------------------------- + + +def test_max_number_of_accounts(): + assert MAX_NUMBER_OF_ACCOUNTS == 391 + + +def test_max_policy_versions(): + assert MAX_POLICY_VERSIONS > 1 + assert MAX_POLICY_VERSIONS < 6 + + +def test_policy_valid_duration_in_hours(): + assert POLICY_VALID_DURATION_IN_HOURS > 0 + assert POLICY_VALID_DURATION_IN_HOURS < 4 +# --------------------------------------------------------- + + +@patch("main._report_failure_and_log") +@patch("main._report_success_and_log") +@patch("main._process_update_request") +def test_handle_event_success( + process_mock, + report_success_mock, + report_failure_mock, + mock_codepipeline, + mock_iam, + mock_sts, + mock_parameter_store, + mock_organizations, +): + """ + Test _handle_event with a successful execution + """ + event = { + "CodePipeline.job": { + "id": "cp-id", + }, + } + process_result = "The Result" + exec_id = "some-exec-id", + process_mock.return_value = process_result + + result = _handle_event( + mock_iam, + mock_organizations, + mock_parameter_store, + mock_sts, + mock_codepipeline, + event, + exec_id, + ) + + assert result == { + **event, + "grant_access_result": process_result, + } + + process_mock.assert_called_once_with( + mock_iam, + mock_organizations, + mock_parameter_store, + mock_sts, + ) + report_success_mock.assert_called_once_with( + process_result, + mock_codepipeline, + "cp-id", + exec_id, + ) + report_failure_mock.assert_not_called() + + +@patch("main._report_failure_and_log") +@patch("main._report_success_and_log") +@patch("main._process_update_request") +def test_handle_event_failure( + process_mock, + report_success_mock, + report_failure_mock, + mock_codepipeline, + mock_iam, + mock_sts, + mock_parameter_store, + mock_organizations, +): + """ + Test _handle_event with a failed execution + """ + event = { + "CodePipeline.job": { + "id": "cp-id", + }, + } + error = ClientError( + 
error_response={'Error': {'Code': 'AccessDenied'}}, + operation_name='SomeOperation' + ) + exec_id = "some-exec-id", + process_mock.side_effect = error + + _handle_event( + mock_iam, + mock_organizations, + mock_parameter_store, + mock_sts, + mock_codepipeline, + event, + exec_id, + ) + + process_mock.assert_called_once_with( + mock_iam, + mock_organizations, + mock_parameter_store, + mock_sts, + ) + report_success_mock.assert_not_called() + report_failure_mock.assert_called_once_with( + error, + mock_codepipeline, + "cp-id", + exec_id, + ) +# --------------------------------------------------------- + + +@patch('main.LOGGER') +@patch('main._build_summary') +def test_report_success_and_log_no_privileged_access_sfn( + summary_mock, + logger, + mock_codepipeline, +): + result = { + "granted_access_to": [], + "of_total_non_bootstrapped": 0, + "valid_until": None, + } + summary = 'The summary' + summary_mock.return_value = summary + + _report_success_and_log( + result, + mock_codepipeline, + None, + 'some-exec-id', + ) + + summary_mock.assert_called_once_with(result) + logger.info.assert_called_once_with(summary) + logger.debug.assert_not_called() + + mock_codepipeline.put_job_success_result.assert_not_called() + mock_codepipeline.put_job_failure_result.assert_not_called() + + +@patch('main.LOGGER') +@patch('main._build_summary') +def test_report_success_and_log_with_privileged_access_sfn( + summary_mock, + logger, + mock_codepipeline, +): + result = { + "granted_access_to": ['111111111111', '222222222222'], + "of_total_non_bootstrapped": 3, + "valid_until": '2024-04-03T14:00:00Z', + } + summary = 'The summary' + summary_mock.return_value = summary + + _report_success_and_log( + result, + mock_codepipeline, + None, + 'some-exec-id', + ) + + summary_mock.assert_called_once_with(result) + logger.info.assert_has_calls([ + call(summary), + call( + "Specific accounts that were granted access to: %s", + "111111111111, 222222222222", + ), + ]) + logger.debug.assert_not_called() + + mock_codepipeline.put_job_success_result.assert_not_called() + mock_codepipeline.put_job_failure_result.assert_not_called() + + +@patch('main.LOGGER') +@patch('main._build_summary') +def test_report_success_and_log_no_privileged_access_codepipeline( + summary_mock, + logger, + mock_codepipeline, +): + result = { + "granted_access_to": [], + "of_total_non_bootstrapped": 0, + "valid_until": None, + } + summary = 'The summary' + summary_mock.return_value = summary + + _report_success_and_log( + result, + mock_codepipeline, + 'cp-id', + 'some-exec-id', + ) + + summary_mock.assert_called_once_with(result) + logger.info.assert_called_once_with(summary) + logger.debug.assert_called_once_with( + "Reporting success to CodePipeline %s", + "cp-id", + ) + + mock_codepipeline.put_job_success_result.assert_called_once_with( + jobId="cp-id", + executionDetails={ + "externalExecutionId": "some-exec-id", + "summary": summary, + "percentComplete": 100, + }, + ) + mock_codepipeline.put_job_failure_result.assert_not_called() + + +@patch('main.LOGGER') +@patch('main._build_summary') +def test_report_success_and_log_with_privileged_access_codepipeline( + summary_mock, + logger, + mock_codepipeline, +): + result = { + "granted_access_to": ['111111111111', '222222222222'], + "of_total_non_bootstrapped": 3, + "valid_until": '2024-04-03T14:00:00Z', + } + summary = 'The summary' + summary_mock.return_value = summary + + _report_success_and_log( + result, + mock_codepipeline, + 'cp-id', + 'some-exec-id', + ) + + 
summary_mock.assert_called_once_with(result) + logger.info.assert_has_calls([ + call(summary), + call( + "Specific accounts that were granted access to: %s", + "111111111111, 222222222222", + ), + ]) + logger.debug.assert_called_once_with( + "Reporting success to CodePipeline %s", + "cp-id", + ) + + mock_codepipeline.put_job_success_result.assert_called_once_with( + jobId="cp-id", + executionDetails={ + "externalExecutionId": "some-exec-id", + "summary": summary, + "percentComplete": 100, + }, + ) + mock_codepipeline.put_job_failure_result.assert_not_called() +# --------------------------------------------------------- + + +@patch('main.LOGGER') +def test_report_failure_and_log_sfn( + logger, + mock_codepipeline, +): + error = ClientError( + error_response={'Error': {'Code': 'AccessDenied'}}, + operation_name='SomeOperation' + ) + summary = ( + "Task failed. Granting the ADF Account-Bootstrapping Jump Role " + f"privileged cross-account access failed due to an error: {error}." + ) + + result = _report_failure_and_log( + error, + mock_codepipeline, + None, + 'some-exec-id', + ) + + assert result == { + "error": summary, + } + + logger.error.assert_called_once_with(summary) + logger.debug.assert_not_called() + + mock_codepipeline.put_job_success_result.assert_not_called() + mock_codepipeline.put_job_failure_result.assert_not_called() + + +@patch('main.LOGGER') +@patch('main._build_summary') +def test_report_failure_and_log_codepipeline( + summary_mock, + logger, + mock_codepipeline, +): + error = ClientError( + error_response={'Error': {'Code': 'AccessDenied'}}, + operation_name='SomeOperation' + ) + summary = ( + "Task failed. Granting the ADF Account-Bootstrapping Jump Role " + f"privileged cross-account access failed due to an error: {error}." + ) + + result = _report_failure_and_log( + error, + mock_codepipeline, + 'cp-id', + 'some-exec-id', + ) + + assert result == { + "error": summary, + } + + logger.error.assert_called_once_with(summary) + logger.debug.assert_called_once_with( + "Reporting failure to CodePipeline %s", + "cp-id", + ) + + mock_codepipeline.put_job_success_result.assert_not_called() + mock_codepipeline.put_job_failure_result.assert_called_once_with( + jobId="cp-id", + failureDetails={ + "externalExecutionId": "some-exec-id", + "type": "JobFailed", + "message": summary, + }, + ) +# --------------------------------------------------------- + + +def test_build_summary_no_privileged_access(): + result = { + "granted_access_to": [], + "of_total_non_bootstrapped": 0, + "valid_until": None, + } + + summary = _build_summary(result) + + assert summary == ( + "Task completed. The ADF Account-Bootstrapping Jump Role does not " + "require privileged cross-account access. Access granted to the ADF " + "Bootstrap Update Deployment role only." + ) + + +def test_build_summary_with_privileged_access(): + result = { + "granted_access_to": ['111111111111', '222222222222'], + "of_total_non_bootstrapped": 3, + "valid_until": '2024-04-03T14:00:00Z', + } + + summary = _build_summary(result) + + assert summary == ( + "Task completed. Granted ADF Account-Bootstrapping Jump Role " + "privileged cross-account access to: 2 " + "of total 3 non-bootstrapped AWS accounts." + f"Access granted via the {CROSS_ACCOUNT_ACCESS_ROLE_NAME} role " + "until 2024-04-03T14:00:00Z." 
+ ) + + +# --------------------------------------------------------- + + +@patch("main._get_valid_until") +@patch("main._update_managed_policy") +@patch("main._get_non_bootstrapped_accounts") +def test_process_update_request_no_non_bootstrapped_accounts( + get_mock, + update_mock, + valid_until_mock, + mock_iam, + mock_sts, + mock_parameter_store, + mock_organizations, +): + """ + Test case when there are no non-bootstrapped accounts + """ + get_mock.return_value = [] + + result = _process_update_request( + mock_iam, + mock_organizations, + mock_parameter_store, + mock_sts, + ) + + assert result == { + "granted_access_to": [], + "of_total_non_bootstrapped": 0, + "valid_until": None, + } + + get_mock.assert_called_once_with( + mock_organizations, + mock_sts, + mock_parameter_store, + ) + update_mock.assert_called_once_with( + mock_iam, + [], + ) + valid_until_mock.assert_not_called() + + +@patch("main._get_valid_until") +@patch("main._update_managed_policy") +@patch("main._get_non_bootstrapped_accounts") +def test_process_update_request_with_non_bootstrapped_accounts( + get_mock, + update_mock, + valid_until_mock, + mock_iam, + mock_sts, + mock_parameter_store, + mock_organizations, +): + """ + Test case when there are non-bootstrapped accounts + """ + non_bootstrapped_account_ids = [ + '111111111111', + '222222222222', + '333333333333', + ] + valid_until = '2024-04-03T14:00:00Z' + valid_until_mock.return_value = valid_until + get_mock.return_value = non_bootstrapped_account_ids + + result = _process_update_request( + mock_iam, + mock_organizations, + mock_parameter_store, + mock_sts, + ) + + assert result == { + "granted_access_to": non_bootstrapped_account_ids, + "of_total_non_bootstrapped": len(non_bootstrapped_account_ids), + "valid_until": valid_until, + } + + get_mock.assert_called_once_with( + mock_organizations, + mock_sts, + mock_parameter_store, + ) + update_mock.assert_called_once_with( + mock_iam, + get_mock.return_value, + ) + valid_until_mock.assert_called_once_with() + + +@patch("main._get_valid_until") +@patch("main._update_managed_policy") +@patch("main._get_non_bootstrapped_accounts") +def test_process_update_request_with_more_non_bootstrapped_accounts_than_max( + get_mock, + update_mock, + valid_until_mock, + monkeypatch, + mock_iam, + mock_sts, + mock_parameter_store, + mock_organizations, +): + """ + Test case when there are more non-bootstrapped accounts than the + configured MAX_NUMBER_OF_ACCOUNTS + """ + non_bootstrapped_account_ids = [ + '111111111111', + '222222222222', + '333333333333', + ] + get_mock.return_value = non_bootstrapped_account_ids + valid_until = '2024-04-03T14:00:00Z' + valid_until_mock.return_value = valid_until + monkeypatch.setattr('main.MAX_NUMBER_OF_ACCOUNTS', 2) + + result = _process_update_request( + mock_iam, + mock_organizations, + mock_parameter_store, + mock_sts, + ) + + assert result == { + "granted_access_to": non_bootstrapped_account_ids[:2], + "of_total_non_bootstrapped": len(non_bootstrapped_account_ids), + "valid_until": valid_until, + } + + get_mock.assert_called_once_with( + mock_organizations, + mock_sts, + mock_parameter_store, + ) + update_mock.assert_called_once_with( + mock_iam, + ['111111111111', '222222222222'], + ) + valid_until_mock.assert_called_once_with() +# --------------------------------------------------------- + + +@patch("main._delete_old_policy_versions") +@patch("main._generate_policy_document") +def test_update_managed_policy(gen_mock, del_mock, mock_iam): + non_bootstrapped_account_ids = [ + '111111111111', + 
'222222222222', + '333333333333', + ] + gen_mock.return_value = { + "Some": "Policy Doc", + } + _update_managed_policy(mock_iam, non_bootstrapped_account_ids) + + del_mock.assert_called_once_with(mock_iam) + mock_iam.create_policy_version.assert_called_once_with( + PolicyArn=ADF_JUMP_MANAGED_POLICY_ARN, + PolicyDocument=json.dumps( + gen_mock.return_value, + ), + SetAsDefault=True, + ) +# --------------------------------------------------------- + + +@patch('main.datetime') +def test_get_valid_until(dt_mock): + mock_utc_now = datetime.datetime(2024, 4, 3, 12, 0, 0, tzinfo=datetime.UTC) + dt_mock.datetime.now.return_value = mock_utc_now + dt_mock.timedelta.return_value = datetime.timedelta( + hours=POLICY_VALID_DURATION_IN_HOURS, + ) + # Shifted by 2 hours due to shift of POLICY_VALID_DURATION_IN_HOURS + expected_end_time = '2024-04-03T14:00:00Z' + assert _get_valid_until() == expected_end_time + + +@patch('main.datetime') +def test_get_valid_until_valid_duration(dt_mock): + mock_utc_now = datetime.datetime(2024, 4, 3, 12, 0, 0, tzinfo=datetime.UTC) + dt_mock.datetime.now.return_value = mock_utc_now + dt_mock.timedelta.return_value = datetime.timedelta( + hours=POLICY_VALID_DURATION_IN_HOURS, + ) + + expected_duration = datetime.timedelta( + hours=POLICY_VALID_DURATION_IN_HOURS, + ) + end_time = datetime.datetime.fromisoformat( + _get_valid_until().replace('Z', '+00:00'), + ) + assert end_time - mock_utc_now == expected_duration +# --------------------------------------------------------- + + +@patch("main._get_valid_until") +def test_generate_policy_document_no_accounts_to_bootstrap(get_mock): + end_time = '2024-04-03T14:00:00Z' + get_mock.return_value = end_time + non_bootstrapped_account_ids = [] + expected_policy = { + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "EmptyClause", + "Effect": "Deny", + "Action": ["sts:AssumeRoleWithWebIdentity"], + "Resource": "*", + } + ] + } + + policy = _generate_policy_document(non_bootstrapped_account_ids) + assert policy == expected_policy + + +@patch("main._get_valid_until") +def test_generate_policy_document(get_mock): + end_time = '2024-04-03T14:00:00Z' + get_mock.return_value = end_time + non_bootstrapped_account_ids = [ + '111111111111', + '222222222222', + '333333333333', + ] + expected_policy = { + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "AllowNonBootstrappedAccounts", + "Effect": "Allow", + "Action": ["sts:AssumeRole"], + "Resource": [ + f"arn:aws:iam::*:role/{CROSS_ACCOUNT_ACCESS_ROLE_NAME}", + ], + "Condition": { + "DateLessThan": { + "aws:CurrentTime": end_time, + }, + "StringEquals": { + "aws:ResourceAccount": non_bootstrapped_account_ids, + }, + } + } + ] + } + + policy = _generate_policy_document(non_bootstrapped_account_ids) + assert policy == expected_policy +# --------------------------------------------------------- + + +def test_delete_old_policy_versions_below_max(mock_iam): + mock_iam.list_policy_versions.return_value = { + "Versions": [ + {"VersionId": "v1", "IsDefaultVersion": True}, + {"VersionId": "v2", "IsDefaultVersion": False}, + {"VersionId": "v3", "IsDefaultVersion": False}, + ] + } + + _delete_old_policy_versions(mock_iam) + + mock_iam.delete_policy_version.assert_not_called() + + +@patch('main.LOGGER') +def test_delete_old_policy_versions_above_max(logger, mock_iam): + mock_iam.list_policy_versions.return_value = { + "Versions": [ + {"VersionId": "v1", "IsDefaultVersion": True}, + {"VersionId": "v2", "IsDefaultVersion": False}, + {"VersionId": "v3", "IsDefaultVersion": False}, + {"VersionId": 
"v4", "IsDefaultVersion": False}, + {"VersionId": "v5", "IsDefaultVersion": False}, + ] + } + + _delete_old_policy_versions(mock_iam) + + mock_iam.delete_policy_version.assert_called_once_with( + PolicyArn=ADF_JUMP_MANAGED_POLICY_ARN, + VersionId="v2", + ) + logger.debug.assert_has_calls([ + call("Checking policy versions for %s", ADF_JUMP_MANAGED_POLICY_ARN), + call( + "Found %d policy versions, which is greater than the defined " + "maximum of %d. Hence going through the list to select one " + "to delete.", + 5, + 4, + ), + call("Deleting policy version %s", "v2"), + ]) + + +@patch('main.LOGGER') +def test_delete_old_policy_versions_above_max_out_of_order(logger, mock_iam): + mock_iam.list_policy_versions.return_value = { + "Versions": [ + {"VersionId": "v2", "IsDefaultVersion": False}, + {"VersionId": "v3", "IsDefaultVersion": False}, + {"VersionId": "v1", "IsDefaultVersion": True}, + {"VersionId": "v4", "IsDefaultVersion": False}, + {"VersionId": "v5", "IsDefaultVersion": False}, + ] + } + + _delete_old_policy_versions(mock_iam) + + mock_iam.delete_policy_version.assert_called_once_with( + PolicyArn=ADF_JUMP_MANAGED_POLICY_ARN, + VersionId="v2", + ) + logger.debug.assert_has_calls([ + call("Checking policy versions for %s", ADF_JUMP_MANAGED_POLICY_ARN), + call( + "Found %d policy versions, which is greater than the defined " + "maximum of %d. Hence going through the list to select one " + "to delete.", + 5, + 4, + ), + call("Deleting policy version %s", "v2"), + ]) + + +@patch('main.LOGGER') +def test_delete_old_policy_versions_should_never_happen(logger, mock_iam): + mock_iam.list_policy_versions.return_value = { + "Versions": [ + {"IsDefaultVersion": False}, + {"IsDefaultVersion": False}, + {"IsDefaultVersion": True}, + {"IsDefaultVersion": False}, + {"IsDefaultVersion": False}, + ] + } + + with pytest.raises(RuntimeError) as excinfo: + _delete_old_policy_versions(mock_iam) + + correct_error_message = ( + "Failed to find the oldest policy in the " + f"list for {ADF_JUMP_MANAGED_POLICY_ARN}" + ) + error_message = str(excinfo.value) + assert error_message.find(correct_error_message) >= 0 + + mock_iam.delete_policy_version.assert_not_called() + logger.debug.assert_has_calls([ + call("Checking policy versions for %s", ADF_JUMP_MANAGED_POLICY_ARN), + call( + "Found %d policy versions, which is greater than the defined " + "maximum of %d. 
Hence going through the list to select one " + "to delete.", + 5, + 4, + ), + ]) +# --------------------------------------------------------- + + +@patch("main._verify_bootstrap_exists") +def test_get_non_bootstrapped_accounts_no_accounts( + verify_mock, + mock_organizations, + mock_sts, + mock_parameter_store, + monkeypatch, +): + # Mock the organizations.get_accounts function to return an empty list + mock_organizations.get_accounts.return_value = [] + management_account_id = '999999999999' + deployment_account_id = '888888888888' + verify_mock.side_effect = ( + lambda sts, account_id: account_id == deployment_account_id + ) + + mock_organizations.get_ou_root_id.return_value = 'r-123' + mock_organizations.get_accounts_for_parent.return_value = [] + monkeypatch.setattr('main.MANAGEMENT_ACCOUNT_ID', management_account_id) + monkeypatch.setattr('main.DEPLOYMENT_ACCOUNT_ID', deployment_account_id) + monkeypatch.setattr('main.SPECIAL_ACCOUNT_IDS', [ + management_account_id, + deployment_account_id, + ]) + + # Call the function with mocked inputs + result = _get_non_bootstrapped_accounts( + mock_organizations, + mock_sts, + mock_parameter_store, + ) + + assert not result + mock_organizations.get_accounts.assert_called_once_with( + protected_ou_ids=['ou1', 'ou2'], + include_root=False, + ) + verify_mock.assert_called_once_with( + mock_sts, + deployment_account_id, + ) + mock_parameter_store.fetch_parameter_accept_not_found.assert_has_calls([ + call(name='protected', default_value='[]'), + call(name='moves/to_root/action', default_value='safe') + ]) + + +@patch("main._verify_bootstrap_exists") +def test_get_non_bootstrapped_accounts_only_deployment_account( + verify_mock, + mock_organizations, + mock_sts, + mock_parameter_store, + monkeypatch, +): + management_account_id = '999999999999' + deployment_account_id = '888888888888' + mock_organizations.get_accounts.return_value = [ + { + "Id": deployment_account_id, + }, + ] + verify_mock.side_effect = ( + lambda sts, account_id: account_id != deployment_account_id + ) + + mock_organizations.get_ou_root_id.return_value = 'r-123' + mock_organizations.get_accounts_for_parent.return_value = [] + monkeypatch.setattr('main.MANAGEMENT_ACCOUNT_ID', management_account_id) + monkeypatch.setattr('main.DEPLOYMENT_ACCOUNT_ID', deployment_account_id) + monkeypatch.setattr('main.SPECIAL_ACCOUNT_IDS', [ + management_account_id, + deployment_account_id, + ]) + + # Call the function with mocked inputs + result = _get_non_bootstrapped_accounts( + mock_organizations, + mock_sts, + mock_parameter_store, + ) + + assert [deployment_account_id] == result + mock_organizations.get_accounts.assert_called_once_with( + protected_ou_ids=['ou1', 'ou2'], + include_root=False, + ) + verify_mock.assert_called_once_with( + mock_sts, + deployment_account_id, + ) + + +@patch("main._verify_bootstrap_exists") +def test_get_non_bootstrapped_accounts_all_bootstrapped( + verify_mock, + mock_organizations, + mock_sts, + mock_parameter_store, + monkeypatch, +): + management_account_id = '999999999999' + deployment_account_id = '888888888888' + # Mock the organizations.get_accounts function to return an empty list + mock_organizations.get_accounts.return_value = list(map( + lambda account_id: { + "Id": account_id, + }, + [ + management_account_id, + '333333333333', + deployment_account_id, + '111111111111', + '222222222222', + ], + )) + verify_mock.return_value = True + + mock_organizations.get_ou_root_id.return_value = 'r-123' + mock_organizations.get_accounts_for_parent.return_value = [] + 
monkeypatch.setattr('main.MANAGEMENT_ACCOUNT_ID', management_account_id) + monkeypatch.setattr('main.DEPLOYMENT_ACCOUNT_ID', deployment_account_id) + monkeypatch.setattr('main.SPECIAL_ACCOUNT_IDS', [ + management_account_id, + deployment_account_id, + ]) + + # Call the function with mocked inputs + result = _get_non_bootstrapped_accounts( + mock_organizations, + mock_sts, + mock_parameter_store, + ) + + assert not result + mock_organizations.get_accounts.assert_called_once_with( + protected_ou_ids=['ou1', 'ou2'], + include_root=False, + ) + verify_mock.assert_has_calls( + [ + call(mock_sts, deployment_account_id), + call(mock_sts, '111111111111'), + call(mock_sts, '222222222222'), + call(mock_sts, '333333333333'), + ], + any_order=True, + ) + + +@patch("main._verify_bootstrap_exists") +def test_get_non_bootstrapped_accounts_none_bootstrapped( + verify_mock, + mock_organizations, + mock_sts, + mock_parameter_store, + monkeypatch, +): + management_account_id = '999999999999' + deployment_account_id = '888888888888' + # Mock the organizations.get_accounts function to return an empty list + mock_organizations.get_accounts.return_value = list(map( + lambda account_id: { + "Id": account_id, + }, + [ + management_account_id, + '333333333333', + deployment_account_id, + '111111111111', + '222222222222', + ], + )) + protected_ou_ids = ['ou1', 'ou2'] + verify_mock.return_value = False + + mock_organizations.get_ou_root_id.return_value = 'r-123' + mock_organizations.get_accounts_for_parent.return_value = [] + monkeypatch.setattr('main.MANAGEMENT_ACCOUNT_ID', management_account_id) + monkeypatch.setattr('main.DEPLOYMENT_ACCOUNT_ID', deployment_account_id) + monkeypatch.setattr('main.SPECIAL_ACCOUNT_IDS', [ + management_account_id, + deployment_account_id, + ]) + + # Call the function with mocked inputs + result = _get_non_bootstrapped_accounts( + mock_organizations, + mock_sts, + mock_parameter_store, + ) + + assert result == [ + # In this specific order: + deployment_account_id, + '111111111111', + '222222222222', + '333333333333', + ] + mock_organizations.get_accounts.assert_called_once_with( + protected_ou_ids=protected_ou_ids, + include_root=False, + ) + verify_mock.assert_has_calls( + [ + call(mock_sts, deployment_account_id), + call(mock_sts, '111111111111'), + call(mock_sts, '222222222222'), + call(mock_sts, '333333333333'), + ], + any_order=True, + ) + + +@pytest.mark.parametrize( + "move_action, include_root", + [ + pytest.param("remove-base", True), + pytest.param("remove_base", True), + pytest.param("safe", False), + pytest.param(None, False), + pytest.param("", False), + pytest.param("other", False), + ] +) +@patch("main._verify_bootstrap_exists") +def test_get_non_bootstrapped_accounts_include_root( + verify_mock, + move_action, + include_root, + mock_organizations, + mock_sts, + mock_parameter_store, + monkeypatch, +): + mock_parameter_store.fetch_parameter_accept_not_found = Mock() + mock_parameter_store.fetch_parameter_accept_not_found.side_effect = [ + "['ou1', 'ou2']", + move_action, + ] + management_account_id = '999999999999' + deployment_account_id = '888888888888' + some_non_bootstrapped_root_ou_account_id = '111111111111' + new_non_bootstrapped_root_ou_account_id = '666666666666' + bootstrapped_root_ou_account_id = '444444444444' + root_ou_id = 'r-abc' + new_account_joined_date = ( + datetime.datetime.now(datetime.UTC) + - datetime.timedelta( + hours=INCLUDE_NEW_ACCOUNTS_IF_JOINED_IN_LAST_HOURS, + ) + + datetime.timedelta( + minutes=1, + ) + ) + old_account_joined_date = ( + 
datetime.datetime.now(datetime.UTC) + - datetime.timedelta( + hours=INCLUDE_NEW_ACCOUNTS_IF_JOINED_IN_LAST_HOURS, + minutes=1, + ) + ) + mock_organizations.get_ou_root_id.return_value = root_ou_id + mock_organizations.get_accounts_for_parent.return_value = list(map( + lambda account_id: { + "Id": account_id, + "JoinedTimestamp": ( + old_account_joined_date + if account_id == some_non_bootstrapped_root_ou_account_id + else new_account_joined_date + ), + }, + [ + some_non_bootstrapped_root_ou_account_id, + new_non_bootstrapped_root_ou_account_id, + bootstrapped_root_ou_account_id, + ], + )) + # Mock the organizations.get_accounts function to return the member accounts + mock_organizations.get_accounts.return_value = list(map( + lambda account_id: { + "Id": account_id, + }, + [ + management_account_id, + '333333333333', + deployment_account_id, + '555555555555', + '222222222222', + ], + )) + protected_ou_ids = ['ou1', 'ou2'] + bootstrapped_account_ids = [ + bootstrapped_root_ou_account_id, + '555555555555', + ] + verify_mock.side_effect = lambda _, x: x in bootstrapped_account_ids + + monkeypatch.setattr('main.MANAGEMENT_ACCOUNT_ID', management_account_id) + monkeypatch.setattr('main.DEPLOYMENT_ACCOUNT_ID', deployment_account_id) + monkeypatch.setattr('main.SPECIAL_ACCOUNT_IDS', [ + management_account_id, + deployment_account_id, + ]) + + # Call the function with mocked inputs + result = _get_non_bootstrapped_accounts( + mock_organizations, + mock_sts, + mock_parameter_store, + ) + + expected_result = [ + # In this specific order: + deployment_account_id, + '222222222222', + '333333333333', + ] + if include_root: + expected_result.append(bootstrapped_root_ou_account_id) + expected_result.append(new_non_bootstrapped_root_ou_account_id) + assert result == expected_result + + mock_organizations.get_accounts.assert_called_once_with( + protected_ou_ids=protected_ou_ids, + include_root=False, + ) + mock_organizations.get_ou_root_id.assert_called_once_with() + mock_organizations.get_accounts_for_parent.assert_called_once_with( + root_ou_id, + ) + + verify_call_list = [ + call(mock_sts, deployment_account_id), + call(mock_sts, '222222222222'), + call(mock_sts, '333333333333'), + call(mock_sts, '555555555555'), + call(mock_sts, new_non_bootstrapped_root_ou_account_id), + ] + if include_root: + verify_call_list.append( + call(mock_sts, some_non_bootstrapped_root_ou_account_id), + ) + verify_call_list.append( + call(mock_sts, bootstrapped_root_ou_account_id), + ) + verify_mock.assert_has_calls(verify_call_list, any_order=True) +# --------------------------------------------------------- + + +@patch('main.LOGGER') +def test_verify_bootstrap_exists_success(logger, mock_sts): + # Mocking the successful case + mock_sts.assume_cross_account_role.return_value = {} + + assert _verify_bootstrap_exists(mock_sts, '111111111111') + logger.debug.assert_not_called() + + +@patch('main.LOGGER') +def test_verify_bootstrap_exists_failure(logger, mock_sts): + account_id = '111111111111' + error = ClientError( + error_response={'Error': {'Code': 'AccessDenied'}}, + operation_name='AssumeRole' + ) + mock_sts.assume_cross_account_role.side_effect = error + + assert not _verify_bootstrap_exists(mock_sts, account_id) + logger.debug.assert_called_once_with( + "Could not assume into %s in %s due to %s", + ADF_TEST_BOOTSTRAP_ROLE_NAME, + account_id, + error, + ) diff --git a/src/lambda_codebase/moved_to_root.py b/src/lambda_codebase/moved_to_root.py index 4b3a0f8cd..0fc3691e6 100644 --- a/src/lambda_codebase/moved_to_root.py +++
b/src/lambda_codebase/moved_to_root.py @@ -21,29 +21,20 @@ LOGGER = configure_logger(__name__) REGION_DEFAULT = os.environ.get('AWS_REGION') +MANAGEMENT_ACCOUNT_ID = os.getenv('MANAGEMENT_ACCOUNT_ID') S3_BUCKET = os.environ.get("S3_BUCKET_NAME") +ADF_PARAM_DESCRIPTION = 'Used by The AWS Deployment Framework' -def worker_thread(sts, region, account_id, role, event): - partition = get_partition(REGION_DEFAULT) - - role = sts.assume_cross_account_role( - f'arn:{partition}:iam::{account_id}:role/{role}', - 'remove_base' - ) +def worker_thread(region, account_id, role, event): parameter_store = ParameterStore(region, role) paginator = parameter_store.client.get_paginator('describe_parameters') page_iterator = paginator.paginate() for page in page_iterator: for parameter in page['Parameters']: - is_adf_param = ( - 'Used by The AWS Deployment Framework' in parameter.get( - 'Description', - '', - ) - ) - if is_adf_param: + description = parameter.get('Description', '') + if ADF_PARAM_DESCRIPTION in description: parameter_store.delete_parameter(parameter.get('Name')) cloudformation = CloudFormation( @@ -59,15 +50,33 @@ def worker_thread(sts, region, account_id, role, event): return cloudformation.delete_all_base_stacks() -def remove_base(account_id, regions, role, event): +def remove_base(account_id, regions, privileged_role_name, event): sts = STS() threads = [] - for region in list(set([event.get('deployment_account_region')] + regions)): + partition = get_partition(REGION_DEFAULT) + + role = sts.assume_bootstrap_deployment_role( + partition, + MANAGEMENT_ACCOUNT_ID, + account_id, + privileged_role_name, + 'remove_base', + ) + + regions = list( + # Set to ensure we only have one of each + set( + # Make sure the deployment_account_region is in the list of + # regions: + [event.get('deployment_account_region')] + + regions + ) + ) + for region in regions: thread = PropagatingThread( target=worker_thread, args=( - sts, region, account_id, role, @@ -91,23 +100,20 @@ def execute_move_action(action, account_id, parameter_store, event): or [] ) - role = parameter_store.fetch_parameter('cross_account_access_role') - return remove_base(account_id, regions, role, event) + privileged_role_name = parameter_store.fetch_parameter( + 'cross_account_access_role', + ) + return remove_base(account_id, regions, privileged_role_name, event) return True def lambda_handler(event, _): parameter_store = ParameterStore(REGION_DEFAULT, boto3) - configuration_options = ast.literal_eval( - parameter_store.fetch_parameter('config') + action = parameter_store.fetch_parameter_accept_not_found( + name='moves/to_root/action', + default_value='safe', ) - to_root_option = list(filter( - lambda option: option.get("name", []) == "to-root", - configuration_options.get('moves') - )) - - action = to_root_option.pop().get('action') account_id = event.get('account_id') execute_move_action(action, account_id, parameter_store, event) diff --git a/src/lambda_codebase/organization/handler.py b/src/lambda_codebase/organization/handler.py index 76a76c723..afa492697 100644 --- a/src/lambda_codebase/organization/handler.py +++ b/src/lambda_codebase/organization/handler.py @@ -32,6 +32,8 @@ def lambda_handler(event, _context, prior_error=err): "StackId": event["StackId"], "Reason": str(prior_error), } + if not event["ResponseURL"].lower().startswith('http'): + raise ValueError('ResponseURL is forbidden') from None with urlopen( Request( event["ResponseURL"], diff --git a/src/lambda_codebase/organization/main.py 
b/src/lambda_codebase/organization/main.py index 241b6cc6d..6b37ae662 100644 --- a/src/lambda_codebase/organization/main.py +++ b/src/lambda_codebase/organization/main.py @@ -13,6 +13,7 @@ import json import boto3 from cfn_custom_resource import ( # pylint: disable=unused-import + lambda_handler, create, update, delete, diff --git a/src/lambda_codebase/organization_unit/handler.py b/src/lambda_codebase/organization_unit/handler.py index 2019c557a..cfef8d137 100644 --- a/src/lambda_codebase/organization_unit/handler.py +++ b/src/lambda_codebase/organization_unit/handler.py @@ -29,6 +29,8 @@ def lambda_handler(event, _context, prior_error=err): "StackId": event["StackId"], "Reason": str(prior_error), } + if not event["ResponseURL"].lower().startswith('http'): + raise ValueError('ResponseURL is forbidden') from None with urlopen( Request( event["ResponseURL"], diff --git a/src/lambda_codebase/organization_unit/main.py b/src/lambda_codebase/organization_unit/main.py index 52bf9876d..8e9b4017c 100644 --- a/src/lambda_codebase/organization_unit/main.py +++ b/src/lambda_codebase/organization_unit/main.py @@ -14,6 +14,7 @@ import time import boto3 from cfn_custom_resource import ( # pylint: disable=unused-import + lambda_handler, create, update, delete, diff --git a/src/lambda_codebase/wait_until_complete.py b/src/lambda_codebase/wait_until_complete.py index a957ca457..0d5949142 100644 --- a/src/lambda_codebase/wait_until_complete.py +++ b/src/lambda_codebase/wait_until_complete.py @@ -22,6 +22,7 @@ S3_BUCKET = os.environ["S3_BUCKET_NAME"] REGION_DEFAULT = os.environ["AWS_REGION"] +MANAGEMENT_ACCOUNT_ID = os.getenv('MANAGEMENT_ACCOUNT_ID') LOGGER = configure_logger(__name__) @@ -64,9 +65,12 @@ def lambda_handler(event, _): partition = get_partition(REGION_DEFAULT) cross_account_access_role = event.get('cross_account_access_role') - role = sts.assume_cross_account_role( - f'arn:{partition}:iam::{account_id}:role/{cross_account_access_role}', - 'management' + role = sts.assume_bootstrap_deployment_role( + partition, + MANAGEMENT_ACCOUNT_ID, + account_id, + cross_account_access_role, + 'management', ) s3 = S3(REGION_DEFAULT, S3_BUCKET) diff --git a/src/template.yml b/src/template.yml index f772b75dd..f1118633c 100644 --- a/src/template.yml +++ b/src/template.yml @@ -18,13 +18,13 @@ Metadata: Labels: ["adf", "aws-deployment-framework", "multi-account", "cicd", "devops"] HomePageUrl: https://github.com/awslabs/aws-deployment-framework - SemanticVersion: 3.2.0 + SemanticVersion: 4.0.0 SourceCodeUrl: https://github.com/awslabs/aws-deployment-framework Mappings: Metadata: ADF: - Version: 3.2.0 + Version: 4.0.0 Parameters: CrossAccountAccessRoleName: @@ -111,6 +111,54 @@ Parameters: - ERROR - CRITICAL + AllowBootstrappingOfManagementAccount: + Description: >- + Would ADF need to bootstrap the Management Account of your AWS + Organization too? If so, set this to "Yes". + + Only set this to "Yes" if a pipeline will deploy to the management + account. Or if you need some of the bootstrap resources in the + management account too. + + Please be careful: if you plan to set this to "Yes", make sure + that the management account is in a dedicated organization unit + that has bare minimum IAM permissions to deploy. Only grant access + to resource types that are required using least-privilege! + + If you set/leave this at "No", make sure the management organization is + in the root of your AWS Organization structure. 
Or in a dedicated + organization unit and add the organization unit id to the protected + organization unit list via the (ProtectedOUs) parameter. + + If not, leave at the default of "No". + Valid options are: Yes, No + Type: String + Default: "No" + AllowedValues: + - "Yes" + - "No" + + GrantOrgWidePrivilegedBootstrapAccessUntil: + Description: >- + When set at a date in the future, ADF will use the privileged + cross-account access role to bootstrap the accounts. This is useful + in situations where you are reworking the IAM permissions of the + ADF bootstrap stacks (global-iam.yml). In some cases, setting this + in the future might be required to upgrade ADF to newer versions of + ADF too. If an ADF upgrade requires this, it will be clearly described + in the CHANGELOG.md file and the release notes. + + Leave at the configured default to disable privileged bootstrap + access for all accounts. When the date is in the past, only the AWS + Accounts that are accessible to ADF but are not bootstrapped yet will + be allowed access via the privileged cross-account access role. + + Date time format according to ISO 8601 + https://www.w3.org/TR/NOTE-datetime + Type: String + Default: "1900-12-31T23:59:59Z" + AllowedPattern: "\\d{4}-[0-1]\\d-[0-3]\\dT[0-2]\\d:[0-5]\\d:[0-5]\\d([+-][0-2]\\d:[0-5]\\d|Z)" + Globals: Function: Architectures: @@ -119,6 +167,11 @@ Globals: Runtime: python3.12 Timeout: 300 +Conditions: + CreateCrossAccountAccessRole: !Equals + - !Ref AllowBootstrappingOfManagementAccount + - "Yes" + Resources: BootstrapTemplatesBucketPolicy: Type: AWS::S3::BucketPolicy @@ -126,27 +179,46 @@ Resources: Bucket: !Ref "BootstrapTemplatesBucket" PolicyDocument: Statement: - - Action: - - s3:Get* - - s3:PutReplicationConfiguration - - s3:List* + - Sid: "AllowBootstrapDeployments" + Action: + - s3:GetObject Effect: Allow + Resource: + - !Sub arn:${AWS::Partition}:s3:::${BootstrapTemplatesBucket}/adf-bootstrap/* + Principal: + AWS: "*" Condition: StringEquals: - aws:PrincipalOrgID: !GetAtt Organization.OrganizationId + "aws:PrincipalOrgID": + - !GetAtt Organization.OrganizationId + ArnLike: + "aws:PrincipalArn": + - !Sub "arn:${AWS::Partition}:iam::*:role/${CrossAccountAccessRoleName}" + - !Sub "arn:${AWS::Partition}:iam::*:role/adf/bootstrap/adf-bootstrap-update-deployment-role" + - Sid: "DenyInsecureConnections" + Action: + - "s3:*" + Effect: Deny + Condition: + Bool: + aws:SecureTransport: "false" Resource: - - !GetAtt BootstrapTemplatesBucket.Arn - - !Sub "${BootstrapTemplatesBucket.Arn}/*" + - !Sub arn:${AWS::Partition}:s3:::${BootstrapTemplatesBucket} + - !Sub arn:${AWS::Partition}:s3:::${BootstrapTemplatesBucket}/* Principal: AWS: "*" - - Action: - - s3:PutObject* - Effect: Allow + - Sid: "DenyInsecureTLS" + Action: + - "s3:*" + Effect: Deny + Condition: + NumericLessThan: + "s3:TlsVersion": "1.2" Resource: - - !GetAtt BootstrapTemplatesBucket.Arn - - !Sub "${BootstrapTemplatesBucket.Arn}/*" + - !Sub arn:${AWS::Partition}:s3:::${BootstrapTemplatesBucket} + - !Sub arn:${AWS::Partition}:s3:::${BootstrapTemplatesBucket}/* Principal: - AWS: !Ref AWS::AccountId + AWS: "*" BootstrapArtifactStorageBucket: Type: AWS::S3::Bucket @@ -191,9 +263,11 @@ Resources: RestrictPublicBuckets: true ### Account processing begin - AccountProcessingLambdaRole: + AccountFileProcessingLambdaRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-management/" + RoleName: "adf-account-management-account-file-processing" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -203,11 +277,18 @@ 
Resources: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" - - AccountProcessingLambdaRolePolicy: + ManagedPolicyArns: + - !Ref LambdaLayerPolicy + - !Ref AccountAccessRolePolicy + - !Ref AccountProcessingLambdaBasePolicy + + AccountFileProcessingLambdaPolicy: + # Added as an IAM Managed Policy to break the circular dependency chain + # This should not be added as a DependsOn on the lambda, by the time objects + # are written in the bucket this policy is in effect already. Type: "AWS::IAM::ManagedPolicy" Properties: - Description: "Policy to allow the account file processing Lambda to perform actions" + Description: "Policy to process accounts as configured in the bucket" PolicyDocument: Version: "2012-10-17" Statement: @@ -222,16 +303,16 @@ Resources: - Effect: "Allow" Action: "s3:ListBucket" Resource: !GetAtt ADFAccountBucket.Arn - - Effect: "Allow" - Action: "states:StartExecution" - Resource: !Ref AccountManagementStateMachine - Effect: "Allow" Action: "s3:GetObject" Resource: !Sub "${ADFAccountBucket.Arn}/*" + - Effect: "Allow" + Action: "states:StartExecution" + Resource: !Ref AccountManagementStateMachine Roles: - - !Ref AccountProcessingLambdaRole + - !Ref AccountFileProcessingLambdaRole - ADFAccountAccessRolePolicy: + AccountAccessRolePolicy: Type: "AWS::IAM::ManagedPolicy" Properties: Description: "Additional policy that allows a lambda to assume the cross account access role" @@ -241,14 +322,10 @@ Resources: - Effect: Allow Action: - "sts:AssumeRole" - Resource: !Sub "arn:${AWS::Partition}:iam::*:role/${CrossAccountAccessRoleName}" - Roles: - - !Ref AccountProcessingLambdaRole - - !Ref GetAccountRegionsFunctionRole - - !Ref DeleteDefaultVPCFunctionRole - - !Ref AccountAliasConfigFunctionRole + Resource: + - !GetAtt CrossAccountJumpRoleArn.Value - ADFAccountProcessingLambdaBasePolicy: + AccountProcessingLambdaBasePolicy: Type: "AWS::IAM::ManagedPolicy" Properties: Description: "Base policy for all ADF account processing lambdas" @@ -263,20 +340,11 @@ Resources: - "xray:PutTelemetryRecords" - "xray:PutTraceSegments" Resource: "*" - Roles: - - !Ref AccountProcessingLambdaRole - - !Ref GetAccountRegionsFunctionRole - - !Ref DeleteDefaultVPCFunctionRole - - !Ref AccountAliasConfigFunctionRole - - !Ref AccountRegionConfigFunctionRole - - !Ref AccountTagConfigFunctionRole - - !Ref AccountOUConfigFunctionRole - - !Ref CreateAccountFunctionRole - - !Ref RegisterAccountForSupportFunctionRole - - StateMachineExecutionRole: + + AccountManagementStateMachineExecutionRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -285,7 +353,9 @@ Resources: Service: - states.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-management/" + Condition: + ArnEquals: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:adf-account-management" Policies: - PolicyName: "adf-state-machine-role-policy" PolicyDocument: @@ -308,12 +378,19 @@ Resources: - !GetAtt GetAccountRegionsFunction.Arn - !GetAtt DeleteDefaultVPCFunction.Arn - !GetAtt AccountRegionConfigFunction.Arn + - !GetAtt JumpRoleApplication.Outputs.ManagerFunctionArn AccountFileProcessingFunction: Type: 'AWS::Serverless::Function' Properties: Handler: process_account_files.lambda_handler - Description: "ADF Lambda Function - Account File Processing" + Description: >- + ADF - Account Management - Account File Event Processing. 
+ + Responsible to kick-off the account management state machine. + Triggers when new account configurations were added in the + adf-accounts folder of the aws-deployment-framework-bootstrap + repository. CodeUri: lambda_codebase/account_processing Tracing: Active Layers: @@ -325,9 +402,9 @@ Resources: ADF_VERSION: !FindInMap ['Metadata', 'ADF', 'Version'] ADF_LOG_LEVEL: !Ref LogLevel ACCOUNT_MANAGEMENT_STATEMACHINE_ARN: !Ref AccountManagementStateMachine - ADF_ROLE_NAME: !Ref CrossAccountAccessRoleName - FunctionName: AccountFileProcessorFunction - Role: !GetAtt AccountProcessingLambdaRole.Arn + ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME: !Ref CrossAccountAccessRoleName + FunctionName: adf-account-management-file-event-processor + Role: !GetAtt AccountFileProcessingLambdaRole.Arn Events: S3YmlSuffixEvent: Type: S3 @@ -354,9 +431,11 @@ Resources: Metadata: BuildMethod: python3.12 - AccountAliasConfigFunctionRole: + AccountAliasConfigLambdaRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-management/" + RoleName: "adf-account-management-config-account-alias" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -365,13 +444,16 @@ Resources: Service: - lambda.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-management/" + ManagedPolicyArns: + - !Ref LambdaLayerPolicy + - !Ref AccountAccessRolePolicy + - !Ref AccountProcessingLambdaBasePolicy AccountAliasConfigFunction: Type: 'AWS::Serverless::Function' Properties: Handler: configure_account_alias.lambda_handler - Description: "ADF Lambda Function - Account Alias Configuration" + Description: ADF - Account Management - Account Alias Configuration CodeUri: lambda_codebase/account_processing Tracing: Active Layers: @@ -383,15 +465,16 @@ Resources: ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ['Metadata', 'ADF', 'Version'] ADF_LOG_LEVEL: !Ref LogLevel - ADF_ROLE_NAME: !Ref CrossAccountAccessRoleName - FunctionName: AccountAliasConfigurationFunction - Role: !GetAtt AccountAliasConfigFunctionRole.Arn + ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME: !Ref CrossAccountAccessRoleName + FunctionName: adf-account-management-config-alias + Role: !GetAtt AccountAliasConfigLambdaRole.Arn Metadata: BuildMethod: python3.12 AccountTagConfigFunctionRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -400,7 +483,9 @@ Resources: Service: - lambda.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-management/" + ManagedPolicyArns: + - !Ref LambdaLayerPolicy + - !Ref AccountProcessingLambdaBasePolicy Policies: - PolicyName: "adf-lambda-tag-resource-policy" PolicyDocument: @@ -410,13 +495,14 @@ Resources: Action: - "organizations:TagResource" - "organizations:UntagResource" - Resource: "*" + Resource: + - !Sub "arn:${AWS::Partition}:organizations::${AWS::AccountId}:account/${Organization.OrganizationId}/*" AccountTagConfigFunction: Type: 'AWS::Serverless::Function' Properties: Handler: configure_account_tags.lambda_handler - Description: "ADF Lambda Function - Account Tag Configuration" + Description: ADF - Account Management - Account Tag Configuration CodeUri: lambda_codebase/account_processing Tracing: Active Layers: @@ -427,7 +513,7 @@ Resources: ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ['Metadata', 'ADF', 'Version'] ADF_LOG_LEVEL: !Ref LogLevel - FunctionName: AccountTagConfigurationFunction + FunctionName: 
adf-account-management-config-tags Role: !GetAtt AccountTagConfigFunctionRole.Arn Metadata: BuildMethod: python3.12 @@ -435,6 +521,7 @@ Resources: AccountRegionConfigFunctionRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -443,7 +530,9 @@ Resources: Service: - lambda.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-management/" + ManagedPolicyArns: + - !Ref LambdaLayerPolicy + - !Ref AccountProcessingLambdaBasePolicy Policies: - PolicyName: "adf-lambda-account-region-resource-policy" PolicyDocument: @@ -464,7 +553,7 @@ Resources: Type: 'AWS::Serverless::Function' Properties: Handler: configure_account_regions.lambda_handler - Description: "ADF Lambda Function - Account region Configuration" + Description: ADF - Account Management - Account Region Configuration CodeUri: lambda_codebase/account_processing Tracing: Active Layers: @@ -475,7 +564,7 @@ Resources: ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ['Metadata', 'ADF', 'Version'] ADF_LOG_LEVEL: !Ref LogLevel - FunctionName: AccountRegionConfigurationFunction + FunctionName: adf-account-management-config-region Role: !GetAtt AccountRegionConfigFunctionRole.Arn Metadata: BuildMethod: python3.12 @@ -484,7 +573,7 @@ Resources: Type: 'AWS::Serverless::Function' Properties: Handler: configure_account_ou.lambda_handler - Description: "ADF Lambda Function - Account OU Configuration" + Description: ADF - Account Management - Account OU Configuration CodeUri: lambda_codebase/account_processing Tracing: Active Layers: @@ -495,7 +584,7 @@ Resources: ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ['Metadata', 'ADF', 'Version'] ADF_LOG_LEVEL: !Ref LogLevel - FunctionName: AccountOUConfigurationFunction + FunctionName: adf-account-management-config-ou Role: !GetAtt AccountOUConfigFunctionRole.Arn Metadata: BuildMethod: python3.12 @@ -503,6 +592,7 @@ Resources: AccountOUConfigFunctionRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -511,7 +601,9 @@ Resources: Service: - lambda.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-management/" + ManagedPolicyArns: + - !Ref LambdaLayerPolicy + - !Ref AccountProcessingLambdaBasePolicy Policies: - PolicyName: "adf-lambda-policy-move-ou" PolicyDocument: @@ -531,7 +623,7 @@ Resources: Type: 'AWS::Serverless::Function' Properties: Handler: get_account_regions.lambda_handler - Description: "ADF Lambda Function - Get Default Regions for an account" + Description: ADF - Account Management - Get Default Regions CodeUri: lambda_codebase/account_processing Tracing: Active Layers: @@ -543,15 +635,17 @@ Resources: ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ['Metadata', 'ADF', 'Version'] ADF_LOG_LEVEL: !Ref LogLevel - ADF_ROLE_NAME: !Ref CrossAccountAccessRoleName - FunctionName: GetAccountRegionsFunction - Role: !GetAtt GetAccountRegionsFunctionRole.Arn + ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME: !Ref CrossAccountAccessRoleName + FunctionName: adf-account-management-get-regions + Role: !GetAtt GetAccountRegionsLambdaRole.Arn Metadata: BuildMethod: python3.12 - GetAccountRegionsFunctionRole: + GetAccountRegionsLambdaRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-management/" + RoleName: "adf-account-management-get-account-regions" AssumeRolePolicyDocument: Version: 
"2012-10-17" Statement: @@ -560,13 +654,16 @@ Resources: Service: - lambda.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-management/" + ManagedPolicyArns: + - !Ref LambdaLayerPolicy + - !Ref AccountAccessRolePolicy + - !Ref AccountProcessingLambdaBasePolicy DeleteDefaultVPCFunction: Type: 'AWS::Serverless::Function' Properties: Handler: delete_default_vpc.lambda_handler - Description: "ADF Lambda Function - Delete the default VPC for a region" + Description: ADF - Account Management - Delete the Default VPCs CodeUri: lambda_codebase/account_processing Tracing: Active Layers: @@ -578,15 +675,17 @@ Resources: ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ['Metadata', 'ADF', 'Version'] ADF_LOG_LEVEL: !Ref LogLevel - ADF_ROLE_NAME: !Ref CrossAccountAccessRoleName - FunctionName: DeleteDefaultVPCFunction - Role: !GetAtt DeleteDefaultVPCFunctionRole.Arn + ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME: !Ref CrossAccountAccessRoleName + FunctionName: adf-account-management-delete-default-vpc + Role: !GetAtt DeleteDefaultVPCLambdaRole.Arn Metadata: BuildMethod: python3.12 - DeleteDefaultVPCFunctionRole: + DeleteDefaultVPCLambdaRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-management/" + RoleName: "adf-account-management-delete-default-vpc" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -595,13 +694,16 @@ Resources: Service: - lambda.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-management/" + ManagedPolicyArns: + - !Ref LambdaLayerPolicy + - !Ref AccountAccessRolePolicy + - !Ref AccountProcessingLambdaBasePolicy CreateAccountFunction: Type: 'AWS::Serverless::Function' Properties: Handler: create_account.lambda_handler - Description: "ADF Lambda Function - Create an account" + Description: ADF - Account Management - Create Account CodeUri: lambda_codebase/account_processing Tracing: Active Layers: @@ -612,8 +714,8 @@ Resources: ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ['Metadata', 'ADF', 'Version'] ADF_LOG_LEVEL: !Ref LogLevel - ADF_ROLE_NAME: !Ref CrossAccountAccessRoleName - FunctionName: CreateAccountFunction + ADF_PRIVILEGED_CROSS_ACCOUNT_ROLE_NAME: !Ref CrossAccountAccessRoleName + FunctionName: adf-account-management-create-account Role: !GetAtt CreateAccountFunctionRole.Arn Metadata: BuildMethod: python3.12 @@ -621,6 +723,7 @@ Resources: CreateAccountFunctionRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -629,7 +732,9 @@ Resources: Service: - lambda.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-management/" + ManagedPolicyArns: + - !Ref LambdaLayerPolicy + - !Ref AccountProcessingLambdaBasePolicy Policies: - PolicyName: "adf-lambda-create-account-policy" PolicyDocument: @@ -645,7 +750,7 @@ Resources: Type: 'AWS::Serverless::Function' Properties: Handler: register_account_for_support.lambda_handler - Description: "ADF Lambda Function - Registers an account for enterprise support" + Description: ADF - Account Management - Register support level CodeUri: lambda_codebase/account_processing Tracing: Active Layers: @@ -656,7 +761,7 @@ Resources: ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ['Metadata', 'ADF', 'Version'] ADF_LOG_LEVEL: !Ref LogLevel - FunctionName: RegisterAccountForSupportFunction + FunctionName: adf-account-management-register-support-level Role: !GetAtt 
RegisterAccountForSupportFunctionRole.Arn Metadata: BuildMethod: python3.12 @@ -664,6 +769,7 @@ Resources: RegisterAccountForSupportFunctionRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-management/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -672,7 +778,9 @@ Resources: Service: - lambda.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-management/" + ManagedPolicyArns: + - !Ref LambdaLayerPolicy + - !Ref AccountProcessingLambdaBasePolicy Policies: - PolicyName: "adf-lambda-support-access-policy" PolicyDocument: @@ -708,7 +816,8 @@ Resources: AccountManagementStateMachine: Type: "AWS::StepFunctions::StateMachine" Properties: - RoleArn: !GetAtt StateMachineExecutionRole.Arn + StateMachineName: "adf-account-management" + RoleArn: !GetAtt AccountManagementStateMachineExecutionRole.Arn TracingConfiguration: Enabled: true DefinitionString: !Sub |- @@ -803,6 +912,31 @@ Resources: "MaxAttempts": 6 } ], + "Next": "EnableBootstrappingJumpRole" + }, + "EnableBootstrappingJumpRole": { + "Type": "Task", + "Resource": "${JumpRoleApplication.Outputs.ManagerFunctionArn}", + "TimeoutSeconds": 300, + "Retry": [ + { + "ErrorEquals": [ + "Lambda.Unknown", + "Lambda.ServiceException", + "Lambda.AWSLambdaException", + "Lambda.SdkClientException", + "Lambda.TooManyRequestsException" + ], + "IntervalSeconds": 2, + "BackoffRate": 2, + "MaxAttempts": 6 + } + ], + "Next": "WaitForRoleUpdateToApply" + }, + "WaitForRoleUpdateToApply": { + "Type": "Wait", + "Seconds": 60, "Next": "ConfigureAccountRegions" }, "ConfigureAccountRegions": { @@ -980,6 +1114,7 @@ Resources: LayerName: adf_shared_layer Metadata: BuildMethod: python3.12 + BuildArchitecture: arm64 LambdaLayerPolicy: Type: "AWS::IAM::ManagedPolicy" @@ -991,84 +1126,79 @@ Resources: - Effect: "Allow" Action: "lambda:GetLayerVersion" Resource: !Ref ADFSharedPythonLambdaLayerVersion - Roles: - - !Ref AccountAliasConfigFunctionRole - - !Ref AccountOUConfigFunctionRole - - !Ref AccountProcessingLambdaRole - - !Ref AccountRegionConfigFunctionRole - - !Ref AccountTagConfigFunctionRole - - !Ref CreateAccountFunctionRole - - !Ref DeleteDefaultVPCFunctionRole - - !Ref GetAccountRegionsFunctionRole - - !Ref LambdaRole - - !Ref RegisterAccountForSupportFunctionRole - - !Ref AccountHandlerFunctionRole - - LambdaRole: - Type: "AWS::IAM::Role" - Properties: - AssumeRolePolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: "Allow" - Principal: - Service: - - "states.amazonaws.com" - - "lambda.amazonaws.com" - Action: - - "sts:AssumeRole" - LambdaPolicy: + CommonLambdaPolicy: Type: "AWS::IAM::ManagedPolicy" Properties: - Description: "Policy to allow Lambda to perform actions" + Description: "Policy to allow Lambda functions to common Lambda resources" PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - - "sts:AssumeRole" - "logs:CreateLogGroup" - "logs:CreateLogStream" - "logs:PutLogEvents" - - "organizations:DescribeOrganizationalUnit" - - "organizations:ListParents" - - "cloudformation:*" - - "iam:GetRole" - - "iam:PassRole" - - "iam:CreateRole" - - "iam:PutRolePolicy" - - "organizations:DescribeOrganization" - - "organizations:DescribeAccount" - - "states:StartExecution" + - "xray:PutTelemetryRecords" + - "xray:PutTraceSegments" Resource: "*" - - Effect: Allow - Action: - - "ssm:DeleteParameter" - - "ssm:DeleteParameters" - - "ssm:GetParameter" - - "ssm:GetParameters" - - "ssm:GetParametersByPath" - - "ssm:PutParameter" - Resource: - - !Sub 
"arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/*" - - Effect: "Allow" - Action: "s3:ListBucket" - Resource: !GetAtt BootstrapTemplatesBucket.Arn + + ### Account-Bootstrapping Jump Role begin + JumpRoleApplication: + Type: AWS::Serverless::Application + DeletionPolicy: Delete + UpdateReplacePolicy: Retain + Properties: + Location: account_bootstrapping_jump_role.yml + Parameters: + OrganizationId: !GetAtt Organization.OrganizationId + ADFVersion: !FindInMap ['Metadata', 'ADF', 'Version'] + LambdaLayer: !Ref ADFSharedPythonLambdaLayerVersion + CrossAccountAccessRoleName: !Ref CrossAccountAccessRoleName + DeploymentAccountId: !GetAtt DeploymentAccount.AccountId + LogLevel: !Ref LogLevel + GrantOrgWidePrivilegedBootstrapAccessUntil: !Ref GrantOrgWidePrivilegedBootstrapAccessUntil + ### Account-Bootstrapping Jump Role end + + BootstrapStackWaiterLambdaRole: + Type: "AWS::IAM::Role" + Properties: + Path: "/adf/account-bootstrapping/" + RoleName: "adf-account-bootstrapping-bootstrap-stack-waiter" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: - Effect: "Allow" - Action: "s3:GetObject" - Resource: - !Sub "${BootstrapTemplatesBucket.Arn}/*" - Roles: - - !Ref LambdaRole + Principal: + Service: + - "lambda.amazonaws.com" + Action: + - "sts:AssumeRole" + ManagedPolicyArns: + - !Ref CommonLambdaPolicy + Policies: + - PolicyName: "stack-waiter-policies" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Action: + - "sts:AssumeRole" + Resource: + - !GetAtt CrossAccountJumpRoleArn.Value + Condition: + StringEquals: + aws:PrincipalOrgID: !GetAtt Organization.OrganizationId - StackWaiterFunction: + BootstrapStackWaiterFunction: Type: "AWS::Serverless::Function" + DependsOn: + - BootstrapTemplatesBucketPolicy Properties: Handler: wait_until_complete.lambda_handler Layers: - !Ref ADFSharedPythonLambdaLayerVersion - Description: "ADF Lambda Function - StackWaiterFunction" + Description: ADF - Account Bootstrapping - Wait for Stack Environment: Variables: S3_BUCKET_NAME: !Ref BootstrapTemplatesBucket @@ -1077,80 +1207,184 @@ Resources: ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ["Metadata", "ADF", "Version"] ADF_LOG_LEVEL: !Ref LogLevel - FunctionName: StackWaiter - Role: !GetAtt LambdaRole.Arn + FunctionName: adf-account-bootstrapping-wait-for-bootstrap-stack + Role: !GetAtt BootstrapStackWaiterLambdaRole.Arn Metadata: BuildMethod: python3.12 + DetermineEventLambdaRole: + Type: "AWS::IAM::Role" + Properties: + Path: "/adf/account-bootstrapping/" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Principal: + Service: + - "lambda.amazonaws.com" + Action: + - "sts:AssumeRole" + ManagedPolicyArns: + - !Ref CommonLambdaPolicy + Policies: + - PolicyName: "determine-event-policies" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Action: + - "ssm:GetParameter" + - "ssm:GetParameters" + Resource: + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/config" + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/cross_account_access_role" + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/deployment_account_id" + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/deployment_account_region" + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/extensions/terraform/enabled" + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/target_regions" + - Effect: "Allow" + Action: 
+ - "organizations:DescribeOrganizationalUnit" + - "organizations:DescribeOrganization" + - "organizations:ListParents" + Resource: "*" + DetermineEventFunction: Type: "AWS::Serverless::Function" + DependsOn: + - BootstrapTemplatesBucketPolicy Properties: Handler: determine_event.lambda_handler Layers: - !Ref ADFSharedPythonLambdaLayerVersion - Description: "ADF Lambda Function - DetermineEvent" + Description: ADF - Account Bootstrapping - Determine Event Type Environment: Variables: S3_BUCKET_NAME: !Ref BootstrapTemplatesBucket TERMINATION_PROTECTION: false - DEPLOYMENT_ACCOUNT_BUCKET: !GetAtt SharedModulesBucketName.Value + SHARED_MODULES_BUCKET: !GetAtt SharedModulesBucketName.Value MANAGEMENT_ACCOUNT_ID: !Ref AWS::AccountId ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ["Metadata", "ADF", "Version"] ADF_LOG_LEVEL: !Ref LogLevel - FunctionName: DetermineEventFunction - Role: !GetAtt LambdaRole.Arn + FunctionName: adf-account-bootstrapping-determine-event + Role: !GetAtt DetermineEventLambdaRole.Arn Metadata: BuildMethod: python3.12 - CrossAccountExecuteFunction: + CrossAccountDeployBootstrapLambdaRole: + Type: "AWS::IAM::Role" + Properties: + Path: "/adf/account-bootstrapping/" + RoleName: "adf-account-bootstrapping-cross-account-deploy-bootstrap" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Principal: + Service: + - "lambda.amazonaws.com" + Action: + - "sts:AssumeRole" + ManagedPolicyArns: + - !Ref CommonLambdaPolicy + Policies: + - PolicyName: "cross-account-exec-policies" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Action: + - "sts:AssumeRole" + Resource: + - !GetAtt CrossAccountJumpRoleArn.Value + - Effect: Allow + Action: + - "ssm:GetParameter" + - "ssm:PutParameter" + Resource: + - !Sub "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/adf/deployment_account_id" + - !Sub "arn:${AWS::Partition}:ssm:${DeploymentAccountMainRegion}:${AWS::AccountId}:parameter/adf/deployment_account_id" + - Effect: "Allow" + Action: "s3:ListBucket" + Resource: !GetAtt BootstrapTemplatesBucket.Arn + - Effect: "Allow" + Action: "s3:GetObject" + Resource: + !Sub "${BootstrapTemplatesBucket.Arn}/*" + + CrossAccountDeployBootstrapFunction: Type: "AWS::Serverless::Function" + DependsOn: + - BootstrapTemplatesBucketPolicy Properties: Handler: account_bootstrap.lambda_handler Layers: - !Ref ADFSharedPythonLambdaLayerVersion - Description: "ADF Lambda Function - CrossAccountExecuteFunction" + Description: >- + ADF - Account Bootstrapping - Cross-Account Deploy Bootstrap Stacks Environment: Variables: S3_BUCKET_NAME: !Ref BootstrapTemplatesBucket TERMINATION_PROTECTION: false - DEPLOYMENT_ACCOUNT_BUCKET: !GetAtt SharedModulesBucketName.Value + SHARED_MODULES_BUCKET: !GetAtt SharedModulesBucketName.Value MANAGEMENT_ACCOUNT_ID: !Ref AWS::AccountId ORGANIZATION_ID: !GetAtt Organization.OrganizationId ADF_VERSION: !FindInMap ["Metadata", "ADF", "Version"] ADF_LOG_LEVEL: !Ref LogLevel - FunctionName: CrossAccountExecuteFunction - Role: !GetAtt LambdaRole.Arn + FunctionName: adf-account-bootstrapping-cross-account-deploy-bootstrap + Role: !GetAtt CrossAccountDeployBootstrapLambdaRole.Arn Timeout: 900 Metadata: BuildMethod: python3.12 - RoleStackDeploymentFunction: - Type: "AWS::Serverless::Function" + MovedToRootCleanupIfRequiredLambdaRole: + Type: "AWS::IAM::Role" Properties: - Handler: deployment_account_config.lambda_handler - Layers: - - !Ref ADFSharedPythonLambdaLayerVersion - Description: 
"ADF Lambda Function - RoleStackDeploymentFunction" - Environment: - Variables: - S3_BUCKET_NAME: !Ref BootstrapTemplatesBucket - TERMINATION_PROTECTION: false - MANAGEMENT_ACCOUNT_ID: !Ref AWS::AccountId - ADF_VERSION: !FindInMap ["Metadata", "ADF", "Version"] - ADF_LOG_LEVEL: !Ref LogLevel - FunctionName: RoleStackDeploymentFunction - Role: !GetAtt LambdaRole.Arn - Metadata: - BuildMethod: python3.12 + Path: "/adf/account-bootstrapping/" + RoleName: "adf-account-bootstrapping-moved-to-root-cleanup-if-required" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Principal: + Service: + - "lambda.amazonaws.com" + Action: + - "sts:AssumeRole" + ManagedPolicyArns: + - !Ref CommonLambdaPolicy + Policies: + - PolicyName: "moved-to-root-policies" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Action: + - "ssm:GetParameter" + - "ssm:GetParameters" + Resource: + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/moves/to_root/action" + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/cross_account_access_role" + - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/target_regions" + - Effect: "Allow" + Action: + - "sts:AssumeRole" + Resource: + - !GetAtt CrossAccountJumpRoleArn.Value - MovedToRootActionFunction: + MovedToRootCleanupIfRequiredFunction: Type: "AWS::Serverless::Function" + DependsOn: + - BootstrapTemplatesBucketPolicy Properties: Handler: moved_to_root.lambda_handler Layers: - !Ref ADFSharedPythonLambdaLayerVersion - Description: "ADF Lambda Function - MovedToRootActionFunction" + Description: >- + ADF - Account Bootstrapping - Moved to Root Cleanup Bootstrap Stacks + if required. Environment: Variables: S3_BUCKET_NAME: !Ref BootstrapTemplatesBucket @@ -1158,19 +1392,48 @@ Resources: MANAGEMENT_ACCOUNT_ID: !Ref AWS::AccountId ADF_VERSION: !FindInMap ["Metadata", "ADF", "Version"] ADF_LOG_LEVEL: !Ref LogLevel - FunctionName: MovedToRootActionFunction - Role: !GetAtt LambdaRole.Arn + FunctionName: adf-account-bootstrapping-moved-to-root-cleanup-if-required + Role: !GetAtt MovedToRootCleanupIfRequiredLambdaRole.Arn Timeout: 900 Metadata: BuildMethod: python3.12 - UpdateResourcePoliciesFunction: + UpdateDeploymentResourcePoliciesLambdaRole: + Type: "AWS::IAM::Role" + Properties: + Path: "/adf/account-bootstrapping/" + RoleName: "adf-account-bootstrapping-update-deployment-resource-policies" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Principal: + Service: + - "lambda.amazonaws.com" + Action: + - "sts:AssumeRole" + ManagedPolicyArns: + - !Ref CommonLambdaPolicy + Policies: + - PolicyName: "update-resource-policies" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Action: + - "sts:AssumeRole" + Resource: + - !GetAtt CrossAccountJumpRoleArn.Value + + UpdateDeploymentResourcePoliciesFunction: Type: "AWS::Serverless::Function" + DependsOn: + - BootstrapTemplatesBucketPolicy Properties: Handler: generic_account_config.lambda_handler Layers: - !Ref ADFSharedPythonLambdaLayerVersion - Description: "ADF Lambda Function - UpdateResourcePoliciesFunction" + Description: ADF - Account Bootstrapping - Configure Deployment Target Environment: Variables: S3_BUCKET_NAME: !Ref BootstrapTemplatesBucket @@ -1178,15 +1441,17 @@ Resources: MANAGEMENT_ACCOUNT_ID: !Ref AWS::AccountId ADF_VERSION: !FindInMap ["Metadata", "ADF", "Version"] ADF_LOG_LEVEL: !Ref LogLevel - FunctionName: UpdateResourcePoliciesFunction - Role: !GetAtt 
LambdaRole.Arn + FunctionName: adf-account-bootstrapping-config-policies-deployment-target + Role: !GetAtt UpdateDeploymentResourcePoliciesLambdaRole.Arn Metadata: BuildMethod: python3.12 - CloudWatchEventsRule: + AccountOUMoveEventsRule: Type: "AWS::Events::Rule" Properties: - Description: Triggers StateMachine on Move OU + Name: "adf-account-bootstrapping-account-ou-move" + Description: >- + Triggers Account Bootstrapping state machine on Account OU move EventPattern: source: - aws.organizations @@ -1200,25 +1465,11 @@ Resources: RoleArn: !GetAtt AccountBootstrapStartExecutionRole.Arn Id: CreateStackLinkedAccountV1 - CodeBuildRole: - Type: AWS::IAM::Role - Properties: - AssumeRolePolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: "Allow" - Principal: - Service: - - "codebuild.amazonaws.com" - Action: - - "sts:AssumeRole" - ManagedPolicyArns: - - !Ref "CodeBuildPolicy" - RoleName: "adf-codebuild-role" - BootstrapCodeBuildRole: Type: AWS::IAM::Role Properties: + Path: "/adf/bootstrap-pipeline/" + RoleName: adf-bootstrap-pipeline-codebuild AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -1228,6 +1479,9 @@ Resources: - "codebuild.amazonaws.com" Action: - "sts:AssumeRole" + Condition: + ArnEquals: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:codebuild:${AWS::Region}:${AWS::AccountId}:project/adf-bootstrap-pipeline-build" ManagedPolicyArns: - !Ref "CodeBuildPolicy" Policies: @@ -1248,9 +1502,13 @@ Resources: PolicyDocument: Version: "2012-10-17" Statement: + - Effect: Allow + Action: + - "sts:AssumeRole" + Resource: + - !GetAtt CrossAccountJumpRoleArn.Value - Effect: "Allow" Action: - - "cloudformation:ListStacks" - "logs:CreateLogGroup" - "logs:CreateLogStream" - "logs:PutLogEvents" @@ -1265,20 +1523,14 @@ Resources: - "organizations:EnablePolicyType" - "organizations:ListAccounts" - "organizations:ListAccountsForParent" - - "organizations:ListOrganizationalUnitsForParent" - "organizations:ListChildren" + - "organizations:ListOrganizationalUnitsForParent" - "organizations:ListParents" - "organizations:ListPolicies" - "organizations:ListPoliciesForTarget" - "organizations:ListRoots" - "organizations:UpdatePolicy" - - "organizations:CreateAccount" - - "organizations:MoveAccount" - - "organizations:DescribeCreateAccountStatus" - - "organizations:TagResource" - "sts:GetCallerIdentity" - - "sts:AssumeRole" - - "cloudformation:ValidateTemplate" Resource: "*" - Effect: Allow Action: @@ -1293,35 +1545,6 @@ Resources: Resource: - !Ref AccountManagementStateMachine - !Ref AccountBootstrappingStateMachine - - Effect: "Allow" - Action: - - "cloudformation:DescribeChangeSet" - - "cloudformation:DeleteStack" - - "cloudformation:CancelUpdateStack" - - "cloudformation:ContinueUpdateRollback" - - "cloudformation:DeleteChangeSet" - - "cloudformation:DescribeStacks" - - "cloudformation:SetStackPolicy" - - "cloudformation:SignalResource" - - "cloudformation:UpdateTerminationProtection" - Resource: - - !Sub "arn:${AWS::Partition}:cloudformation:*:*:stack/adf-global-base-*/*" - - !Sub "arn:${AWS::Partition}:cloudformation:*:*:stack/adf-regional-base-*/*" - - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/adf-global-base-adf-build/*" - - Effect: "Allow" - Action: - - "cloudformation:CreateStack" - - "cloudformation:UpdateStack" - - "cloudformation:CreateChangeSet" - - "cloudformation:CreateUploadBucket" - - "cloudformation:ExecuteChangeSet" - Resource: - - !Sub 
"arn:${AWS::Partition}:cloudformation:${DeploymentAccountMainRegion}:*:stack/adf-global-base-bootstrap/*" - - !Sub "arn:${AWS::Partition}:cloudformation:${DeploymentAccountMainRegion}:*:stack/adf-global-base-iam/*" - - !Sub "arn:${AWS::Partition}:cloudformation:${DeploymentAccountMainRegion}:${DeploymentAccount.AccountId}:stack/adf-global-base-deployment/*" - - !Sub "arn:${AWS::Partition}:cloudformation:${DeploymentAccountMainRegion}:${AWS::AccountId}:stack/adf-global-base-adf-build/*" - - !Sub "arn:${AWS::Partition}:cloudformation:*:*:stack/adf-regional-base-bootstrap/*" - - !Sub "arn:${AWS::Partition}:cloudformation:*:${DeploymentAccount.AccountId}:stack/adf-regional-base-deployment/*" - Effect: "Allow" Action: - "s3:DeleteObject" @@ -1330,48 +1553,59 @@ Resources: - "s3:ListBucket" - "s3:PutObject" Resource: + - !GetAtt "ADFAccountBucket.Arn" + - !Sub "${ADFAccountBucket.Arn}/*" - !GetAtt "BootstrapTemplatesBucket.Arn" - !Sub "${BootstrapTemplatesBucket.Arn}/*" - - !GetAtt "BootstrapArtifactStorageBucket.Arn" - - !Sub "${BootstrapArtifactStorageBucket.Arn}/*" - !Sub "arn:${AWS::Partition}:s3:::${SharedModulesBucket.BucketName}" - !Sub "arn:${AWS::Partition}:s3:::${SharedModulesBucket.BucketName}/*" - - !GetAtt ADFAccountBucket.Arn - - !Sub "${ADFAccountBucket.Arn}/*" - Effect: "Allow" Action: - - "codebuild:*" - Resource: - # Hardcoded name (instead of !GetAtt CodeBuildProject.Arn) to avoid a circular - # dependency. Converting this to an inline policy can break the circle - - !Sub "arn:${AWS::Partition}:codebuild:${AWS::Region}:${AWS::AccountId}:project/aws-deployment-framework-base-templates" - - Effect: "Allow" - Action: - - "iam:CreatePolicy" - - "iam:CreateRole" - - "iam:DeleteRole" - - "iam:DeleteRolePolicy" - - "iam:GetRole" - - "iam:PutRolePolicy" - - "iam:TagRole" - - "iam:UntagRole" - - "iam:UpdateAssumeRolePolicy" + - "s3:GetBucketPolicy" + - "s3:GetObject" + - "s3:ListBucket" Resource: - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${CrossAccountAccessRoleName}" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${CrossAccountAccessRoleName}-readonly" - - Effect: "Allow" + - !GetAtt "BootstrapArtifactStorageBucket.Arn" + - !Sub "${BootstrapArtifactStorageBucket.Arn}/*" + + OrganizationsReadonlyRole: + Type: AWS::IAM::Role + DependsOn: CleanupLegacyStacks + Properties: + Path: /adf/organizations/ + RoleName: "adf-organizations-readonly" + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Principal: + AWS: + - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccount.AccountId}:root" + Condition: + StringEquals: + "aws:PrincipalOrgID": + - !GetAtt Organization.OrganizationId + ArnEquals: + "aws:PrincipalArn": + - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccount.AccountId}:role/adf-codebuild-role" + - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccount.AccountId}:role/adf/pipeline-management/adf-pipeline-management-generate-inputs" Action: - - "iam:DeleteRole" - - "iam:DeleteRolePolicy" - - "iam:UntagRole" - Resource: - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-automation-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-deployment-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codecommit-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-readonly-automation-role" - - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-terraform-role" - 
- !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-update-cross-account-access-role" + - "sts:AssumeRole" + Policies: + - PolicyName: "adf-organizations-readonly-policy" + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: Allow + Action: + - organizations:ListAccounts + - organizations:ListAccountsForParent + - organizations:DescribeAccount + - organizations:ListOrganizationalUnitsForParent + - organizations:ListRoots + - organizations:ListChildren + - tag:GetResources + Resource: "*" CodeCommitRepository: Type: AWS::CodeCommit::Repository @@ -1382,6 +1616,8 @@ Resources: CodeBuildProject: Type: AWS::CodeBuild::Project + DependsOn: + - BootstrapTemplatesBucketPolicy Properties: TimeoutInMinutes: 60 Artifacts: @@ -1403,7 +1639,7 @@ Resources: Value: !Ref ADFAccountBucket - Name: MANAGEMENT_ACCOUNT_ID Value: !Ref AWS::AccountId - - Name: DEPLOYMENT_ACCOUNT_BUCKET + - Name: SHARED_MODULES_BUCKET Value: !GetAtt SharedModulesBucketName.Value - Name: ORGANIZATION_ID Value: !GetAtt Organization.OrganizationId @@ -1411,11 +1647,13 @@ Resources: Value: !Ref LogLevel - Name: ACCOUNT_MANAGEMENT_STATE_MACHINE_ARN Value: !Ref AccountManagementStateMachine + - Name: MAIN_DEPLOYMENT_REGION + Value: !Ref DeploymentAccountMainRegion - Name: ACCOUNT_BOOTSTRAPPING_STATE_MACHINE_ARN Value: !Ref AccountBootstrappingStateMachine Type: LINUX_CONTAINER - Name: "aws-deployment-framework-base-templates" - ServiceRole: !Ref BootstrapCodeBuildRole + Name: "adf-bootstrap-pipeline-build" + ServiceRole: !GetAtt BootstrapCodeBuildRole.Arn Source: BuildSpec: | version: 0.2 @@ -1449,18 +1687,19 @@ Resources: - >- sam package --output-template-file adf-bootstrap/deployment/global.yml + --region $MAIN_DEPLOYMENT_REGION --s3-prefix adf-bootstrap/deployment - --s3-bucket $DEPLOYMENT_ACCOUNT_BUCKET + --s3-bucket $SHARED_MODULES_BUCKET - python adf-build/store_config.py # Shared Modules to be used with AWS CodeBuild: - >- aws s3 sync ./adf-build/shared - s3://$DEPLOYMENT_ACCOUNT_BUCKET/adf-build - --quiet + s3://$SHARED_MODULES_BUCKET/adf-build + --only-show-errors # Base templates: - >- - aws s3 sync . s3://$S3_BUCKET --quiet --delete + aws s3 sync . 
s3://$S3_BUCKET --only-show-errors --delete # Upload account files to the ACCOUNT_BUCKET - >- python adf-build/shared/helpers/sync_to_s3.py @@ -1478,7 +1717,7 @@ Resources: Type: CODEPIPELINE Tags: - Key: "Name" - Value: "aws-deployment-framework-base-templates" + Value: "adf-bootstrap-pipeline-build" CodePipeline: Type: AWS::CodePipeline::Pipeline @@ -1486,7 +1725,7 @@ Resources: ArtifactStore: Type: S3 Location: !Ref BootstrapArtifactStorageBucket - RoleArn: !GetAtt CodePipelineRole.Arn + RoleArn: !GetAtt BootstrapCodePipelineRole.Arn Name: "aws-deployment-framework-bootstrap-pipeline" Stages: - Name: CodeCommit @@ -1504,6 +1743,19 @@ Resources: RepositoryName: !GetAtt CodeCommitRepository.Name PollForSourceChanges: false RunOrder: 1 + - Name: EnableBootstrappingViaJumpRole + Actions: + - Name: EnableBootstrappingViaJumpRole + ActionTypeId: + Category: Invoke + Owner: AWS + Provider: Lambda + Version: "1" + RunOrder: 1 + Configuration: + FunctionName: !GetAtt JumpRoleApplication.Outputs.ManagerFunctionName + InputArtifacts: [] + OutputArtifacts: [] - Name: UploadAndUpdateBaseStacks Actions: - Name: UploadAndUpdateBaseStacks @@ -1527,11 +1779,25 @@ Resources: } ] RunOrder: 1 + - Name: RestrictBootstrappingViaJumpRole + Actions: + - Name: RestrictBootstrappingViaJumpRole + ActionTypeId: + Category: Invoke + Owner: AWS + Provider: Lambda + Version: "1" + RunOrder: 1 + Configuration: + FunctionName: !GetAtt JumpRoleApplication.Outputs.ManagerFunctionName + InputArtifacts: [] + OutputArtifacts: [] - CodePipelineRole: + BootstrapCodePipelineRole: Type: AWS::IAM::Role Properties: - RoleName: "adf-codepipeline-role" + Path: "/adf/bootstrap-pipeline/" + RoleName: "adf-bootstrap-codepipeline" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -1541,58 +1807,51 @@ Resources: - codepipeline.amazonaws.com Action: - sts:AssumeRole - Path: / - - CodePipelineRolePolicy: - Type: "AWS::IAM::ManagedPolicy" - Properties: - Description: "Policy to allow codepipeline to perform actions" - PolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: "Allow" - Action: - - "codebuild:*" - - "codecommit:*" - - "s3:GetBucketPolicy" - - "s3:GetObject" - - "s3:ListBucket" - - "s3:PutObject" - Resource: "*" - Roles: - - !Ref CodePipelineRole - - OrgEventCodePipelineRole: - Type: AWS::IAM::Role - Properties: - RoleName: "adf-org-event-codepipeline" - AssumeRolePolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: Allow - Principal: - Service: events.amazonaws.com - Action: - - sts:AssumeRole - Path: / - - OrgEventCodePipelinePolicy: - Type: AWS::IAM::Policy - Properties: - PolicyName: "adf-org-event-codepipeline-policy" - PolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: Allow - Action: - - "codepipeline:StartPipelineExecution" - Resource: !Sub "arn:${AWS::Partition}:codepipeline:${AWS::Region}:${AWS::AccountId}:${CodePipeline}" - Roles: - - !Ref OrgEventCodePipelineRole + Condition: + StringEqualsIfExists: + "aws:SourceAccount": !Ref AWS::AccountId + "aws:SourceArn": !Sub "arn:${AWS::Partition}:codepipeline:${AWS::Region}:${AWS::AccountId}:aws-deployment-framework-bootstrap-pipeline" + Policies: + - PolicyName: bootstrap-codepipeline-policies + PolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Action: + - "s3:GetObject" + - "s3:ListBucket" + - "s3:PutObject" + Resource: + - !GetAtt "BootstrapArtifactStorageBucket.Arn" + - !Sub "${BootstrapArtifactStorageBucket.Arn}/*" + - Effect: Allow + Sid: "CodeBuild" + Action: + - "codebuild:BatchGetBuilds" + - 
"codebuild:StartBuild" + Resource: + - !GetAtt CodeBuildProject.Arn + - Effect: Allow + Sid: "CodeCommit" + Action: + - "codecommit:GetBranch" + - "codecommit:GetCommit" + - "codecommit:UploadArchive" + - "codecommit:GetUploadArchiveStatus" + - "codecommit:CancelUploadArchive" + Resource: + - !GetAtt CodeCommitRepository.Arn + - Effect: Allow + Sid: "Lambda" + Action: + - "lambda:InvokeFunction" + Resource: + - !GetAtt JumpRoleApplication.Outputs.ManagerFunctionArn - StatesExecutionRole: + AccountBootstrapStateMachineExecutionRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-bootstrapping/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -1601,7 +1860,9 @@ Resources: Service: - states.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-bootstrapping/" + Condition: + ArnEquals: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:stateMachine:adf-account-bootstrapping" Policies: - PolicyName: "adf-state-machine-role-policy" PolicyDocument: @@ -1612,15 +1873,16 @@ Resources: - "lambda:InvokeFunction" Resource: - !GetAtt DetermineEventFunction.Arn - - !GetAtt CrossAccountExecuteFunction.Arn - - !GetAtt MovedToRootActionFunction.Arn - - !GetAtt StackWaiterFunction.Arn - - !GetAtt RoleStackDeploymentFunction.Arn - - !GetAtt UpdateResourcePoliciesFunction.Arn + - !GetAtt CrossAccountDeployBootstrapFunction.Arn + - !GetAtt MovedToRootCleanupIfRequiredFunction.Arn + - !GetAtt BootstrapStackWaiterFunction.Arn + - !GetAtt UpdateDeploymentResourcePoliciesFunction.Arn + - !GetAtt JumpRoleApplication.Outputs.ManagerFunctionArn AccountBootstrapStartExecutionRole: Type: "AWS::IAM::Role" Properties: + Path: "/adf/account-bootstrapping/" AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -1629,7 +1891,9 @@ Resources: Service: - events.amazonaws.com Action: "sts:AssumeRole" - Path: "/aws-deployment-framework/account-bootstrapping/" + Condition: + ArnEquals: + "aws:SourceArn": !Sub "arn:${AWS::Partition}:events:${AWS::Region}:${AWS::AccountId}:rule/adf-account-bootstrapping-account-ou-move" Policies: - PolicyName: "adf-start-state-machine" PolicyDocument: @@ -1644,7 +1908,8 @@ Resources: AccountBootstrappingStateMachine: Type: "AWS::StepFunctions::StateMachine" Properties: - RoleArn: !GetAtt StatesExecutionRole.Arn + StateMachineName: "adf-account-bootstrapping" + RoleArn: !GetAtt AccountBootstrapStateMachineExecutionRole.Arn TracingConfiguration: Enabled: true DefinitionString: !Sub |- @@ -1655,7 +1920,27 @@ Resources: "DetermineEvent": { "Type": "Task", "Resource": "${DetermineEventFunction.Arn}", - "Next": "MovedToRootOrProtected?", + "Next": "EnableBootstrappingJumpRole", + "TimeoutSeconds": 300, + "Retry": [ + { + "ErrorEquals": [ + "Lambda.Unknown", + "Lambda.ServiceException", + "Lambda.AWSLambdaException", + "Lambda.SdkClientException", + "Lambda.TooManyRequestsException" + ], + "IntervalSeconds": 2, + "BackoffRate": 2, + "MaxAttempts": 6 + } + ] + }, + "EnableBootstrappingJumpRole": { + "Type": "Task", + "Resource": "${JumpRoleApplication.Outputs.ManagerFunctionArn}", + "Next": "WaitForRoleUpdateToApply", "TimeoutSeconds": 300, "Retry": [ { @@ -1672,6 +1957,11 @@ Resources: } ] }, + "WaitForRoleUpdateToApply": { + "Type": "Wait", + "Seconds": 60, + "Next": "MovedToRootOrProtected?" 
+            },
            "MovedToRootOrProtected?": {
              "Type": "Choice",
              "Choices": [
@@ -1683,14 +1973,14 @@ Resources:
                {
                  "Variable": "$.moved_to_root",
                  "NumericEquals": 1,
-                  "Next": "MovedToRootAction"
+                  "Next": "MovedToRootCleanupIfRequired"
                }
              ],
              "Default": "CreateOrUpdateBaseStack"
            },
            "CreateOrUpdateBaseStack": {
              "Type": "Task",
-              "Resource": "${CrossAccountExecuteFunction.Arn}",
+              "Resource": "${CrossAccountDeployBootstrapFunction.Arn}",
              "Retry": [
                {
                  "ErrorEquals": [
@@ -1731,9 +2021,9 @@ Resources:
              "Next": "WaitUntilBootstrapComplete",
              "TimeoutSeconds": 900
            },
-            "MovedToRootAction": {
+            "MovedToRootCleanupIfRequired": {
              "Type": "Task",
-              "Resource": "${MovedToRootActionFunction.Arn}",
+              "Resource": "${MovedToRootCleanupIfRequiredFunction.Arn}",
              "Retry": [
                {
                  "ErrorEquals": [
@@ -1767,7 +2057,7 @@ Resources:
            },
            "WaitUntilBootstrapComplete": {
              "Type": "Task",
-              "Resource": "${StackWaiterFunction.Arn}",
+              "Resource": "${BootstrapStackWaiterFunction.Arn}",
              "Retry": [
                {
                  "ErrorEquals": ["RetryError"],
@@ -1803,34 +2093,14 @@ Resources:
                {
                  "Variable": "$.is_deployment_account",
                  "NumericEquals": 1,
-                  "Next": "DeploymentAccountConfig"
+                  "Next": "Success"
                }
              ],
              "Default": "ExecuteDeploymentAccountStateMachine"
            },
-            "DeploymentAccountConfig": {
-              "Type": "Task",
-              "Resource": "${RoleStackDeploymentFunction.Arn}",
-              "Retry": [
-                {
-                  "ErrorEquals": [
-                    "Lambda.Unknown",
-                    "Lambda.ServiceException",
-                    "Lambda.AWSLambdaException",
-                    "Lambda.SdkClientException",
-                    "Lambda.TooManyRequestsException"
-                  ],
-                  "IntervalSeconds": 2,
-                  "BackoffRate": 2,
-                  "MaxAttempts": 6
-                }
-              ],
-              "End": true,
-              "TimeoutSeconds": 900
-            },
            "ExecuteDeploymentAccountStateMachine": {
              "Type": "Task",
-              "Resource": "${UpdateResourcePoliciesFunction.Arn}",
+              "Resource": "${UpdateDeploymentResourcePoliciesFunction.Arn}",
              "Retry": [
                {
                  "ErrorEquals": [
@@ -1845,8 +2115,11 @@ Resources:
                  "MaxAttempts": 6
                }
              ],
-              "End": true,
+              "Next": "Success",
              "TimeoutSeconds": 900
+            },
+            "Success": {
+              "Type": "Succeed"
            }
          }
        }
@@ -1863,7 +2136,9 @@ Resources:
    Properties:
      Handler: handler.lambda_handler
      CodeUri: lambda_codebase/initial_commit/bootstrap_repository/adf-bootstrap/deployment/lambda_codebase/determine_default_branch
-      Description: "ADF Lambda Function - BootstrapDetermineDefaultBranchName"
+      Description: !Sub >-
+        ADF - Installer - Determine the default branch of the
+        ${CodeCommitRepository.Name} repository.
      Policies:
        - Version: "2012-10-17"
          Statement:
@@ -1898,7 +2173,12 @@ Resources:
    Properties:
      Handler: handler.lambda_handler
      CodeUri: lambda_codebase/initial_commit
-      Description: "ADF Lambda Function - BootstrapCreateInitialCommitFunction"
+      Description: !Sub >-
+        ADF - Installer - Initial Commit Bootstrap.
+
+        Creates the initial commit or update PR on the default branch of
+        the ${CodeCommitRepository.Name} repository, as required to
+        install or update ADF.
      Policies:
        - Version: "2012-10-17"
          Statement:
@@ -1917,6 +2197,8 @@ Resources:

   SharedModulesBucket:
     Type: Custom::CrossRegionBucket
+    DeletionPolicy: Retain
+    UpdateReplacePolicy: Retain
     Properties:
       ServiceToken: !GetAtt CrossRegionBucketHandler.Arn
       Region: !Ref DeploymentAccountMainRegion
@@ -1925,16 +2207,227 @@ Resources:
       PolicyDocument:
         Statement:
           - Action:
-              - s3:Get*
-              - s3:List*
-              - s3:PutObject
+              - s3:GetObject
+            Effect: Allow
+            Resource:
+              - "{bucket_arn}/adf-bootstrap/*"
+              - "{bucket_arn}/adf-build/*"
+            Principal:
+              AWS:
+                - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccount.AccountId}:root"
+          - Action:
+              - s3:ListBucket
+            Effect: Allow
+            Resource:
+              - "{bucket_arn}"
+            Principal:
+              AWS:
+                - !Sub "arn:${AWS::Partition}:iam::${DeploymentAccount.AccountId}:root"
+            Condition:
+              StringLike:
+                "s3:prefix":
+                  - "adf-bootstrap/*"
+                  - "adf-build/*"
+          - Action:
+              - s3:GetObject
             Effect: Allow
+            Resource:
+              - "{bucket_arn}/adf-bootstrap/*"
             Principal:
-              AWS: !Sub "arn:${AWS::Partition}:iam::${DeploymentAccount.AccountId}:root"
               Service:
-                - codebuild.amazonaws.com
-                - lambda.amazonaws.com
                 - cloudformation.amazonaws.com
+            Condition:
+              StringEquals:
+                "aws:SourceOrgID":
+                  - !GetAtt Organization.OrganizationId
+          - Sid: "DenyInsecureConnections"
+            Action:
+              - "s3:*"
+            Effect: Deny
+            Condition:
+              Bool:
+                aws:SecureTransport: "false"
+            Principal:
+              AWS: "*"
+          - Sid: "DenyInsecureTLS"
+            Action:
+              - "s3:*"
+            Effect: Deny
+            Condition:
+              NumericLessThan:
+                "s3:TlsVersion": "1.2"
+            Principal:
+              AWS: "*"
+
+  CleanupLegacyStacks:
+    Type: Custom::CleanupLegacyStacks
+    Properties:
+      ServiceToken: !GetAtt CleanupLegacyStacksHandler.Arn
+      Version: !FindInMap ["Metadata", "ADF", "Version"]
+      DeploymentAccountRegion: !Ref DeploymentAccountMainRegion
+
+  CleanupLegacyStacksHandler:
+    Type: AWS::Serverless::Function
+    Properties:
+      Handler: handler.lambda_handler
+      CodeUri: lambda_codebase/cleanup_legacy_stacks
+      Description: >-
+        ADF - Installer - Cleanup Legacy Stacks.
+
+        Checks if specific legacy bootstrap stacks exist.
+        If they do, they are cleaned up automatically.
+      Layers:
+        - !Ref ADFSharedPythonLambdaLayerVersion
+      Environment:
+        Variables:
+          MANAGEMENT_ACCOUNT_ID: !Ref AWS::AccountId
+          DEPLOYMENT_REGION: !Ref DeploymentAccountMainRegion
+          ADF_VERSION: !FindInMap ['Metadata', 'ADF', 'Version']
+          ADF_LOG_LEVEL: !Ref LogLevel
+      Policies:
+        - Version: "2012-10-17"
+          Statement:
+            - Effect: Allow
+              Action:
+                - cloudformation:DescribeStacks
+                - cloudformation:DeleteStack
+              Resource:
+                - !Sub "arn:${AWS::Partition}:cloudformation:${DeploymentAccountMainRegion}:${AWS::AccountId}:stack/adf-global-base-adf-build"
+                - !Sub "arn:${AWS::Partition}:cloudformation:${DeploymentAccountMainRegion}:${AWS::AccountId}:stack/adf-global-base-adf-build/*"
+            - Effect: Allow
+              Action:
+                - iam:DeleteRole
+                - iam:DeleteRolePolicy
+                - iam:UntagRole
+              Resource:
+                - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${CrossAccountAccessRoleName}"
+                - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/${CrossAccountAccessRoleName}-readonly"
+            - Effect: "Allow"
+              Action: "lambda:GetLayerVersion"
+              Resource: !Ref ADFSharedPythonLambdaLayerVersion
+      FunctionName: CleanupLegacyStacksFunction
+    Metadata:
+      BuildMethod: python3.12
+
+  OrganizationsRole:
+    # Only required if you intend to bootstrap the management account.
+    Type: AWS::IAM::Role
+    Condition: CreateCrossAccountAccessRole
+    DependsOn:
+      - CleanupLegacyStacks
+      - JumpRoleApplication
+    Properties:
+      Path: /
+      RoleName: !Ref CrossAccountAccessRoleName
+      AssumeRolePolicyDocument:
+        Version: "2012-10-17"
+        Statement:
+          - Effect: Allow
+            Principal:
+              AWS:
+                - !GetAtt CrossAccountJumpRoleArn.Value
+            Action:
+              - "sts:AssumeRole"
+
+  OrganizationsPolicy:
+    # Only required if you intend to bootstrap the management account.
+    Type: AWS::IAM::Policy
+    Condition: CreateCrossAccountAccessRole
+    Properties:
+      PolicyName: "adf-management-account-bootstrap-policy"
+      PolicyDocument:
+        Version: "2012-10-17"
+        Statement:
+          - Effect: Allow
+            Action:
+              - cloudformation:CancelUpdateStack
+              - cloudformation:ContinueUpdateRollback
+              - cloudformation:CreateChangeSet
+              - cloudformation:CreateStack
+              - cloudformation:CreateUploadBucket
+              - cloudformation:DeleteChangeSet
+              - cloudformation:DeleteStack
+              - cloudformation:DescribeChangeSet
+              - cloudformation:DescribeStacks
+              - cloudformation:ExecuteChangeSet
+              - cloudformation:ListStacks
+              - cloudformation:SetStackPolicy
+              - cloudformation:SignalResource
+              - cloudformation:UpdateStack
+              - cloudformation:UpdateTerminationProtection
+            Resource:
+              - !Sub "arn:${AWS::Partition}:cloudformation:*:${AWS::AccountId}:stack/*"
+          - Effect: Allow
+            Action:
+              - cloudformation:ValidateTemplate
+              - ec2:DeleteInternetGateway
+              - ec2:DeleteNetworkInterface
+              - ec2:DeleteRouteTable
+              - ec2:DeleteSubnet
+              - ec2:DeleteVpc
+              - ec2:DescribeInternetGateways
+              - ec2:DescribeNetworkInterfaces
+              - ec2:DescribeRegions
+              - ec2:DescribeRouteTables
+              - ec2:DescribeSubnets
+              - ec2:DescribeVpcs
+              - iam:CreateAccountAlias
+              - iam:DeleteAccountAlias
+              - iam:ListAccountAliases
+            Resource:
+              - "*"
+          - Effect: Allow
+            Action:
+              - ssm:PutParameter
+              - ssm:GetParameters
+              - ssm:GetParameter
+            Resource:
+              - !Sub "arn:${AWS::Partition}:ssm:*:${AWS::AccountId}:parameter/adf/*"
+          - Effect: Allow
+            Action:
+              - iam:CreateRole
+              - iam:DeleteRole
+              - iam:TagRole
+              - iam:UntagRole
+            Resource:
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-automation-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-deployment-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codecommit-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-readonly-automation-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-terraform-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/bootstrap/adf-bootstrap-test-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/bootstrap/adf-bootstrap-update-deployment-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/bootstrap/adf-update-cross-account-access"
+          - Effect: Allow
+            Action:
+              - iam:DeleteRolePolicy
+              - iam:GetRole
+              - iam:GetRolePolicy
+              - iam:PutRolePolicy
+              - iam:UpdateAssumeRolePolicy
+            Resource:
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-automation-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-bootstrap-test-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-bootstrap-update-deployment-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-deployment-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-cloudformation-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-codecommit-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-readonly-automation-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-terraform-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-update-cross-account-access"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/bootstrap/adf-bootstrap-test-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/bootstrap/adf-bootstrap-update-deployment-role"
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/bootstrap/adf-update-cross-account-access"
+          - Effect: "Allow"
+            Action:
+              - iam:DeleteRole
+              - iam:DeleteRolePolicy
+              - iam:UntagRole
+            Resource:
+              - !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf-update-cross-account-access-role"
+      Roles:
+        - !Ref OrganizationsRole

   SharedModulesBucketName:
     Type: AWS::SSM::Parameter
@@ -1952,6 +2445,14 @@ Resources:
       Type: String
       Value: !Ref LogLevel

+  CrossAccountJumpRoleArn:
+    Type: AWS::SSM::Parameter
+    Properties:
+      Description: DO NOT EDIT - Used by The AWS Deployment Framework
+      Name: /adf/cross_account_jump_role
+      Type: String
+      Value: !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/adf/account-bootstrapping/jump/adf-bootstrapping-cross-account-jump-role"
+
   CrossRegionBucketHandler:
     Type: AWS::Serverless::Function
     Properties:
@@ -1959,19 +2460,26 @@ Resources:
       CodeUri: lambda_codebase/cross_region_bucket
       Layers:
         - !Ref ADFSharedPythonLambdaLayerVersion
-      Description: "ADF Lambda Function - Create Deployment Bucket in Main Deployment Region"
+      Description: !Sub >-
+        ADF - Installer - Create Shared Modules Bucket in
+        ${DeploymentAccountMainRegion}.
       Policies:
         - Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: s3:CreateBucket
-              Resource: "*"
+              Resource: !Sub "arn:${AWS::Partition}:s3:::adf-shared-modules-*"
+              Condition:
+                StringLike:
+                  "s3:LocationConstraint": !Ref DeploymentAccountMainRegion
            - Effect: Allow
              Action:
                - s3:DeleteBucket
-                - s3:PutEncryptionConfiguration
+                - s3:PutBucketEncryption
+                - s3:PutBucketOwnershipControls
                - s3:PutBucketPolicy
                - s3:PutBucketPublicAccessBlock
+                - s3:PutEncryptionConfiguration
              # This must match BucketNamePrefix of the SharedModulesBucket resource
              Resource: !Sub "arn:${AWS::Partition}:s3:::adf-shared-modules-*"
            - Effect: Allow
@@ -1998,7 +2506,7 @@ Resources:
     Properties:
       Handler: handler.lambda_handler
       CodeUri: lambda_codebase/organization
-      Description: "ADF Lambda Function - Enable AWS Organizations"
+      Description: ADF - Installer - Enable AWS Organizations
       Policies:
         - Version: "2012-10-17"
           Statement:
@@ -2028,7 +2536,7 @@ Resources:
     Properties:
       Handler: handler.lambda_handler
       CodeUri: lambda_codebase/organization_unit
-      Description: "ADF Lambda Function - Create Organization Unit"
+      Description: ADF - Installer - Manage Deployment Organization Unit
       Policies:
         - Version: "2012-10-17"
           Statement:
@@ -2056,6 +2564,7 @@ Resources:
   AccountHandlerFunctionRole:
     Type: "AWS::IAM::Role"
     Properties:
+      Path: "/adf/installer/deployment-account-management/"
       AssumeRolePolicyDocument:
         Version: "2012-10-17"
         Statement:
@@ -2064,7 +2573,8 @@ Resources:
            Service:
              - lambda.amazonaws.com
            Action: "sts:AssumeRole"
-      Path: "/aws-deployment-framework/"
+      ManagedPolicyArns:
+        - !Ref LambdaLayerPolicy
       Policies:
         - PolicyName: "adf-account-management-access"
           PolicyDocument:
@@ -2095,7 +2605,7 @@ Resources:
     Properties:
       Handler: handler.lambda_handler
       CodeUri: lambda_codebase/account
-      Description: "ADF Lambda Function - Create Account"
+      Description: ADF - Installer - Deployment Account Management
       Role: !GetAtt AccountHandlerFunctionRole.Arn
       FunctionName: AccountHandler
       Layers:
@@ -2106,6 +2616,7 @@ Resources:
   PipelineCloudWatchEventRole:
     Type: AWS::IAM::Role
     Properties:
+      Path: "/adf/bootstrap-pipeline/"
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
@@ -2114,7 +2625,9 @@ Resources:
            Service:
              - events.amazonaws.com
            Action: sts:AssumeRole
-      Path: /
+            Condition:
+              ArnEquals:
+                "aws:SourceArn": !Sub "arn:${AWS::Partition}:events:${AWS::Region}:${AWS::AccountId}:rule/adf-bootstrap-pipeline-watch-repo"
      Policies:
        - PolicyName: adf-bootstrap-execute-cwe
          PolicyDocument:
@@ -2125,8 +2638,9 @@ Resources:
              Resource: !Sub "arn:${AWS::Partition}:codepipeline:${AWS::Region}:${AWS::AccountId}:${CodePipeline}"

   PipelineCloudWatchEventRule:
-    Type: AWS::Events::Rule
+    Type: "AWS::Events::Rule"
     Properties:
+      Name: "adf-bootstrap-pipeline-watch-repo"
       EventPattern:
         source:
           - aws.codecommit
diff --git a/tox.ini b/tox.ini
index bfa0bf282..17bcbbe1e 100644
--- a/tox.ini
+++ b/tox.ini
@@ -21,7 +21,7 @@ setenv=
     CODEBUILD_BUILD_ID=abcdef
    S3_BUCKET=some_bucket
    S3_BUCKET_NAME=some_bucket
-    DEPLOYMENT_ACCOUNT_BUCKET=some_deployment_account_bucket
+    SHARED_MODULES_BUCKET=some_shared_modules_bucket
    MANAGEMENT_ACCOUNT_ID=123
    ADF_VERSION=1.0.0
    ADF_LOG_LEVEL=CRITICAL