AWS Terraform App Template

This is meant to serve as a base configuration to get started with developing Terraform for AWS. An example folder for a dev environment has also been created.
The files and folders contained in this repository are described below.
The following Terraform files have been provided; some will require updating (see below for instructions):
main.tf
- This is where the bulk of your Terraform configuration can be placed, including resources, data sources, and any locals needed to simplify your configuration logic.
versions.tf
- This is where all Terraform/provider configuration is placed. The general versions and a generic AWS provider config have already been placed in this file.
variables.tf
- This is where any input variables required by your Terraform should be placed.
outputs.tf
- This is where any values that you'd like to output from your Terraform configuration should be placed.
./env/dev/backend.tf
- This controls where your Terraform state is stored in AWS S3. You'll need to update the file accordingly for your environment.
- There are prerequisites to get the S3 Terraform backend to work with AWS S3, detailed here: Setting up Terraform S3 backend with AWS S3
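- As a sketch of what ./env/dev/backend.tf might contain (the bucket name, key, and region below are illustrative placeholders, not values from this template):

```hcl
terraform {
  backend "s3" {
    # All values below are illustrative placeholders -- use your own.
    bucket = "my-product-terraform-state"        # your state bucket
    key    = "my-app/dev/terraform.tfstate"      # per-environment state key
    region = "us-east-1"
  }
}
```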
./env/dev/terraform.tfvars
- This houses your input values for each environment.
- When you add more environments (stage, prod, etc), create a new folder within the ./env/ directory with your environment name and copy this file and the
backend.tf
file there, updating values as necessary for the new environment.
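The environment-scaffolding steps above can be sketched in the shell (the "stage" environment name is just an example; run from the repo root):

```shell
set -eu
# Stand-in files for the existing dev environment described above,
# so this sketch is self-contained:
mkdir -p env/dev
touch env/dev/terraform.tfvars env/dev/backend.tf
# Scaffold the new environment by copying the dev files:
mkdir -p env/stage
cp env/dev/terraform.tfvars env/dev/backend.tf env/stage/
# Then edit env/stage/backend.tf and env/stage/terraform.tfvars
# with values for the new environment.
```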
.github/workflows/README.md
- This readme provides details on the active workflows and the sample_workflows folder.
.github/workflows/build_app.yaml
- This workflow will validate and build/deploy an AWS application defined in the Terraform configuration in this repo. It is triggered manually by the user and will prompt for the Terraform 'Action', app environment name, and the runner. These input variables are meant as an example and can be modified to fit the needs of the Terraform configuration in this repo.
.github/workflows/destroy_app.yaml
- This workflow will destroy the app infrastructure using Terraform. It is triggered manually by the user and will prompt for the app environment name, and the runner. These input variables are meant as an example and can be modified to fit the needs of the Terraform configuration in this repo.
.github/workflows/terraform_state_verify.yaml
- This workflow will refresh the state of the app infrastructure via `terraform plan/apply -refresh-only`. It is triggered manually, like the previous two workflows.
.github/workflows/sample_workflows/
- This folder contains sample workflows that are triggered by repository events. See the README.md for more details.
The main.tf file uses two HUIT modules, "constants" and "metadata", which are used to enforce IaC standards and to provide cloud account details used to build cloud resources. Usage information for these modules is provided below.
Module call:

```hcl
module "metadata" {
  source  = "artifactory.huit.harvard.edu/cloudarch-terraform-virtual__aws-modules/aws_metadata/aws"
  version = ">= v1.3.9" # This is just to ensure the latest version will be pulled when the app is first set up.
                        # We recommend updating this so that only patch version upgrades are pulled in the future.

  # Product Context Variables
  account_name              = var.account_name
  # account_env             = var.account_env
  product_name              = var.product_name
  product_name_short        = var.product_name_short
  product_environment       = var.product_environment
  product_environment_short = var.product_environment_short
  product_asset_id          = var.product_asset_id
  product_context           = var.product_context

  # Set to the default shared value in the CF Export
  shared_values_prefix      = var.shared_values_prefix
}
```
Structure of the vpc_config output:

```hcl
vpc_config = {
  az_count         = 2
  global_sg_all    = "sg-xxxxxxxx"
  global_sg_lm_web = "sg-xxxxxxxx"
  subnets = {
    level4 = {
      app = [
        "subnet-xxxxxxxx",
        "subnet-xxxxxxxx",
      ]
      db = [
        "subnet-xxxxxxxx",
        "subnet-xxxxxxxx",
      ]
      elbpriv = [
        "subnet-xxxxxxxx",
        "subnet-xxxxxxxx",
      ]
      elbpub = [
        "subnet-xxxxxxxx",
        "subnet-xxxxxxxx",
      ]
    }
    standard = {
      app = [
        "subnet-xxxxxxxx",
        "subnet-xxxxxxxx",
      ]
      db = [
        "subnet-xxxxxxxx",
        "subnet-xxxxxxxx",
      ]
      elbpriv = [
        "subnet-xxxxxxxx",
        "subnet-xxxxxxxx",
      ]
      elbpub = [
        "subnet-xxxxxxxx",
        "subnet-xxxxxxxx",
      ]
    }
  }
  vpc_id = "vpc-xxxxxxxx"
}
```
Example usage when calling the AWS Metadata Module:

```hcl
# Retrieve the whole VPC config and the VPC ID from the metadata module
vpc_config = module.metadata.vpc_config
vpc_id     = module.metadata.vpc_config.vpc_id

# Retrieve the AZ count and global SGs from the metadata module
az_count         = module.metadata.vpc_config.az_count
global_sg_all    = module.metadata.vpc_config.global_sg_all
global_sg_lm_web = module.metadata.vpc_config.global_sg_lm_web

# NOTE: The below subnet calls are meant as examples -- please feel free to use
# whichever method works best for your use case and delete the ones you don't need.

# Retrieve the standard subnets directly from the metadata module
standard_app_subnet     = module.metadata.vpc_config.subnets.standard.app
standard_db_subnet      = module.metadata.vpc_config.subnets.standard.db
standard_elbpriv_subnet = module.metadata.vpc_config.subnets.standard.elbpriv
standard_elbpub_subnet  = module.metadata.vpc_config.subnets.standard.elbpub

# Retrieve the level4 subnets directly from the metadata module
level4_app_subnet     = module.metadata.vpc_config.subnets.level4.app
level4_db_subnet      = module.metadata.vpc_config.subnets.level4.db
level4_elbpriv_subnet = module.metadata.vpc_config.subnets.level4.elbpriv
level4_elbpub_subnet  = module.metadata.vpc_config.subnets.level4.elbpub

# Retrieve the subnet IDs more dynamically using the product context variable
elbpub_subnet  = module.metadata.vpc_config.subnets[var.product_context].elbpub
elbpriv_subnet = module.metadata.vpc_config.subnets[var.product_context].elbpriv
app_subnet     = module.metadata.vpc_config.subnets[var.product_context].app
db_subnet      = module.metadata.vpc_config.subnets[var.product_context].db

# Retrieve the subnet IDs of a single tier via the tier variable. This is useful
# if you want to create multiple resources in the same tier.
tier_subnet = module.metadata.vpc_config.subnets[var.product_context][var.tier]
```
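Tying this together, a hypothetical resource consuming these lookups might look like the following sketch (the aws_instance resource and var.ami_id are illustrative and not part of this template):

```hcl
# Hypothetical usage -- resource and variable names are illustrative.
resource "aws_instance" "app" {
  ami           = var.ami_id # assumed variable, not defined in this template
  instance_type = "t3.micro"
  # Place the instance in the first app subnet for the product's security context:
  subnet_id = module.metadata.vpc_config.subnets[var.product_context].app[0]
  tags      = module.constants.default_tags
}
```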
Module call:

```hcl
module "constants" {
  source  = "artifactory.huit.harvard.edu/cloudarch-terraform-virtual__aws-modules/aws_constants/aws"
  version = ">= v1.0.7" # This is just to ensure the latest version will be pulled when the app is first set up.
                        # We recommend updating this so that only patch version upgrades are pulled in the future.

  # Product Context Variables
  product_name              = var.product_name
  product_name_short        = var.product_name_short
  product_environment       = var.product_environment
  product_environment_short = var.product_environment_short
  product_asset_id          = var.product_asset_id
  product_context           = var.product_context
}
```
Structure of the values output:

```hcl
values = {
  "default_region" = "us-east-1"
  "default_tags" = {
    "backup_policy" = "11PM_DAILY"
    "criticality"   = "Non-Critical"
    "data_class"    = "nonlevel4"
    "environment"   = "dev"
    "hosted_by"     = "Not-Defined"
    "huit_assetid"  = -1
    "managed_by"    = "Terraform"
    "product"       = "terraform"
  }
}
```
Structure of the default_tags output:

```hcl
default_tags = {
  "backup_policy" = "11PM_DAILY"
  "criticality"   = "Non-Critical"
  "data_class"    = "nonlevel4"
  "environment"   = "dev"
  "hosted_by"     = "Not-Defined"
  "huit_assetid"  = -1
  "managed_by"    = "Terraform"
  "product"       = "terraform"
}
```
This repository includes active and sample GitHub Actions workflows. The following section contains information on how to configure and use them.
There are two workflows included in this base repository: build_app.yaml and destroy_app.yaml, under .github/workflows/. Each is triggered off of the workflow_dispatch event, meaning you have to trigger it manually from the Actions tab at the top of your repo. Both require an input for the environment_name field to control which environment the workflow executes against. They are meant to be used as examples and will likely need to be updated to tailor them to your Terraform configuration.
Additional sample workflows are provided under the .github/workflows/sample_workflows/ folder; these operate in a more automated fashion than the workflows mentioned in the previous paragraph, and offer alternative approaches for deploying and managing AWS infrastructure. Information on the sample GitHub Actions can be found here.
Prerequisites: You will need a service account in AWS, with credentials provisioned and stored correctly in an environment in your repo, as follows:
- Create a new Environment in your repository settings to store the required secrets, detailed in the table below.

| GitHub Environment Secrets | Expected Content |
|---|---|
| AWS_ASSUMED_ROLE | Obtained from AWS IAM |

- At the repository level, create the following Repository variables:

| GitHub Repository Variables | Expected Content |
|---|---|
| AWS_REGION | us-east-1 |
- In order for any GitHub Action workflow to run, you'll need to enable GitHub Actions under your repository settings in Actions --> General.
- If needed/desired, update the input variables at the top of the workflow to match your needs.
- Expose the necessary Terraform variables to the necessary workflows/jobs/steps as described within the workflow files.
- Consult the following documents for more information:
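As one sketch of the variable-exposure step above, Terraform variables can be passed to a workflow step via the standard TF_VAR_ environment-variable convention (the step name, values, and paths below are illustrative, not taken from the provided workflows):

```yaml
# Illustrative GitHub Actions step -- names and paths are placeholders.
- name: Terraform plan
  env:
    AWS_REGION: ${{ vars.AWS_REGION }}
    TF_VAR_product_environment_short: dev
  run: terraform plan -var-file=env/dev/terraform.tfvars
```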
A script called tflocal has been included for convenience, if the need arises to execute your Terraform configuration locally. To use the script:

Usage:

```shell
./tflocal <env> terraform <terraform operation + options>
```

Note: This script will run any command that comes after the <env> argument.

Examples:

```shell
./tflocal <env> terraform plan
./tflocal <env> terraform init -reconfigure
```
Requirements:

| Name | Version |
|---|---|
| terraform | >= 1.12.0 |
| aws | >= 5.0 |
| null | >= 3.2.2 |

Providers:

| Name | Version |
|---|---|
| null | >= 3.2.2 |

Modules:

| Name | Source | Version |
|---|---|---|
| constants | artifactory.huit.harvard.edu/cloudarch-terraform-virtual__aws-modules/aws_constants/aws | >= v1.0.21 |
| metadata | artifactory.huit.harvard.edu/cloudarch-terraform-virtual__aws-modules/aws_metadata/aws | >= v2.0.9 |

Resources:

| Name | Type |
|---|---|
| null_resource.do_nothing | resource |
Inputs:

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| composite_name | Used as an override to support pre-existing resource names that don't fit the new naming scheme. | string | "" | no |
| product_asset_id | The HUIT asset id associated with this product. This value must be procured before stack creation in order to ensure accurate billing. | number | -1 | no |
| product_context | The security context applied to this product. This determines which subnets CloudFormation will deploy the product into and whether it creates various resources to meet a given context's compliance requirements, e.g. KMS. | string | "standard" | no |
| product_criticality | (Optional) The criticality of this product. This value must correspond with the product rating as listed in ServiceNow -> HUIT Config -> Application. The default is 'Non-Critical'. | string | "Non-Critical" | no |
| product_data_class | (Optional) The data class of the data stored in this product. This value is generally determined by subnet placement, e.g. in a level4 subnet. | string | "nonlevel4" | no |
| product_environment | Which stage of development this stack is being provisioned for (long form, e.g. 'Development'). | string | n/a | yes |
| product_environment_short | Which stage of development this stack is being provisioned for (short form, e.g. 'dev'). | string | n/a | yes |
| product_hosted_by | (Optional) The HUIT hosted-by group. This is used by reporting and operations, e.g. DevOps-APT12, DevOps-APT3, etc. This value must be set if managed by DevOps. The default is 'Not-Defined'. | string | "Not-Defined" | no |
| product_name | Identifies this project. This name is used within various resources' 'Name' tag, DNS entries, to determine S3 locations, and so on. | string | n/a | yes |
| product_name_short | Identifies this project. This short name is used within various resources' 'Name' tag, DNS entries, to determine S3 locations, and so on. | string | n/a | yes |
| shared_values_prefix | The prefix to use for the SharedValues CloudFormation imports. The default "SharedValues" value will utilize the CS1 SharedValues exports. The following examples show how to use a CS2 SharedValues export: SharedValues-cloudhacks-dev, SharedValues-hup-prod | string | n/a | yes |
| tier | The application tier being deployed, e.g. app, db, lb_pub, or lb_priv. | string | "app" | no |
Outputs:

| Name | Description |
|---|---|
| vpc_config | n/a |