Merge pull request #40 from kunduso/verify-access
Create AWS cloud resources to access the ElastiCache cluster from Amazon EC2 instances.
kunduso authored Dec 13, 2023
2 parents 1ab35de + 2e6c5a5 commit 75aeafe
Showing 7 changed files with 292 additions and 9 deletions.
26 changes: 18 additions & 8 deletions README.md
@@ -3,18 +3,28 @@
[![terraform-infra-provisioning](https://github.com/kunduso/amazon-elasticache-redis-tf/actions/workflows/terraform.yml/badge.svg?branch=main)](https://github.com/kunduso/amazon-elasticache-redis-tf/actions/workflows/terraform.yml)[![checkov-static-analysis-scan](https://github.com/kunduso/amazon-elasticache-redis-tf/actions/workflows/code-scan.yml/badge.svg?branch=main)](https://github.com/kunduso/amazon-elasticache-redis-tf/actions/workflows/code-scan.yml)


![Image](https://skdevops.files.wordpress.com/2023/10/85-image-0-1.png)
![Image](https://skdevops.files.wordpress.com/2023/12/87-image-0-1.png)
# Motivation
Amazon ElastiCache service supports Redis and Memcached. If you want an in-memory caching solution for your application, check out the [AWS-Docs](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html). In this repository I cover **two use cases.**

<br />**Use-Case 1:** Create an Amazon ElastiCache for Redis cluster using Terraform, and
<br />**Use-Case 2:** Create an Amazon ElastiCache for Redis cluster and Amazon EC2 instances to access the cluster using Terraform.

<br />If you are interested in Use-case 1, please refer to the [create-amazon-elasticache branch.](https://github.com/kunduso/amazon-elasticache-redis-tf/tree/create-amazon-elasticache)

For Use-case 2, this repository has the Terraform code to provision an Amazon ElastiCache for Redis cluster and all the supporting infrastructure components like Amazon VPC, subnets, security group, AWS KMS key, and AWS Secrets Manager secret. It also has additional AWS cloud resources like:
<br />- an **internet gateway** and a route to it in the route table attached to the **public subnet**
<br />- an **IAM instance profile** with an attached **IAM role** that carries the two existing **IAM policies** granting read access to the **SSM Parameter Store** and **AWS Secrets Manager**, which hold the ElastiCache endpoint and auth_token created in Use-case 1
<br />- two **Amazon EC2 instances** in the public subnet with separate user data scripts that install **Python libraries** and create Python files inside the instances.
<br />The process of provisioning is automated using **GitHub Actions**.
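To make the flow above concrete: at boot, each instance resolves the cluster endpoint and port from the SSM Parameter Store and the auth token from Secrets Manager before connecting. Here is a minimal Python sketch of that lookup, mirroring what the user data scripts do; the parameter and secret names are passed in as arguments because the real names are injected into the templates by Terraform.

```python
def build_startup_node(endpoint_param, port_param):
    """Shape two SSM get_parameter responses into the node dict that
    redis-py-cluster expects as a startup node."""
    return {
        "host": endpoint_param["Parameter"]["Value"],
        "port": port_param["Parameter"]["Value"],
    }

def fetch_connection_config(region, ep_name, port_name, secret_name):
    """Read the endpoint/port from SSM Parameter Store and the auth token
    from Secrets Manager, as the user data scripts do on the instances."""
    import boto3  # imported lazily so build_startup_node works without AWS access

    session = boto3.Session(region_name=region)
    ssm = session.client("ssm")
    endpoint = ssm.get_parameter(Name=ep_name, WithDecryption=True)
    port = ssm.get_parameter(Name=port_name, WithDecryption=True)
    secret = session.client("secretsmanager").get_secret_value(SecretId=secret_name)
    return build_startup_node(endpoint, port), secret["SecretString"]
```

The node dict and token are then handed to `RedisCluster(startup_nodes=[node], password=token, ssl=True, ...)`, as shown in the user data templates below.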

<br />I discussed the concept in detail in my notes at [-Connect to an Amazon ElastiCache cluster from an Amazon EC2 instance using Python](https://skundunotes.com/2023/12/13/connect-to-an-amazon-elasticache-cluster-from-an-amazon-ec2-instance-using-python/).

<br />I used **Bridgecrew Checkov** to scan the Terraform code for security vulnerabilities. Here is a link if you are interested in adding code scanning capabilities to your GitHub Actions pipeline [-automate-terraform-configuration-scan-with-checkov-and-github-actions](https://skundunotes.com/2023/04/12/automate-terraform-configuration-scan-with-checkov-and-github-actions/).
<br />I also used **Infracost** to generate a cost estimate of building the architecture. If you want to learn more about adding Infracost estimates to your repository, head over to this note [-estimate AWS Cloud resource cost with Infracost, Terraform, and GitHub Actions](https://skundunotes.com/2023/07/17/estimate-aws-cloud-resource-cost-with-infracost-terraform-and-github-actions/).
<br />Lastly, I also automated the process of provisioning the resources using **GitHub Actions** pipeline and I discussed that in detail at [-CI-CD with Terraform and GitHub Actions to deploy to AWS](https://skundunotes.com/2023/03/07/ci-cd-with-terraform-and-github-actions-to-deploy-to-aws/).
## Prerequisites
For this code to function without errors, I created an **OpenID connect** identity provider in **Amazon Identity and Access Management** that has a trust relationship with this GitHub repository. You can read about it [here](https://skundunotes.com/2023/02/28/securely-integrate-aws-credentials-with-github-actions-using-openid-connect/) to get a detailed explanation with steps.
<br />I stored the ARN of the IAM Role as a GitHub secret, which is referenced in the [`terraform.yml`](https://github.com/kunduso/amazon-elasticache-redis-tf/blob/eb148db2b9ff37cff9f1fb469d0c14b6479bd57a/.github/workflows/terraform.yml#L42) file.
<br />Since I used Infracost in this repository, I stored the `INFRACOST_API_KEY` as a repository secret. It is referenced in the [`terraform.yml`](https://github.com/kunduso/amazon-elasticache-redis-tf/blob/eb148db2b9ff37cff9f1fb469d0c14b6479bd57a/.github/workflows/terraform.yml#L52) GitHub actions workflow file.
<br />As part of the Infracost integration, I also managed the cost estimate process with a GitHub Actions variable `INFRACOST_SCAN_TYPE`, where the value is either `hcl_code` or `tf_plan`, depending on the type of scan desired.
101 changes: 101 additions & 0 deletions ec2.tf
@@ -0,0 +1,101 @@
resource "aws_internet_gateway" "this-igw" {
  vpc_id = aws_vpc.this.id
  tags = {
    "Name" = "app-4-gateway"
  }
}
resource "aws_route" "internet-route" {
  destination_cidr_block = "0.0.0.0/0"
  route_table_id         = aws_route_table.public.id
  gateway_id             = aws_internet_gateway.this-igw.id
}
# Create a security group
resource "aws_security_group" "ec2_instance" {
  name        = "app-4-ec2"
  description = "Allow inbound to and outbound access from the Amazon EC2 instance."
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [var.vpc_cidr]
    description = "Enable access from any resource inside the VPC."
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Enable access to the internet."
  }
  vpc_id = aws_vpc.this.id
}

# Create EC2 instances in a public subnet
data "aws_ami" "amazon_ami" {
  filter {
    name   = "name"
    values = var.ami_name
  }
  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
  most_recent = true
  owners      = ["amazon"]
}
resource "aws_instance" "app-server-read" {
  instance_type               = var.instance_type
  ami                         = data.aws_ami.amazon_ami.id
  vpc_security_group_ids      = [aws_security_group.ec2_instance.id]
  iam_instance_profile        = aws_iam_instance_profile.ec2_profile.name
  associate_public_ip_address = true
  #checkov:skip=CKV_AWS_88: Required for Session Manager access
  subnet_id     = aws_subnet.public[0].id
  ebs_optimized = true
  monitoring    = true
  root_block_device {
    encrypted = true
  }
  metadata_options {
    http_endpoint = "enabled"
    http_tokens   = "required"
  }
  tags = {
    Name = "app-4-server-read"
  }
  user_data = templatefile("user_data/read_elasticache.tpl",
    {
      Region                 = var.region,
      elasticache_ep         = aws_ssm_parameter.elasticache_ep.name,
      elasticache_ep_port    = aws_ssm_parameter.elasticache_port.name,
      elasticache_auth_token = aws_secretsmanager_secret.elasticache_auth.name
  })
}
resource "aws_instance" "app-server-write" {
  instance_type               = var.instance_type
  ami                         = data.aws_ami.amazon_ami.id
  vpc_security_group_ids      = [aws_security_group.ec2_instance.id]
  iam_instance_profile        = aws_iam_instance_profile.ec2_profile.name
  associate_public_ip_address = true
  #checkov:skip=CKV_AWS_88: Required for Session Manager access
  subnet_id     = aws_subnet.public[0].id
  ebs_optimized = true
  monitoring    = true
  root_block_device {
    encrypted = true
  }
  metadata_options {
    http_endpoint = "enabled"
    http_tokens   = "required"
  }
  tags = {
    Name = "app-4-server-write"
  }
  user_data = templatefile("user_data/write_elasticache.tpl",
    {
      Region                 = var.region,
      elasticache_ep         = aws_ssm_parameter.elasticache_ep.name,
      elasticache_ep_port    = aws_ssm_parameter.elasticache_port.name,
      elasticache_auth_token = aws_secretsmanager_secret.elasticache_auth.name
  })
}
41 changes: 41 additions & 0 deletions ec2_role.tf
@@ -0,0 +1,41 @@
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role
resource "aws_iam_role" "ec2_role" {
  name = "app-4-ec2-role"

  # Terraform's "jsonencode" function converts a
  # Terraform expression result to valid JSON syntax.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })
}
# Attach policies to the role
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment
resource "aws_iam_role_policy_attachment" "custom" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_role_policy_attachment" "ssm_policy_attachment" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = aws_iam_policy.ssm_parameter_policy.arn
}
resource "aws_iam_role_policy_attachment" "secret_policy_attachment" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = aws_iam_policy.secret_manager_policy.arn
}
# Attach the role to an instance profile
#https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_instance_profile
resource "aws_iam_instance_profile" "ec2_profile" {
  name = "app-4-ec2-profile"
  role = aws_iam_role.ec2_role.name
}
1 change: 1 addition & 0 deletions random.tf
@@ -1,4 +1,5 @@
#https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html#auth-overview
#https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password
resource "random_password" "auth" {
  length  = 128
  special = true
58 changes: 58 additions & 0 deletions user_data/read_elasticache.tpl
@@ -0,0 +1,58 @@
#!/bin/bash
yum update -y
yum install python-pip -y
yum install python3 -y
pip3 install redis-py-cluster
pip3 install boto3
pip3 install botocore
echo "The region value is ${Region}"
AWS_REGION=${Region}
local_elasticache_ep=${elasticache_ep}
local_auth_token=${elasticache_auth_token}
local_elasticache_ep_port=${elasticache_ep_port}
cat <<EOF >> /var/read_cache.py
from rediscluster import RedisCluster
from botocore.exceptions import ClientError
import logging
import boto3

def main():
    session = boto3.Session(region_name='$AWS_REGION')
    auth_token = get_secret(session)
    elasticache_endpoint = get_elasticache_endpoint(session)
    elasticache_port = get_elasticache_port(session)
    read_from_redis_cluster(elasticache_endpoint, elasticache_port, auth_token)

def get_secret(session):
    secret_client = session.client('secretsmanager')
    try:
        get_secret_value_response = secret_client.get_secret_value(
            SecretId='$local_auth_token'
        )
    except ClientError as e:
        raise e
    return get_secret_value_response

def get_elasticache_endpoint(session):
    ssm_client = session.client('ssm')
    return ssm_client.get_parameter(
        Name='$local_elasticache_ep', WithDecryption=True)

def get_elasticache_port(session):
    ssm_client = session.client('ssm')
    return ssm_client.get_parameter(
        Name='$local_elasticache_ep_port', WithDecryption=True)

def read_from_redis_cluster(endpoint, port, auth):
    logging.basicConfig(level=logging.INFO)
    redis = RedisCluster(startup_nodes=[{
        "host": endpoint['Parameter']['Value'],
        "port": port['Parameter']['Value']}],
        decode_responses=True, skip_full_coverage_check=True, ssl=True,
        password=auth['SecretString'])
    if redis.ping():
        logging.info("Connected to Redis")
    print("The city name entered is " + redis.get("City"))
    redis.close()

main()
EOF
64 changes: 64 additions & 0 deletions user_data/write_elasticache.tpl
@@ -0,0 +1,64 @@
#!/bin/bash
yum update -y
yum install python-pip -y
yum install python3 -y
pip3 install redis-py-cluster
pip3 install boto3
pip3 install botocore
echo "The region value is ${Region}"
AWS_REGION=${Region}
local_elasticache_ep=${elasticache_ep}
local_auth_token=${elasticache_auth_token}
local_elasticache_ep_port=${elasticache_ep_port}
cat <<EOF >> /var/write_cache.py
from rediscluster import RedisCluster
from botocore.exceptions import ClientError
import logging
import boto3

def main():
    CityName = input("Enter a City Name: ")
    session = boto3.Session(region_name='$AWS_REGION')
    auth_token = get_secret(session)
    elasticache_endpoint = get_elasticache_endpoint(session)
    elasticache_port = get_elasticache_port(session)
    write_into_redis_cluster(
        elasticache_endpoint,
        elasticache_port,
        auth_token,
        CityName)

def get_secret(session):
    secret_client = session.client('secretsmanager')
    try:
        get_secret_value_response = secret_client.get_secret_value(
            SecretId='$local_auth_token'
        )
    except ClientError as e:
        raise e
    return get_secret_value_response

def get_elasticache_endpoint(session):
    ssm_client = session.client('ssm')
    return ssm_client.get_parameter(
        Name='$local_elasticache_ep', WithDecryption=True)

def get_elasticache_port(session):
    ssm_client = session.client('ssm')
    return ssm_client.get_parameter(
        Name='$local_elasticache_ep_port', WithDecryption=True)

def write_into_redis_cluster(endpoint, port, auth, cityname):
    logging.basicConfig(level=logging.INFO)
    redis = RedisCluster(startup_nodes=[{
        "host": endpoint['Parameter']['Value'],
        "port": port['Parameter']['Value']}],
        decode_responses=True, skip_full_coverage_check=True, ssl=True,
        password=auth['SecretString'])
    if redis.ping():
        logging.info("Connected to Redis")
    redis.set('City', cityname)
    print("The city name entered is updated in the Redis cache cluster.")
    redis.close()

main()
EOF
10 changes: 9 additions & 1 deletion variable.tf
@@ -32,7 +32,15 @@ variable "subnet_cidr_public" {
  default = ["10.20.32.96/27"]
  type    = list(any)
}

variable "ami_name" {
  description = "The AMI name of the image from which the instances are created."
  default     = ["amzn2-ami-amd-hvm-2.0.20230727.0-x86_64-gp2"]
  type        = list(string)
}
variable "instance_type" {
  description = "The instance type of the EC2 instances."
  default     = "t3.medium"
  type        = string
}
variable "replication_group_id" {
description = "The name of the ElastiCache replication group."
default = "app-4-redis-cluster"
