This Terraform project provisions a 3-tier VPC architecture with a private EKS cluster, supporting multi-AZ subnets, NAT gateways, an Internet Gateway, Elastic IPs, and the IAM roles, IRSA, and RBAC mappings needed for secure Kubernetes access.
- CIDR Block: `10.10.0.0/16`
- Public Subnets (for NAT Gateway / IGW)
- Private Subnets (App/EKS)
- DB Subnets (optional or future use)
- Spread across 3 Availability Zones.
- Internet Gateway (attached to the VPC; routed from the public subnets)
- NAT Gateway (1 per AZ for high availability)
- Elastic IPs (for NAT)
- VPC Peering (optional: to Default VPC for access)
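The NAT-per-AZ pattern above can be sketched in Terraform roughly as follows; the resource names and variables (`var.azs`, `aws_subnet.public`, `aws_internet_gateway.this`) are illustrative assumptions, not this repo's actual module code:

```hcl
# Illustrative sketch: one Elastic IP + NAT Gateway per AZ,
# each placed in that AZ's public subnet.
resource "aws_eip" "nat" {
  count  = length(var.azs)
  domain = "vpc"
}

resource "aws_nat_gateway" "this" {
  count         = length(var.azs)
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id

  # NAT Gateways need the IGW to exist before they can reach the internet.
  depends_on = [aws_internet_gateway.this]
}
```

Private subnets in each AZ then route `0.0.0.0/0` to their local NAT Gateway, so the loss of one AZ does not take down egress for the others.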
Update your kubeconfig once the cluster is provisioned:

```shell
aws eks update-kubeconfig --name dev-eks-cluster
```
- EKS Cluster Role and Node IAM Role created
- RBAC via the `aws-auth` ConfigMap; `system:masters` access granted to selected IAM roles
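Managing the `aws-auth` mapping from Terraform might look like the sketch below; the role ARN is a hypothetical placeholder and the Kubernetes provider configuration is assumed to exist:

```hcl
# Illustrative sketch: map an IAM role into the system:masters group
# via the aws-auth ConfigMap in kube-system.
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::123456789012:role/dev-eks-admin" # hypothetical role
        username = "admin"
        groups   = ["system:masters"]
      }
    ])
  }

  # Take ownership of fields EKS created when it first wrote the ConfigMap.
  force = true
}
```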
The following add-ons can be deployed via Terraform or kubectl after cluster provisioning:
- Amazon VPC CNI
- CoreDNS
- kube-proxy
- EBS CSI Driver
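Deployed via Terraform, the add-ons above could be declared as EKS managed add-ons along these lines; the cluster name matches the kubeconfig command earlier, while the IRSA role ARN for the EBS CSI driver is a hypothetical placeholder:

```hcl
# Illustrative sketch: the four add-ons as EKS managed add-ons.
resource "aws_eks_addon" "vpc_cni" {
  cluster_name = "dev-eks-cluster"
  addon_name   = "vpc-cni"
}

resource "aws_eks_addon" "coredns" {
  cluster_name = "dev-eks-cluster"
  addon_name   = "coredns"
}

resource "aws_eks_addon" "kube_proxy" {
  cluster_name = "dev-eks-cluster"
  addon_name   = "kube-proxy"
}

resource "aws_eks_addon" "ebs_csi" {
  cluster_name = "dev-eks-cluster"
  addon_name   = "aws-ebs-csi-driver"
  # The EBS CSI driver needs an IRSA role; this ARN is a placeholder.
  service_account_role_arn = "arn:aws:iam::123456789012:role/dev-ebs-csi-irsa"
}
```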
To access private EKS cluster:
- You can connect from another VPC (such as the default VPC) if peering is enabled and routing is configured on both sides
- A public EC2 bastion is not required when you access the cluster from the peered VPC
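A minimal sketch of that peering setup, assuming this project's VPC is `aws_vpc.this` and using placeholder IDs for the default VPC and its route table:

```hcl
# Illustrative sketch: peer the default VPC with the EKS VPC.
resource "aws_vpc_peering_connection" "default_to_eks" {
  vpc_id      = "vpc-0default0000000000"  # hypothetical default VPC ID
  peer_vpc_id = aws_vpc.this.id
  auto_accept = true # both VPCs are in the same account and region
}

# Routing must exist on BOTH sides for traffic to flow; this is the
# default VPC's side, pointing at this project's 10.10.0.0/16 CIDR.
resource "aws_route" "to_eks_vpc" {
  route_table_id            = "rtb-0default0000000000" # hypothetical
  destination_cidr_block    = "10.10.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.default_to_eks.id
}
```

Remember that the EKS cluster's security group must also allow inbound 443 from the peered VPC's CIDR, or `kubectl` will time out even with correct routes.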
To safely cancel a Terraform run:
- Press `Ctrl+C` once and let Terraform stop gracefully (a second interrupt can leave the state inconsistent)
- Then, to clean up any partially created resources, run:

```shell
terraform destroy -var-file=env-dev/main.tfvars
```
To provision:

```shell
terraform init -backend-config=env-dev/state.tfvars
terraform plan -var-file=env-dev/main.tfvars
terraform apply -var-file=env-dev/main.tfvars
```
To destroy:

```shell
terraform destroy -var-file=env-dev/main.tfvars
```
```
database-infra/
├── ansible/
│   ├── roles/
│   │   ├── mongo/
│   │   ├── common/
│   │   ├── mysql/
│   │   ├── rabbitmq/
│   │   ├── redis/
│   │   ├── vault/
│   │   └── grafana/
│   └── playbook.yml
├── env-dev/
│   ├── main.tfvars    # Environment-specific input variables
│   └── state.tfvars   # Backend config for storing state remotely (e.g., in S3)
├── modules/
│   ├── dns/
│   │   ├── main.tf
│   │   └── variables.tf
│   ├── iam-rule/
│   │   ├── main.tf
│   │   ├── output.tf
│   │   └── variables.tf
│   ├── security-group/
│   │   ├── data.tf
│   │   ├── main.tf
│   │   ├── output.tf
│   │   └── variables.tf
│   └── ec2-instance/
│       ├── data.tf
│       ├── main.tf
│       ├── output.tf
│       └── variables.tf
├── main.tf
├── variables.tf
├── outputs.tf
└── README.md
```
```
eks-aws/
├── env-dev/
│   ├── main.tfvars    # Environment-specific input variables
│   └── state.tfvars   # Backend config for storing state remotely (e.g., in S3)
├── modules/
│   └── eks-iam-access/
│       ├── data.tf
│       ├── main.tf
│       ├── output.tf
│       └── variables.tf
├── main.tf
├── variables.tf
├── outputs.tf
└── README.md
```
```
vpc-eks-aws-infra/
├── env-dev/
│   ├── main.tfvars    # Environment-specific input variables
│   └── state.tfvars   # Backend config for storing state remotely (e.g., in S3)
├── modules/
│   └── vpc/
│       ├── igw.tf
│       ├── ngw.tf
│       ├── route-tables.tf
│       ├── subnet.tf
│       ├── vpc.tf
│       └── variables.tf
├── main.tf
├── variables.tf
├── outputs.tf
└── README.md
```
- Ensure your runner/EC2 has an IAM role with admin permissions.
- Check subnet AZ distribution; EKS requires subnets in at least 2 AZs.
- EKS authentication (IAM identity) and authorization (Kubernetes RBAC via the `aws-auth` ConfigMap) are two separate layers; a valid IAM identity with no `aws-auth` mapping still gets `Unauthorized` from the API server.
- Add-on delays are often caused by networking issues (e.g., missing NAT or IAM policy for IRSA).
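Since missing IRSA wiring is a common cause of add-on delays, here is a rough sketch of what that wiring involves; the cluster name, role name, service account, and thumbprint are illustrative assumptions:

```hcl
# Illustrative sketch: OIDC provider for the cluster plus an IAM role
# that the EBS CSI driver's service account can assume (IRSA).
data "aws_eks_cluster" "this" {
  name = "dev-eks-cluster"
}

resource "aws_iam_openid_connect_provider" "eks" {
  url             = data.aws_eks_cluster.this.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["9e99a48a9960b14926bb7f3b02e22da2b0ab7280"] # example thumbprint
}

locals {
  oidc_path = replace(data.aws_eks_cluster.this.identity[0].oidc[0].issuer, "https://", "")
}

resource "aws_iam_role" "ebs_csi_irsa" {
  name = "dev-ebs-csi-irsa" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.eks.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          # Restrict the role to the EBS CSI controller's service account.
          "${local.oidc_path}:sub" = "system:serviceaccount:kube-system:ebs-csi-controller-sa"
        }
      }
    }]
  })
}
```

If an add-on hangs in `CREATING`, checking whether its service account's role trust policy matches the cluster's actual OIDC issuer is a good first step.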
Feel free to open issues or raise discussions for improvements or bugs.
Licensed under the MIT License.