Move GKE node group modules into main terraform project (#87)
lumberbaron authored Nov 28, 2024
1 parent 43c6bd8 commit 2e9730d
Showing 14 changed files with 366 additions and 261 deletions.
12 changes: 9 additions & 3 deletions gke/README.md
@@ -161,10 +161,16 @@ Create a Storage Class with these recommended settings:
kubectl apply -f kubernetes/storage-class.yaml
```

## Breaking Changes
## Changelog

### v1 to v2
### v2

The v2 version of this terraform project introduces a breaking change in the way that the secondary CIDRs are configured for services and pods in the clusters. In the v1 project, this was done via the cluster itself but there are limitations in the size of the CIDRs that make it impossible to run very small GKE clusters. The v2 project updates this to create secondary ranges directly in the cluster's subnetwork, which provides the flexibility to tailor the ranges to support smaller clusters that are more efficient in their use of IPs.
#### Breaking Changes

The v2 version of this Terraform project introduces a breaking change in the way that the secondary CIDRs are configured for services and pods in the clusters. In the v1 project, this was done via the cluster itself but there are limitations in the size of the CIDRs that make it impossible to run very small GKE clusters. The v2 project updates this to create secondary ranges directly in the cluster's subnetwork, which provides the flexibility to tailor the ranges to support smaller clusters that are more efficient in their use of IPs.
The impact of this change is that v1-based clusters cannot be migrated easily to v2 clusters.
#### Other Changes

The v2 version of this Terraform project has moved the use of the node pool modules from the cluster module to the main project.
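For illustration, the v2 pattern of defining secondary ranges on the subnetwork can be sketched as follows. This is a hypothetical minimal example, not this project's actual configuration: the resource names, range names, and CIDRs are illustrative. The point is that each secondary range is declared and sized independently, so small clusters can use small ranges.

```hcl
resource "google_compute_network" "main" {
  name                    = "gke-network" # illustrative name
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "cluster" {
  name          = "gke-cluster-subnet" # illustrative name
  network       = google_compute_network.main.id
  region        = "us-central1"
  ip_cidr_range = "10.0.0.0/24"

  # Each secondary range is sized independently of the cluster,
  # unlike the v1 approach where the cluster dictated the sizes.
  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.1.0.0/24" # a small range suffices for services
  }

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.2.0.0/22" # system (default) node pool pods
  }

  secondary_ip_range {
    range_name    = "messaging-pods"
    ip_cidr_range = "10.3.0.0/23" # messaging node pool pods
  }
}
```

The cluster and node pools then reference these ranges by name, which is why v1 clusters (whose ranges were created implicitly by the cluster) cannot be migrated in place.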
13 changes: 11 additions & 2 deletions gke/terraform/README.md
@@ -8,7 +8,9 @@

## Providers

No providers.
| Name | Version |
|------|---------|
| <a name="provider_google"></a> [google](#provider\_google) | ~> 6.0 |

## Modules

@@ -17,10 +19,17 @@ No providers.
| <a name="module_bastion"></a> [bastion](#module\_bastion) | ./modules/bastion | n/a |
| <a name="module_cluster"></a> [cluster](#module\_cluster) | ./modules/cluster | n/a |
| <a name="module_network"></a> [network](#module\_network) | ./modules/network | n/a |
| <a name="module_node_pool_monitoring"></a> [node\_pool\_monitoring](#module\_node\_pool\_monitoring) | ./modules/broker-node-pool | n/a |
| <a name="module_node_pool_prod100k"></a> [node\_pool\_prod100k](#module\_node\_pool\_prod100k) | ./modules/broker-node-pool | n/a |
| <a name="module_node_pool_prod10k"></a> [node\_pool\_prod10k](#module\_node\_pool\_prod10k) | ./modules/broker-node-pool | n/a |
| <a name="module_node_pool_prod1k"></a> [node\_pool\_prod1k](#module\_node\_pool\_prod1k) | ./modules/broker-node-pool | n/a |
| <a name="module_node_pool_system"></a> [node\_pool\_system](#module\_node\_pool\_system) | ./modules/system-node-pool | n/a |

## Resources

No resources.
| Name | Type |
|------|------|
| [google_compute_zones.available](https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/compute_zones) | data source |

## Inputs

182 changes: 175 additions & 7 deletions gke/terraform/main.tf
@@ -48,17 +48,185 @@ module "cluster" {

master_ipv4_cidr_block = var.master_ipv4_cidr_block

max_pods_per_node_system = var.max_pods_per_node_system
max_pods_per_node_messaging = var.max_pods_per_node_messaging
node_pool_max_size = var.node_pool_max_size

kubernetes_api_public_access = var.kubernetes_api_public_access
kubernetes_api_authorized_networks = var.create_bastion && var.create_network ? concat(var.kubernetes_api_authorized_networks, [var.network_cidr_range]) : var.kubernetes_api_authorized_networks

network_name = var.create_network ? module.network.network_name : var.network_name
subnetwork_name = var.create_network ? module.network.subnetwork_name : var.subnetwork_name

secondary_range_name_services = var.create_network ? module.network.secondary_range_name_services : var.secondary_range_name_services
secondary_range_name_pods = var.create_network ? module.network.secondary_cidr_range_name_pods : var.secondary_range_name_pods
secondary_range_name_messaging_pods = var.create_network ? module.network.secondary_range_name_messaging_pods : var.secondary_range_name_messaging_pods
secondary_range_name_services = var.create_network ? module.network.secondary_range_name_services : var.secondary_range_name_services
secondary_range_name_pods = var.create_network ? module.network.secondary_cidr_range_name_pods : var.secondary_range_name_pods
}

################################################################################
# Node Pools
################################################################################

locals {
system_machine_type = "n2-standard-2"
prod1k_machine_type = "n2-highmem-2"
prod10k_machine_type = "n2-highmem-4"
prod100k_machine_type = "n2-highmem-8"
monitoring_machine_type = "e2-standard-2"
}

data "google_compute_zones" "available" {}

module "node_pool_system" {
source = "./modules/system-node-pool"

region = var.region
cluster_name = module.cluster.cluster_name
common_labels = var.common_labels
node_pool_name = "system"
kubernetes_version = module.cluster.master_version
availability_zones = data.google_compute_zones.available.names

worker_node_machine_type = local.system_machine_type
worker_node_service_account = module.cluster.worker_node_service_account

max_pods_per_node = var.max_pods_per_node_system
node_pool_size = 1
}

module "node_pool_prod1k" {
source = "./modules/broker-node-pool"

region = var.region
cluster_name = module.cluster.cluster_name
common_labels = var.common_labels
node_pool_name = "prod1k"
availability_zones = data.google_compute_zones.available.names
kubernetes_version = module.cluster.master_version

secondary_range_name = var.create_network ? module.network.secondary_range_name_messaging_pods : var.secondary_range_name_messaging_pods

worker_node_machine_type = local.prod1k_machine_type
worker_node_service_account = module.cluster.worker_node_service_account

max_pods_per_node = var.max_pods_per_node_messaging
node_pool_max_size = var.node_pool_max_size

node_pool_labels = {
nodeType = "messaging"
serviceClass = "prod1k"
}

node_pool_taints = [
{
key = "nodeType"
value = "messaging"
effect = "NO_EXECUTE"
},
{
key = "serviceClass"
value = "prod1k"
effect = "NO_EXECUTE"
}
]
}

module "node_pool_prod10k" {
source = "./modules/broker-node-pool"

region = var.region
cluster_name = module.cluster.cluster_name
common_labels = var.common_labels
node_pool_name = "prod10k"
availability_zones = data.google_compute_zones.available.names
kubernetes_version = module.cluster.master_version

secondary_range_name = var.create_network ? module.network.secondary_range_name_messaging_pods : var.secondary_range_name_messaging_pods

worker_node_machine_type = local.prod10k_machine_type
worker_node_service_account = module.cluster.worker_node_service_account

max_pods_per_node = var.max_pods_per_node_messaging
node_pool_max_size = var.node_pool_max_size

node_pool_labels = {
nodeType = "messaging"
serviceClass = "prod10k"
}

node_pool_taints = [
{
key = "nodeType"
value = "messaging"
effect = "NO_EXECUTE"
},
{
key = "serviceClass"
value = "prod10k"
effect = "NO_EXECUTE"
}
]
}

module "node_pool_prod100k" {
source = "./modules/broker-node-pool"

region = var.region
cluster_name = module.cluster.cluster_name
common_labels = var.common_labels
node_pool_name = "prod100k"
availability_zones = data.google_compute_zones.available.names
kubernetes_version = module.cluster.master_version

secondary_range_name = var.create_network ? module.network.secondary_range_name_messaging_pods : var.secondary_range_name_messaging_pods

worker_node_machine_type = local.prod100k_machine_type
worker_node_service_account = module.cluster.worker_node_service_account

max_pods_per_node = var.max_pods_per_node_messaging
node_pool_max_size = var.node_pool_max_size

node_pool_labels = {
nodeType = "messaging"
serviceClass = "prod100k"
}

node_pool_taints = [
{
key = "nodeType"
value = "messaging"
effect = "NO_EXECUTE"
},
{
key = "serviceClass"
value = "prod100k"
effect = "NO_EXECUTE"
}
]
}

module "node_pool_monitoring" {
source = "./modules/broker-node-pool"

region = var.region
cluster_name = module.cluster.cluster_name
common_labels = var.common_labels
node_pool_name = "monitoring"
availability_zones = data.google_compute_zones.available.names
kubernetes_version = module.cluster.master_version

secondary_range_name = var.create_network ? module.network.secondary_range_name_messaging_pods : var.secondary_range_name_messaging_pods

worker_node_machine_type = local.monitoring_machine_type
worker_node_service_account = module.cluster.worker_node_service_account

max_pods_per_node = var.max_pods_per_node_messaging
node_pool_max_size = var.node_pool_max_size

node_pool_labels = {
nodeType = "monitoring"
}

node_pool_taints = [
{
key = "nodeType"
value = "monitoring"
effect = "NO_EXECUTE"
}
]
}
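The `NO_EXECUTE` taints on these pools keep ordinary workloads off the messaging and monitoring nodes; only pods that tolerate the taints are scheduled (and allowed to remain) there. As a hypothetical sketch using the hashicorp/kubernetes provider (the deployment name, labels, and image are illustrative, and note that Kubernetes spells the effect `NoExecute` rather than GKE's `NO_EXECUTE`):

```hcl
resource "kubernetes_deployment" "broker" {
  metadata {
    name = "broker" # illustrative name
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = "broker" }
    }

    template {
      metadata {
        labels = { app = "broker" }
      }

      spec {
        # Steer pods onto the prod1k pool via its node labels...
        node_selector = {
          nodeType     = "messaging"
          serviceClass = "prod1k"
        }

        # ...and tolerate both taints so they are allowed to run there.
        toleration {
          key      = "nodeType"
          operator = "Equal"
          value    = "messaging"
          effect   = "NoExecute"
        }

        toleration {
          key      = "serviceClass"
          operator = "Equal"
          value    = "prod1k"
          effect   = "NoExecute"
        }

        container {
          name  = "broker"
          image = "example/broker:latest" # illustrative image
        }
      }
    }
  }
}
```

The node selector alone would attract the pods to the pool, but without the matching tolerations the `NO_EXECUTE` taints would evict them, so both halves are needed.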
3 changes: 1 addition & 2 deletions gke/terraform/modules/broker-node-pool/README.md
@@ -33,12 +33,11 @@ No modules.
| <a name="input_max_pods_per_node"></a> [max\_pods\_per\_node](#input\_max\_pods\_per\_node) | The maximum number of pods per worker node for the node pool. | `number` | n/a | yes |
| <a name="input_node_pool_labels"></a> [node\_pool\_labels](#input\_node\_pool\_labels) | Kubernetes labels added to worker nodes in the node pool. | `map(string)` | n/a | yes |
| <a name="input_node_pool_max_size"></a> [node\_pool\_max\_size](#input\_node\_pool\_max\_size) | The maximum number of worker nodes for the node pool. | `string` | n/a | yes |
| <a name="input_node_pool_name"></a> [node\_pool\_name](#input\_node\_pool\_name) | The name prefix of the node pool. | `string` | n/a | yes |
| <a name="input_node_pool_name"></a> [node\_pool\_name](#input\_node\_pool\_name) | The name of the node pool. | `string` | n/a | yes |
| <a name="input_node_pool_taints"></a> [node\_pool\_taints](#input\_node\_pool\_taints) | Kubernetes taints added to worker nodes in the node pool. | `list(map(string))` | n/a | yes |
| <a name="input_region"></a> [region](#input\_region) | n/a | `string` | n/a | yes |
| <a name="input_secondary_range_name"></a> [secondary\_range\_name](#input\_secondary\_range\_name) | The name of the secondary CIDR range for the node pool. | `string` | n/a | yes |
| <a name="input_worker_node_machine_type"></a> [worker\_node\_machine\_type](#input\_worker\_node\_machine\_type) | The machine type used for the worker nodes in this node pool. | `string` | n/a | yes |
| <a name="input_worker_node_oauth_scopes"></a> [worker\_node\_oauth\_scopes](#input\_worker\_node\_oauth\_scopes) | The OAuth scopes that will be assigned to the worker nodes in this node pool. | `list(string)` | n/a | yes |
| <a name="input_worker_node_service_account"></a> [worker\_node\_service\_account](#input\_worker\_node\_service\_account) | The service account that will be assigned to the worker nodes in this node pool. | `string` | n/a | yes |

## Outputs
8 changes: 7 additions & 1 deletion gke/terraform/modules/broker-node-pool/main.tf
@@ -1,3 +1,9 @@
locals {
worker_node_oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
}

resource "google_container_node_pool" "this" {
name = var.node_pool_name
location = var.region
@@ -15,7 +21,7 @@ resource "google_container_node_pool" "this" {
node_config {
machine_type = var.worker_node_machine_type
image_type = "UBUNTU_CONTAINERD" #checkov:skip=CKV_GCP_22:Ubuntu is required for XFS support
oauth_scopes = var.worker_node_oauth_scopes
oauth_scopes = local.worker_node_oauth_scopes
service_account = var.worker_node_service_account
resource_labels = var.common_labels

7 changes: 1 addition & 6 deletions gke/terraform/modules/broker-node-pool/variables.tf
@@ -15,7 +15,7 @@ variable "common_labels" {

variable "node_pool_name" {
type = string
description = "The name prefix of the node pool."
description = "The name of the node pool."
}

variable "availability_zones" {
@@ -28,11 +28,6 @@ variable "worker_node_machine_type" {
description = "The machine type used for the worker nodes in this node pool."
}

variable "worker_node_oauth_scopes" {
type = list(string)
description = "The OAuth scopes that will be assigned to the worker nodes in this node pool."
}

variable "worker_node_service_account" {
type = string
description = "The service account that will be assigned to the worker nodes in this node pool."
19 changes: 6 additions & 13 deletions gke/terraform/modules/cluster/README.md
@@ -14,21 +14,14 @@

## Modules

| Name | Source | Version |
|------|--------|---------|
| <a name="module_node_group_monitoring"></a> [node\_group\_monitoring](#module\_node\_group\_monitoring) | ../broker-node-pool | n/a |
| <a name="module_node_group_prod100k"></a> [node\_group\_prod100k](#module\_node\_group\_prod100k) | ../broker-node-pool | n/a |
| <a name="module_node_group_prod10k"></a> [node\_group\_prod10k](#module\_node\_group\_prod10k) | ../broker-node-pool | n/a |
| <a name="module_node_group_prod1k"></a> [node\_group\_prod1k](#module\_node\_group\_prod1k) | ../broker-node-pool | n/a |
No modules.

## Resources

| Name | Type |
|------|------|
| [google_container_cluster.cluster](https://registry.terraform.io/providers/hashicorp/google/6.10.0/docs/resources/container_cluster) | resource |
| [google_container_node_pool.system](https://registry.terraform.io/providers/hashicorp/google/6.10.0/docs/resources/container_node_pool) | resource |
| [google_service_account.cluster](https://registry.terraform.io/providers/hashicorp/google/6.10.0/docs/resources/service_account) | resource |
| [google_compute_zones.available](https://registry.terraform.io/providers/hashicorp/google/6.10.0/docs/data-sources/compute_zones) | data source |
| [google_container_engine_versions.this](https://registry.terraform.io/providers/hashicorp/google/6.10.0/docs/data-sources/container_engine_versions) | data source |

## Inputs
@@ -41,18 +34,18 @@
| <a name="input_kubernetes_api_public_access"></a> [kubernetes\_api\_public\_access](#input\_kubernetes\_api\_public\_access) | When set to true, the Kubernetes API is accessible publicly from the provided authorized networks. | `bool` | `false` | no |
| <a name="input_kubernetes_version"></a> [kubernetes\_version](#input\_kubernetes\_version) | The Kubernetes version to use. Only used at creation time; ignored once the cluster exists. | `string` | n/a | yes |
| <a name="input_master_ipv4_cidr_block"></a> [master\_ipv4\_cidr\_block](#input\_master\_ipv4\_cidr\_block) | The CIDR used to assign IPs to the Kubernetes API endpoints. | `string` | n/a | yes |
| <a name="input_max_pods_per_node_messaging"></a> [max\_pods\_per\_node\_messaging](#input\_max\_pods\_per\_node\_messaging) | The maximum number of pods per worker node for the messaging node pools. | `number` | `8` | no |
| <a name="input_max_pods_per_node_system"></a> [max\_pods\_per\_node\_system](#input\_max\_pods\_per\_node\_system) | The maximum number of pods per worker node for the system node pool. | `number` | `16` | no |
| <a name="input_network_name"></a> [network\_name](#input\_network\_name) | The name of the network where the cluster will reside. | `string` | n/a | yes |
| <a name="input_node_pool_max_size"></a> [node\_pool\_max\_size](#input\_node\_pool\_max\_size) | The maximum number of worker nodes for the messaging node pools. | `string` | `20` | no |
| <a name="input_project"></a> [project](#input\_project) | The GCP project that the cluster will reside in. | `string` | n/a | yes |
| <a name="input_region"></a> [region](#input\_region) | The GCP region that the cluster will reside in. | `string` | n/a | yes |
| <a name="input_secondary_range_name_messaging_pods"></a> [secondary\_range\_name\_messaging\_pods](#input\_secondary\_range\_name\_messaging\_pods) | The name of the secondary CIDR range for the cluster's messaging node pools, if provided. | `string` | `null` | no |
| <a name="input_secondary_range_name_pods"></a> [secondary\_range\_name\_pods](#input\_secondary\_range\_name\_pods) | The name of the secondary CIDR range for the cluster's node pools. If a separate CIDR range is provided for messaging pods, this range will be used for just the system (default) node pool. | `string` | n/a | yes |
| <a name="input_secondary_range_name_services"></a> [secondary\_range\_name\_services](#input\_secondary\_range\_name\_services) | The name of the secondary CIDR range for the cluster's services. | `string` | n/a | yes |
| <a name="input_subnetwork_name"></a> [subnetwork\_name](#input\_subnetwork\_name) | The name of the subnetwork where the cluster will reside. | `string` | n/a | yes |

## Outputs

No outputs.
| Name | Description |
|------|-------------|
| <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | n/a |
| <a name="output_master_version"></a> [master\_version](#output\_master\_version) | n/a |
| <a name="output_worker_node_service_account"></a> [worker\_node\_service\_account](#output\_worker\_node\_service\_account) | n/a |
<!-- END_TF_DOCS -->