This repository has been archived by the owner on Mar 29, 2023. It is now read-only.

Commit eda8942

Merge pull request #22 from gruntwork-io/private_cluster
Private cluster
autero1 authored Apr 10, 2019
2 parents f5566e3 + cf71d5f
Showing 13 changed files with 423 additions and 74 deletions.
8 changes: 7 additions & 1 deletion examples/gke-basic-tiller/main.tf
@@ -10,6 +10,10 @@ terraform {
required_version = ">= 0.10.3"
}

# ---------------------------------------------------------------------------------------------------------------------
# PREPARE PROVIDERS
# ---------------------------------------------------------------------------------------------------------------------

provider "google" {
version = "~> 2.3.0"
project = "${var.project}"
@@ -74,7 +78,9 @@ module "gke_cluster" {
cluster_secondary_range_name = "${google_compute_subnetwork.main.secondary_ip_range.0.range_name}"
}

# Deploy a Node Pool
# ---------------------------------------------------------------------------------------------------------------------
# CREATE A NODE POOL
# ---------------------------------------------------------------------------------------------------------------------

resource "google_container_node_pool" "node_pool" {
provider = "google-beta"
2 changes: 1 addition & 1 deletion examples/gke-basic-tiller/variables.tf
@@ -4,7 +4,7 @@
# ---------------------------------------------------------------------------------------------------------------------

variable "project" {
description = "The name of the GCP Project where all resources will be launched."
description = "The project ID where all resources will be launched."
}

variable "location" {
10 changes: 6 additions & 4 deletions examples/gke-private-cluster/README.md
@@ -37,10 +37,12 @@ Currently, you cannot use a proxy to reach the cluster master of a regional clus

## How do you run these examples?

1. Install [Terraform](https://www.terraform.io/).
1. Make sure you have Python installed (version 2.x) and in your `PATH`.
1. Open `variables.tf`, and fill in any required variables that don't have a
default.
1. Install [Terraform](https://learn.hashicorp.com/terraform/getting-started/install.html) v0.10.3 or later.
1. Open `variables.tf` and fill in any required variables that don't have a default.
1. Run `terraform get`.
1. Run `terraform plan`.
1. If the plan looks good, run `terraform apply`.
1. To set up `kubectl` to access the deployed cluster, run `gcloud beta container clusters get-credentials $CLUSTER_NAME
--region $REGION --project $PROJECT`, where `CLUSTER_NAME`, `REGION` and `PROJECT` correspond to what you set for the
input variables.

162 changes: 162 additions & 0 deletions examples/gke-private-cluster/main.tf
@@ -0,0 +1,162 @@
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY A GKE PRIVATE CLUSTER IN GOOGLE CLOUD
# This is an example of how to use the gke-cluster module to deploy a private Kubernetes cluster in GCP
# ---------------------------------------------------------------------------------------------------------------------

# Use Terraform 0.10.x so that we can take advantage of Terraform GCP functionality as a separate provider via
# https://github.com/terraform-providers/terraform-provider-google
terraform {
required_version = ">= 0.10.3"
}

# ---------------------------------------------------------------------------------------------------------------------
# PREPARE PROVIDERS
# ---------------------------------------------------------------------------------------------------------------------

provider "google" {
version = "~> 2.3.0"
project = "${var.project}"
region = "${var.region}"
}

provider "google-beta" {
version = "~> 2.3.0"
project = "${var.project}"
region = "${var.region}"
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY A PRIVATE CLUSTER IN GOOGLE CLOUD
# ---------------------------------------------------------------------------------------------------------------------

module "gke_cluster" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "git::[email protected]:gruntwork-io/gke-cluster.git//modules/gke-cluster?ref=v0.0.4"
source = "../../modules/gke-cluster"

name = "${var.cluster_name}"

project = "${var.project}"
location = "${var.location}"
network = "${google_compute_network.main.name}"
subnetwork = "${google_compute_subnetwork.main.self_link}"

# When creating a private cluster, the 'master_ipv4_cidr_block' has to be defined and the size must be /28
master_ipv4_cidr_block = "10.5.0.0/28"

# This setting will make the cluster private
enable_private_nodes = "true"

# To make testing easier, we keep the public endpoint available. In production, we highly recommend restricting access to only within the network boundary, requiring your users to use a bastion host or VPN.
disable_public_endpoint = "false"

# With a private cluster, it is highly recommended to restrict access to the cluster master
# However, for testing purposes we will allow all inbound traffic.
master_authorized_networks_config = [{
cidr_blocks = [{
cidr_block = "0.0.0.0/0"
display_name = "all-for-testing"
}]
}]

cluster_secondary_range_name = "${google_compute_subnetwork.main.secondary_ip_range.0.range_name}"
}

# ---------------------------------------------------------------------------------------------------------------------
# CREATE A NODE POOL
# ---------------------------------------------------------------------------------------------------------------------

resource "google_container_node_pool" "node_pool" {
provider = "google-beta"

name = "private-pool"
project = "${var.project}"
location = "${var.location}"
cluster = "${module.gke_cluster.name}"

initial_node_count = "1"

autoscaling {
min_node_count = "1"
max_node_count = "5"
}

management {
auto_repair = "true"
auto_upgrade = "true"
}

node_config {
image_type = "COS"
machine_type = "n1-standard-1"

labels = {
private-pools-example = "true"
}

tags = ["private-pool-example"]
disk_size_gb = "30"
disk_type = "pd-standard"
preemptible = false

service_account = "${module.gke_service_account.email}"

oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform",
]
}

lifecycle {
ignore_changes = ["initial_node_count"]
}

timeouts {
create = "30m"
update = "30m"
delete = "30m"
}
}

# ---------------------------------------------------------------------------------------------------------------------
# CREATE A CUSTOM SERVICE ACCOUNT TO USE WITH THE GKE CLUSTER
# ---------------------------------------------------------------------------------------------------------------------

module "gke_service_account" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "git::[email protected]:gruntwork-io/gke-cluster.git//modules/gke-service-account?ref=v0.0.1"
source = "../../modules/gke-service-account"

name = "${var.cluster_service_account_name}"
project = "${var.project}"
description = "${var.cluster_service_account_description}"
}

# ---------------------------------------------------------------------------------------------------------------------
# CREATE A NETWORK TO DEPLOY THE CLUSTER TO
# ---------------------------------------------------------------------------------------------------------------------

# TODO(rileykarson): Add proper VPC network config once we've made a VPC module
resource "random_string" "suffix" {
length = 4
special = false
upper = false
}

resource "google_compute_network" "main" {
name = "${var.cluster_name}-network-${random_string.suffix.result}"
auto_create_subnetworks = "false"
}

resource "google_compute_subnetwork" "main" {
name = "${var.cluster_name}-subnetwork-${random_string.suffix.result}"
ip_cidr_range = "10.3.0.0/17"
region = "${var.region}"
network = "${google_compute_network.main.self_link}"

secondary_ip_range {
range_name = "private-cluster-pods"
ip_cidr_range = "10.4.0.0/18"
}
}
22 changes: 22 additions & 0 deletions examples/gke-private-cluster/outputs.tf
@@ -0,0 +1,22 @@
output "cluster_endpoint" {
description = "The IP address of the cluster master."
sensitive = true
value = "${module.gke_cluster.endpoint}"
}

output "client_certificate" {
description = "Public certificate used by clients to authenticate to the cluster endpoint."
value = "${module.gke_cluster.client_certificate}"
}

output "client_key" {
description = "Private key used by clients to authenticate to the cluster endpoint."
sensitive = true
value = "${module.gke_cluster.client_key}"
}

output "cluster_ca_certificate" {
description = "The public certificate that is the root of trust for the cluster."
sensitive = true
value = "${module.gke_cluster.cluster_ca_certificate}"
}
36 changes: 36 additions & 0 deletions examples/gke-private-cluster/variables.tf
@@ -0,0 +1,36 @@
# ---------------------------------------------------------------------------------------------------------------------
# REQUIRED PARAMETERS
# These variables are expected to be passed in by the operator.
# ---------------------------------------------------------------------------------------------------------------------

variable "project" {
description = "The project ID where all resources will be launched."
}

variable "location" {
description = "The location (region or zone) of the GKE cluster."
}

variable "region" {
description = "The region for the network. If the cluster is regional, this must be the same region. Otherwise, it should be the region of the zone."
}

# ---------------------------------------------------------------------------------------------------------------------
# OPTIONAL PARAMETERS
# These parameters have reasonable defaults.
# ---------------------------------------------------------------------------------------------------------------------

variable "cluster_name" {
description = "The name of the Kubernetes cluster."
default = "example-private-cluster"
}

variable "cluster_service_account_name" {
description = "The name of the custom service account used for the GKE cluster. This parameter is limited to a maximum of 28 characters."
default = "example-private-cluster-sa"
}

variable "cluster_service_account_description" {
description = "A description of the custom service account used for the GKE cluster."
default = "Example GKE Cluster Service Account managed by Terraform"
}
4 changes: 4 additions & 0 deletions examples/gke-public-cluster/README.md
@@ -56,3 +56,7 @@ your new zones are within the region your cluster is present in.
1. Run `terraform get`.
1. Run `terraform plan`.
1. If the plan looks good, run `terraform apply`.
1. To set up `kubectl` to access the deployed cluster, run `gcloud beta container clusters get-credentials $CLUSTER_NAME
--region $REGION --project $PROJECT`, where `CLUSTER_NAME`, `REGION` and `PROJECT` correspond to what you set for the
input variables.

18 changes: 15 additions & 3 deletions examples/gke-public-cluster/main.tf
@@ -10,6 +10,10 @@ terraform {
required_version = ">= 0.10.3"
}

# ---------------------------------------------------------------------------------------------------------------------
# PREPARE PROVIDERS
# ---------------------------------------------------------------------------------------------------------------------

provider "google" {
version = "~> 2.3.0"
project = "${var.project}"
@@ -22,10 +22,14 @@ provider "google-beta" {
region = "${var.region}"
}

# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY A PUBLIC CLUSTER IN GOOGLE CLOUD
# ---------------------------------------------------------------------------------------------------------------------

module "gke_cluster" {
# When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
# to a specific version of the modules, such as the following example:
# source = "git::[email protected]:gruntwork-io/gke-cluster.git//modules/gke-cluster?ref=v0.0.1"
# source = "git::[email protected]:gruntwork-io/gke-cluster.git//modules/gke-cluster?ref=v0.0.3"
source = "../../modules/gke-cluster"

name = "${var.cluster_name}"
@@ -38,9 +46,10 @@ module "gke_cluster" {
cluster_secondary_range_name = "${google_compute_subnetwork.main.secondary_ip_range.0.range_name}"
}

# Node Pool
# ---------------------------------------------------------------------------------------------------------------------
# CREATE A NODE POOL
# ---------------------------------------------------------------------------------------------------------------------

// Node Pool Resource
resource "google_container_node_pool" "node_pool" {
provider = "google-beta"

@@ -107,6 +116,9 @@ module "gke_service_account" {
description = "${var.cluster_service_account_description}"
}

# ---------------------------------------------------------------------------------------------------------------------
# CREATE A NETWORK TO DEPLOY THE CLUSTER TO
# ---------------------------------------------------------------------------------------------------------------------
# TODO(rileykarson): Add proper VPC network config once we've made a VPC module
resource "random_string" "suffix" {
length = 4
2 changes: 1 addition & 1 deletion examples/gke-public-cluster/variables.tf
@@ -4,7 +4,7 @@
# ---------------------------------------------------------------------------------------------------------------------

variable "project" {
description = "The name of the GCP Project where all resources will be launched."
description = "The project ID where all resources will be launched."
}

variable "location" {
39 changes: 39 additions & 0 deletions modules/gke-cluster/README.md
@@ -67,6 +67,45 @@ using a shared VPC network (a network from another GCP project) using an explici
See [considerations for cluster sizing](https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#cluster_sizing)
for more information on sizing secondary ranges for your VPC-native cluster.

## What is a private cluster?

In a private cluster, the nodes have internal IP addresses only, which ensures that their workloads are isolated from the public Internet.
Private nodes do not have outbound Internet access, but Private Google Access provides private nodes and their workloads with
limited outbound access to Google Cloud Platform APIs and services over Google's private network.

If you want your cluster nodes to be able to access the Internet, for example to pull images from external container registries,
you will have to set up [Cloud NAT](https://cloud.google.com/nat/docs/overview).
See [Example GKE Setup](https://cloud.google.com/nat/docs/gke-example) for further information.
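As a rough sketch of what that could look like for the example network in this repo (the resource names and NAT options below are illustrative assumptions, not part of this module):

```hcl
# Illustrative only: give private nodes outbound Internet access via Cloud NAT.
# The router and NAT names below are placeholders for this sketch.
resource "google_compute_router" "nat_router" {
  name    = "private-cluster-nat-router"
  region  = "${var.region}"
  network = "${google_compute_network.main.self_link}"
}

resource "google_compute_router_nat" "nat" {
  name                               = "private-cluster-nat"
  router                             = "${google_compute_router.nat_router.name}"
  region                             = "${var.region}"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
```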

You can create a private cluster by setting `enable_private_nodes` to `true`. Note that with a private cluster, setting
the master CIDR range with `master_ipv4_cidr_block` is also required.
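A minimal sketch of those two settings (the `ref` and CIDR value shown are illustrative; see the `gke-private-cluster` example for a complete configuration):

```hcl
module "gke_cluster" {
  # In your own templates, pin to a released version (the ref below is illustrative)
  source = "git::[email protected]:gruntwork-io/gke-cluster.git//modules/gke-cluster?ref=v0.0.4"

  # Give the cluster nodes internal IP addresses only
  enable_private_nodes = "true"

  # Required for private clusters; the block size must be /28
  master_ipv4_cidr_block = "10.5.0.0/28"

  # ... plus the other required inputs (name, project, location, network, ...)
}
```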

### How do I control access to the cluster master?

In a private cluster, the master has two endpoints:

* **Private endpoint:** This is the internal IP address of the master, behind an internal load balancer in the master's
VPC network. Nodes communicate with the master using the private endpoint. Any VM in your VPC network, and in the same
region as your private cluster, can use the private endpoint.

* **Public endpoint:** This is the external IP address of the master. You can disable access to the public endpoint by setting
`disable_public_endpoint` to `true`.

You can relax the restrictions by authorizing certain address ranges to access the endpoints with the input variable
`master_authorized_networks_config`.
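For example, to allow access only from a single trusted range (the CIDR block and display name below are placeholders), following the same syntax as the `gke-private-cluster` example:

```hcl
master_authorized_networks_config = [{
  cidr_blocks = [{
    cidr_block   = "203.0.113.0/24"
    display_name = "trusted-office-range"
  }]
}]
```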

### Private cluster restrictions and limitations

Private clusters have the following restrictions and limitations:

* The size of the RFC 1918 block for the cluster master must be /28.
* The nodes in a private cluster must run Kubernetes version 1.8.14-gke.0 or later.
* You cannot convert an existing, non-private cluster to a private cluster.
* Each private cluster you create uses a unique VPC Network Peering.
* Deleting the VPC peering between the cluster master and the cluster nodes, deleting the firewall rules that allow
ingress traffic from the cluster master to nodes on port 10250, or deleting the default route to the default
Internet gateway, causes a private cluster to stop functioning.

## What IAM roles does this module configure? (unimplemented)

Given a service account, this module will enable the following IAM roles: