Manage PKS on GCP

Automate Load balancer configuration & PKS cluster access on GCP.

Demo

What these scripts are not meant for (yet)!

  • Automate PKS deployment on GCP
  • Configure GCP SDK client
  • Handle PKS cli authentication

Before you begin

You need:

PKS tile plan configuration

If you want to use Pod Security Policy with your PKS clusters, enable the PodSecurityPolicy admission plugin in your PKS plan. This is not a hard requirement for the manage-pks scripts.

Pod security policy is required if you use the /utils/istio script in this project to install Istio.
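Once a cluster is up, you can sanity-check PSP support before running the Istio install. This is a hedged sketch, not part of the manage-pks scripts: the policy name and its permissive settings are illustrative, and the exact kubectl output depends on your Kubernetes version.

```shell
# If the PSP API is not available, kubectl reports an unknown resource type,
# which usually means the admission plugin is not enabled in the PKS plan.
kubectl get podsecuritypolicies \
  || echo "PSP not available - enable the PodSecurityPolicy admission plugin in the PKS plan"

# Illustrative permissive policy (hypothetical name "permissive-example"):
kubectl apply -f - <<'EOF'
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: permissive-example
spec:
  privileged: true
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ['*']
EOF
```

Remember that a PSP only takes effect for pods whose service account is granted `use` on it via RBAC.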

Instructions

  1. Start by configuring PKS API access; please find detailed docs here.
pks login -a PKS-API --client-name CLIENT-NAME --client-secret CLIENT-SECRET -k
  2. Configure the GCP SDK client and log in to GCP:
gcloud auth login
  3. Configure the GCP compute region to match the AZ configuration of the PKS tile:
gcloud config set compute/region $GCP_REGION
  4. Provision a new cluster; this step reserves an IP on GCP and issues the create-cluster command.
./manage-cluster provision
  5. Provisioning the cluster takes some time; check its status with pks cluster CLUSTER-NAME before proceeding to the next step.

  6. Once the cluster's provision status is succeeded, enable access using

./manage-cluster access

and follow the instructions. This step will:

  • Create a route in Cloud DNS for the cluster API
  • Create the load balancer
  • Configure a firewall rule
  • Add master nodes to the load balancer based on tags
  • Configure the forwarding rule
  • Get credentials using the PKS CLI and set the kubectl context
  7. To clean up the GCP resources (load balancer, firewall rule, forwarding rule), run
./manage-cluster cleanup

The cleanup above doesn't delete the cluster itself; use pks delete-cluster CLUSTER-NAME to remove it.
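Taken together, a full session might look like the sketch below. The cluster name, PKS API endpoint, credentials, and region are illustrative placeholders, and the polling loop assumes pks cluster reports a "succeeded" last-action state (the exact wording may vary by PKS version).

```shell
#!/usr/bin/env bash
# Illustrative end-to-end session; all names and values are placeholders.
set -euo pipefail

CLUSTER=my-cluster            # hypothetical cluster name
GCP_REGION=us-central1        # must match the PKS tile AZ configuration

# Steps 1-3: authenticate to the PKS API and configure the GCP SDK client.
pks login -a api.pks.example.com --client-name admin --client-secret "$PKS_SECRET" -k
gcloud auth login
gcloud config set compute/region "$GCP_REGION"

# Step 4: reserve an IP and issue the create-cluster command.
./manage-cluster provision

# Step 5: poll until the create action succeeds.
until pks cluster "$CLUSTER" | grep -qi succeeded; do
  sleep 30
done

# Step 6: DNS route, load balancer, firewall rule, kubectl context.
./manage-cluster access

# Step 7 (later): tear down the GCP resources, then the cluster itself.
./manage-cluster cleanup
pks delete-cluster "$CLUSTER"
```

Note that cleanup only removes the GCP networking resources; the pks delete-cluster call at the end is what actually destroys the cluster.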