Automate load balancer configuration & PKS cluster access on GCP.
- Automate PKS deployment on GCP
- Configure GCP SDK client
- Handle PKS CLI authentication
You need:
If you want to use Pod Security Policy with your PKS clusters, enable the PodSecurityPolicy admission plugin in your PKS plan. This is not a hard requirement for the manage-pks scripts, but Pod Security Policy is required if you use the /utils/istio script in this project to install Istio.
- Start by configuring PKS API access; detailed docs can be found here.
pks login -a PKS-API --client-name CLIENT-NAME --client-secret CLIENT-SECRET -k
- Configure the GCP SDK client and log in to GCP
gcloud auth login
- Configure the GCP compute region, matching the AZ configuration of the PKS tile
gcloud config set compute/region $GCP_REGION
- Provision a new cluster. This step will reserve an IP on GCP & issue the create-cluster command.
./manage-cluster provision
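Under the hood, provisioning amounts to roughly the following. This is only a sketch: the resource names, region, hostname, and plan below are illustrative assumptions, not values taken from the script.

```shell
# Sketch of the provision step (names, region, hostname, and plan are assumptions).
CLUSTER_NAME=my-cluster
GCP_REGION=us-central1

# Reserve a static external IP for the cluster's master load balancer.
gcloud compute addresses create "${CLUSTER_NAME}-master-ip" --region "$GCP_REGION"

# Look up the reserved address.
MASTER_IP=$(gcloud compute addresses describe "${CLUSTER_NAME}-master-ip" \
  --region "$GCP_REGION" --format='value(address)')

# Issue the create-cluster command against the PKS API.
pks create-cluster "$CLUSTER_NAME" \
  --external-hostname "${CLUSTER_NAME}.example.com" \
  --plan small
```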
- It will take some time to provision the cluster. Check the status with
pks cluster cluster-name
before proceeding to the next step.
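A simple way to wait is to poll the status in a loop. This sketch greps the `pks cluster` output for a succeeded state; the exact output format may vary by PKS version, and the cluster name is a placeholder.

```shell
#!/bin/sh
# Poll the PKS API until the cluster's last action state reports "succeeded".
# CLUSTER_NAME is a placeholder for your cluster's name.
CLUSTER_NAME=my-cluster

until pks cluster "$CLUSTER_NAME" | grep -q "succeeded"; do
  echo "Cluster is still provisioning; checking again in 30s..."
  sleep 30
done
echo "Cluster $CLUSTER_NAME provisioned."
```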
- Once the cluster provision status is succeeded, enable access using
./manage-cluster access
and follow the instructions. This step will:
- Create a DNS record in Cloud DNS for the cluster API
- Create the load balancer
- Configure the firewall rule
- Add master nodes to the load balancer based on tags
- Configure the forwarding rule
- Get credentials using the PKS CLI & set the kubectl context
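The steps above roughly correspond to the following gcloud commands. This is a sketch only: the resource names, region, zone, DNS zone, domain, IP, and master tag are assumptions, not values taken from the script.

```shell
# Illustrative sketch of the access step; all names and values are assumptions.
CLUSTER_NAME=my-cluster
GCP_REGION=us-central1
GCP_ZONE=us-central1-a
DNS_ZONE=my-dns-zone
MASTER_IP=203.0.113.10   # the address reserved during provisioning

# DNS record in Cloud DNS pointing the cluster API hostname at the LB IP.
gcloud dns record-sets create "${CLUSTER_NAME}.example.com." \
  --zone "$DNS_ZONE" --type A --ttl 300 --rrdatas "$MASTER_IP"

# Target pool acting as the load balancer backend.
gcloud compute target-pools create "${CLUSTER_NAME}-pool" --region "$GCP_REGION"

# Firewall rule allowing traffic to the Kubernetes API port on the masters.
gcloud compute firewall-rules create "${CLUSTER_NAME}-api" \
  --allow tcp:8443 --target-tags "${CLUSTER_NAME}-master"

# Find master instances by tag and add them to the pool.
MASTERS=$(gcloud compute instances list \
  --filter="tags.items=${CLUSTER_NAME}-master" --format='value(name)')
gcloud compute target-pools add-instances "${CLUSTER_NAME}-pool" \
  --instances "$MASTERS" --instances-zone "$GCP_ZONE"

# Forwarding rule sending traffic on the reserved IP to the pool.
gcloud compute forwarding-rules create "${CLUSTER_NAME}-api" \
  --region "$GCP_REGION" --address "$MASTER_IP" --ports 8443 \
  --target-pool "${CLUSTER_NAME}-pool"

# Fetch credentials and set the kubectl context.
pks get-credentials "$CLUSTER_NAME"
```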
- To clean up the GCP resources (load balancer, firewall rule, forwarding rule), run
./manage-cluster cleanup
The cleanup above doesn't delete the cluster itself, so use pks delete-cluster cluster_name to delete it.
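For reference, the cleanup roughly amounts to deleting the GCP resources in reverse order, followed by a separate cluster deletion through the PKS CLI. As before, the resource names here are assumptions, not values taken from the script.

```shell
# Sketch of the cleanup step; resource names are assumptions.
CLUSTER_NAME=my-cluster
GCP_REGION=us-central1

gcloud compute forwarding-rules delete "${CLUSTER_NAME}-api" \
  --region "$GCP_REGION" --quiet
gcloud compute firewall-rules delete "${CLUSTER_NAME}-api" --quiet
gcloud compute target-pools delete "${CLUSTER_NAME}-pool" \
  --region "$GCP_REGION" --quiet
gcloud compute addresses delete "${CLUSTER_NAME}-master-ip" \
  --region "$GCP_REGION" --quiet

# The cluster itself must be deleted separately via the PKS CLI.
pks delete-cluster "$CLUSTER_NAME"
```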