feat: (IAC-1117) dark site deployment
1 parent 226a143 · commit d46a8e1
Showing 31 changed files with 1,628 additions and 0 deletions.
@@ -0,0 +1,60 @@
# Deploy to AWS EKS in a Dark Site or Air-Gapped Scenario

This document describes procedures, helper scripts, and example files. First, decide which deployment scenario applies to you:

1. The deployment virtual machine has Internet access, but the EKS cluster cannot reach the Internet (dark site). Follow procedures 1, 2, 4, and 6.
2. Neither the deployment virtual machine nor the cluster has Internet access (air-gapped site). Follow procedures 1, 2, 5, and 6. Note: you will still need to push all of the images and Helm charts to ECR from a machine that does have Internet access. During the install, the deployment virtual machine pulls them through the private ECR endpoint in the VPC, so it does not need Internet access itself.

**Notes:**
- The following procedures assume that the `viya4-iac-aws` project was used to deploy the EKS infrastructure. Refer to the `viya4-iac-aws-darksite` folder within the `viya4-iac-aws` [GitHub repo](https://github.com/sassoftware/viya4-iac-aws) for the IaC procedures that pertain to an AWS dark site configuration.
- The helper shell scripts under the `viya4-deployment-darksite` folder in this project assume that the deployment virtual machine is properly configured. Confirm that (a quick verification sketch follows below):
  - the kubeconfig file for the EKS cluster has been installed and tested (EKS cluster admin access is verified as working)
  - the AWS CLI is configured

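A minimal sketch for verifying both prerequisites from the deployment machine (assumes `kubectl` and the AWS CLI are installed):

```bash
# confirm the kubeconfig works and you have admin rights on the EKS cluster
kubectl cluster-info
kubectl auth can-i '*' '*' --all-namespaces   # should print "yes" for a cluster admin

# confirm the AWS CLI is configured with working credentials
aws sts get-caller-identity
```
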
# Procedures

1. **Push Viya4 images to ECR (uses the SAS mirrormgr tool):**
   - Download your deployment assets from my.sas.com.
   - Refer to the `mirrormgr-to-ecr` folder in this repo for helper scripts (a sketch of the general shape follows below).

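   For orientation, a mirrormgr invocation that mirrors an order's images into ECR might look like the following; the paths, order zip name, account ID, and region are placeholders, and the exact flags should be confirmed against the mirrormgr documentation for your release:

   ```bash
   # assumes the certificates/assets zip was downloaded from my.sas.com
   mirrormgr mirror registry \
       --deployment-data ~/orders/SASViyaV4_certs.zip \
       --path ~/sas_repos \
       --destination 123456789012.dkr.ecr.us-east-1.amazonaws.com/viya4 \
       --username AWS \
       --password "$(aws ecr get-login-password --region us-east-1)"
   ```
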
2. **Push 3rd party images to ECR:**
   - Refer to the `baseline-to-ecr` folder in this repo for helper scripts.
   - Note: OpenLDAP is only required if you plan to use OpenLDAP for your deployment. A script that automates this step is located [here](https://github.com/sassoftware/viya4-deployment/blob/main/viya4-deployment-darksite/baseline-to-ecr/openldap.sh).

3. **(Optional) If OpenLDAP is needed, modify your local viya4-deployment clone:**
   - Refer to the [darksite-openldap-mod](https://github.com/sassoftware/viya4-deployment/blob/main/viya4-aws-darksite/darksite-openldap-mod) folder for procedures. You can build the container using the script or do it manually.

4. **Deployment machine has Internet access: use viya4-deployment for baseline,install**

   1. Use the built-in variables for baseline configurations in your `ansible-vars.yaml` file:
      - An example `ansible-vars.yaml` is provided [here](https://github.com/sassoftware/viya4-deployment/blob/main/viya4-aws-darksite/deployment-machine-assets/software/ansible-vars-iac.yaml).
      - The goal is to change the image references to point to ECR rather than an Internet-facing repo, and to add cluster subnet ID annotations for the NGINX load balancers (a substitution sketch follows at the end of this procedure):
        - Replace `{{ AWS_ACCT_ID }}` with your AWS account ID
        - Replace `{{ AWS_REGION }}` with your AWS region
        - Replace `{{ CONTROLLER_ECR_IMAGE_DIGEST }}` with the image digest from ECR
        - Replace `{{ WEBHOOK_ECR_IMAGE_DIGEST }}` with the image digest from ECR
      - If your VPC contains multiple subnets (unrelated to Viya), you may need to add annotations to force the NLB to associate with the Viya subnets. More on that topic [here](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/subnet_discovery/).

   2. Deploy viya4-deployment baseline,install. Note: the deployment virtual machine pulls the Helm charts from the Internet during this step.

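   A sketch of the placeholder substitution for step 1, using `sed` (the account ID, region, and digest-lookup repository name are illustrative; adjust to your environment):

   ```bash
   AWS_ACCT_ID=123456789012   # placeholder
   AWS_REGION=us-east-1       # placeholder

   # look up an image digest in ECR (the repository name here is an assumption)
   aws ecr describe-images --repository-name ingress-nginx/controller \
       --query 'sort_by(imageDetails,&imagePushedAt)[-1].imageDigest' --output text

   # swap the template placeholders in the example ansible-vars.yaml for real values
   sed -i \
       -e "s/{{ AWS_ACCT_ID }}/$AWS_ACCT_ID/g" \
       -e "s/{{ AWS_REGION }}/$AWS_REGION/g" \
       -e "s/{{ CONTROLLER_ECR_IMAGE_DIGEST }}/sha256:<controller-digest>/g" \
       -e "s/{{ WEBHOOK_ECR_IMAGE_DIGEST }}/sha256:<webhook-digest>/g" \
       ansible-vars.yaml
   ```
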
5. **Deployment machine has no Internet access: install baseline using Helm charts pulled from ECR**
   - Two options:
     1. If you are using an OCI-type repo (like ECR), you can use `viya4-deployment`, but you will need to make some changes to the baseline items in `ansible-vars.yaml`. An example provided [here](https://github.com/sassoftware/viya4-deployment/blob/main/viya4-aws-darksite/deployment-machine-assets/software/ansible-vars-iac.yaml) includes the variables needed for OCI Helm support. Pay close attention to the `XXX_CHART_URL` and `XXX_CHART_NAME` variables.
     2. Use Helm directly to "manually" install the baseline items (a minimal sketch follows below).
        - Refer to the baseline-helm-install-ecr README.md for instructions.

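   As a sketch of option 2 above, installing one baseline chart directly from ECR (account ID, region, chart, version, and namespace are illustrative; `helm install` from `oci://` requires Helm 3.8+):

   ```bash
   AWS_ACCT_ID=123456789012   # placeholder
   AWS_REGION=us-east-1       # placeholder

   # authenticate Helm to the private ECR endpoint
   aws ecr get-login-password --region $AWS_REGION | \
       helm registry login --username AWS --password-stdin \
       $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

   # install a chart from the OCI repo (cluster-autoscaler shown; version is illustrative)
   helm install cluster-autoscaler \
       oci://$AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/cluster-autoscaler \
       --version 9.25.0 --namespace kube-system
   ```
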
6. **viya4-deployment viya,install**
   - **Note:** As of `viya4-deployment` v6.0.0, the project uses the Deployment Operator by default. The Deployment Operator has additional considerations in a dark site deployment because the repository warehouse for the metadata (normally pulled from ses.sas.com) is not available without Internet access.

   - There are multiple options to mitigate the issue created by using the Deployment Operator:

     1. (Easiest/Recommended) Set `V4_DEPLOYMENT_OPERATOR_ENABLED` to false. This uses the sas-orchestration method for deployment instead of the Deployment Operator, so no offline repository-warehouse hosting is required.

     2. Supply the repository information through an internally deployed HTTP server. SAS does not provide instructions for this because there are many ways to accomplish it; one approach is shared in this [TS Track](https://sirius.na.sas.com/Sirius/GSTS/ShowTrack.aspx?trknum=7613552746).

     3. Store the required metadata on a file system that can be mounted to the reconciler pod (using a transformer). See the [TIES blog for instructions](http://sww.sas.com/blogs/wp/technical-insights/8466/configuring-a-repository-warehouse-for-a-sas-viya-platform-deployment-at-a-dark-site/sukhda/2023/02/28).

     4. Use DAC with `DEPLOY: false` set. This builds the manifests and the references in kustomization.yaml, then stops. You can then proceed with the manual installation steps, as sketched below: create site.yaml and apply it to the cluster (just ensure you are using the proper kustomize version!).

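   A minimal sketch of the manual steps in option 4 (the directory and namespace are placeholders):

   ```bash
   cd ~/deployments/viya4/viya4     # the folder containing the generated kustomization.yaml
   kustomize build . > site.yaml    # use the kustomize version required by your Viya release
   kubectl apply -f site.yaml -n viya4
   ```
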
   - **Important:** ensure that you specify `V4_CFG_CR_URL` in your ansible-vars. This should be your ECR URL plus your Viya namespace. For example, with "viya4" as the Viya namespace: `XXXXX.dkr.ecr.{{AWS_REGION}}.amazonaws.com/viya4`

@@ -0,0 +1,10 @@
#!/bin/bash

## set variables
AWS_ACCT_ID=     # your AWS account ID
AWS_REGION=      # your AWS region

K8S_minor_version=25   # the K8s minor version: v1.25.X would be 25, v1.22.X would be 22. This must match your deployment!
DEPLOYMENT_VERSION=main   # "main" pulls the latest release of viya4-deployment, but this can be set to a specific version if needed, for example: 5.2.0

DOCKER_SUDO=     # set to "sudo" if your docker commands require sudo; otherwise leave blank
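## example filled-in values (illustrative placeholders only):
# AWS_ACCT_ID=123456789012
# AWS_REGION=us-east-1
# K8S_minor_version=25
# DEPLOYMENT_VERSION=6.3.0
# DOCKER_SUDO=sudo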
@@ -0,0 +1,11 @@
#!/bin/bash

source 00_vars.sh

## run each baseline helper script in the current shell
. auto_scaler.sh
. cert_manager.sh
. ingress_nginx.sh
. metrics_server.sh
. nfs_subdir_external_provisioner.sh
. openldap.sh
. ebs_driver.sh
@@ -0,0 +1,13 @@
These scripts assume that your AWS CLI and your kubeconfig are already configured!

Notes:
- Requires Helm, yq, and the AWS CLI.
- These scripts push the Helm chart and the corresponding container images to ECR for each baseline item.
- The chart version is set automatically based on the version of DAC you specify.

## Step 1: Set your variables
- Set your variables in 00_vars.sh

## Step 2: Run script(s)
- Option 1: run 01_run_all.sh (runs all of the scripts)
- Option 2: run the scripts individually
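
A minimal usage sketch (assumes you run from the `baseline-to-ecr` folder):

```bash
# Step 1: fill in your account ID, region, and versions
vi 00_vars.sh

# Step 2, option 1: push every baseline chart and image to ECR
bash 01_run_all.sh

# Step 2, option 2: push a single baseline item, e.g. the cluster autoscaler
bash auto_scaler.sh
```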
@@ -0,0 +1,54 @@
#!/bin/bash

source 00_vars.sh

## select the chart version key from the viya4-deployment repo
## (accounts for v6.3.0+ changes - the autoscaler chart now supports k8s 1.25)
DV=$(echo "$DEPLOYMENT_VERSION" | sed 's/\.//g')
if [ "$DEPLOYMENT_VERSION" == "main" ] && [ "$K8S_minor_version" -ge 25 ]; then
    CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.autoscalerVersions.PDBv1Support.api.chartVersion')
elif [ "$DEPLOYMENT_VERSION" == "main" ] && [ "$K8S_minor_version" -le 24 ]; then
    CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.autoscalerVersions.PDBv1beta1Support.api.chartVersion')
elif [ "$DV" -ge 630 ] && [ "$K8S_minor_version" -ge 25 ]; then
    CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.autoscalerVersions.PDBv1Support.api.chartVersion')
elif [ "$DV" -ge 630 ] && [ "$K8S_minor_version" -le 24 ]; then
    CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.autoscalerVersions.PDBv1beta1Support.api.chartVersion')
elif [ "$DV" -le 620 ]; then
    CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.CLUSTER_AUTOSCALER_CHART_VERSION')
fi

echo "**** cluster-autoscaler ****"
echo "Helm chart version: $CHART_VERSION"

## get helm chart info
helm repo add autoscaling https://kubernetes.github.io/autoscaler
helm repo update
IMG_REPO=$(helm show values autoscaling/cluster-autoscaler --version=$CHART_VERSION | yq '.image.repository')
TAG=$(helm show values autoscaling/cluster-autoscaler --version=$CHART_VERSION | yq '.image.tag')
echo "Image repo: $IMG_REPO" && echo "Image tag: $TAG"
echo "*********************"

## pull the image
$DOCKER_SUDO docker pull $IMG_REPO:$TAG

## create the ECR repo
aws ecr create-repository --no-cli-pager --repository-name cluster-autoscaler

## push the helm chart to the ECR repo
helm pull autoscaling/cluster-autoscaler --version=$CHART_VERSION
aws ecr get-login-password \
    --region $AWS_REGION | helm registry login \
    --username AWS \
    --password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
helm push cluster-autoscaler-$CHART_VERSION.tgz oci://$AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/
rm cluster-autoscaler-$CHART_VERSION.tgz

## retag the local image for ECR
$DOCKER_SUDO docker tag $IMG_REPO:$TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/cluster-autoscaler:$TAG

## authenticate the local docker client to ECR
aws ecr get-login-password --region $AWS_REGION | $DOCKER_SUDO docker login --username AWS --password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

## push the local image to ECR
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/cluster-autoscaler:$TAG
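
## optional sanity check: list the tags now in the cluster-autoscaler ECR repo,
## then pull the chart back from the OCI registry (requires Helm 3.8+)
# aws ecr describe-images --repository-name cluster-autoscaler --query 'imageDetails[].imageTags[]' --output text
# helm pull oci://$AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/cluster-autoscaler --version $CHART_VERSION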
@@ -0,0 +1,57 @@
#!/bin/bash

source 00_vars.sh

## get chart version from viya4-deployment repo
echo "**** cert-manager ****"
CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.CERT_MANAGER_CHART_VERSION')
echo "Helm chart version: $CHART_VERSION"

## get helm chart info
helm repo add jetstack https://charts.jetstack.io/
helm repo update
IMG_CONTROLLER=$(helm show values jetstack/cert-manager --version=$CHART_VERSION | yq '.image.repository')
IMG_WEBHOOK=$(helm show values jetstack/cert-manager --version=$CHART_VERSION | yq '.webhook.image.repository')
IMG_CAINJECTOR=$(helm show values jetstack/cert-manager --version=$CHART_VERSION | yq '.cainjector.image.repository')
IMG_STARTUP=$(helm show values jetstack/cert-manager --version=$CHART_VERSION | yq '.startupapicheck.image.repository')
echo "controller repo: $IMG_CONTROLLER" && echo "webhook repo: $IMG_WEBHOOK" && echo "cainjector repo: $IMG_CAINJECTOR" && echo "startupapicheck repo: $IMG_STARTUP"
echo "*********************"

## pull the images
$DOCKER_SUDO docker pull $IMG_CONTROLLER:v$CHART_VERSION
$DOCKER_SUDO docker pull $IMG_WEBHOOK:v$CHART_VERSION
$DOCKER_SUDO docker pull $IMG_CAINJECTOR:v$CHART_VERSION
$DOCKER_SUDO docker pull $IMG_STARTUP:v$CHART_VERSION

## create the ECR repos
aws ecr create-repository --no-cli-pager --repository-name cert-manager # this repo stores the helm chart
aws ecr create-repository --no-cli-pager --repository-name $IMG_CONTROLLER
aws ecr create-repository --no-cli-pager --repository-name $IMG_WEBHOOK
aws ecr create-repository --no-cli-pager --repository-name $IMG_CAINJECTOR
aws ecr create-repository --no-cli-pager --repository-name $IMG_STARTUP

## push the helm chart to the ECR repo
helm pull jetstack/cert-manager --version=$CHART_VERSION
aws ecr get-login-password \
    --region $AWS_REGION | helm registry login \
    --username AWS \
    --password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
helm push cert-manager-v$CHART_VERSION.tgz oci://$AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/
rm cert-manager-v$CHART_VERSION.tgz

## retag the local images for ECR
$DOCKER_SUDO docker tag $IMG_CONTROLLER:v$CHART_VERSION $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_CONTROLLER:v$CHART_VERSION
$DOCKER_SUDO docker tag $IMG_WEBHOOK:v$CHART_VERSION $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_WEBHOOK:v$CHART_VERSION
$DOCKER_SUDO docker tag $IMG_CAINJECTOR:v$CHART_VERSION $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_CAINJECTOR:v$CHART_VERSION
$DOCKER_SUDO docker tag $IMG_STARTUP:v$CHART_VERSION $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_STARTUP:v$CHART_VERSION

## authenticate the local docker client to ECR
aws ecr get-login-password --region $AWS_REGION | $DOCKER_SUDO docker login --username AWS --password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

## push the local images to ECR
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_CONTROLLER:v$CHART_VERSION
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_WEBHOOK:v$CHART_VERSION
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_CAINJECTOR:v$CHART_VERSION
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_STARTUP:v$CHART_VERSION
@@ -0,0 +1,85 @@
#!/bin/bash

source 00_vars.sh

## get chart version from viya4-deployment repo
echo -e "\n**** aws-ebs-csi-driver ****"
CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.EBS_CSI_DRIVER_CHART_VERSION')
echo "Helm chart version: $CHART_VERSION"

## get helm chart info
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
HELM_CHART=$(helm show all aws-ebs-csi-driver/aws-ebs-csi-driver --version=$CHART_VERSION)
# echo "$HELM_CHART"   # uncomment to inspect the full chart output
IMG_REPO=$(echo "$HELM_CHART" | yq -N '.image.repository | select(. != null)')
IMG_TAG=$(echo "$HELM_CHART" | yq -N '.appVersion | select(. != null)')
PROVISIONER_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.provisioner.image.repository | select(. != null)')
PROVISIONER_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.provisioner.image.tag | select(. != null)')
ATTACHER_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.attacher.image.repository | select(. != null)')
ATTACHER_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.attacher.image.tag | select(. != null)')
SNAPSHOTTER_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.snapshotter.image.repository | select(. != null)')
SNAPSHOTTER_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.snapshotter.image.tag | select(. != null)')
LIVENESS_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.livenessProbe.image.repository | select(. != null)')
LIVENESS_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.livenessProbe.image.tag | select(. != null)')
RESIZER_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.resizer.image.repository | select(. != null)')
RESIZER_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.resizer.image.tag | select(. != null)')
NODEREG_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.nodeDriverRegistrar.image.repository | select(. != null)')
NODEREG_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.nodeDriverRegistrar.image.tag | select(. != null)')
echo "Driver image repo: $IMG_REPO" && echo "Image tag: v$IMG_TAG"
echo "Provisioner image repo: $PROVISIONER_REPO" && echo "Image tag: $PROVISIONER_TAG"
echo "Attacher image repo: $ATTACHER_REPO" && echo "Image tag: $ATTACHER_TAG"
echo "Snapshotter image repo: $SNAPSHOTTER_REPO" && echo "Image tag: $SNAPSHOTTER_TAG"
echo "Liveness image repo: $LIVENESS_REPO" && echo "Image tag: $LIVENESS_TAG"
echo "Resizer image repo: $RESIZER_REPO" && echo "Image tag: $RESIZER_TAG"
echo "NodeDriverRegistrar image repo: $NODEREG_REPO" && echo "Image tag: $NODEREG_TAG"
echo "*********************"

## pull the images
$DOCKER_SUDO docker pull $IMG_REPO:v$IMG_TAG
$DOCKER_SUDO docker pull $PROVISIONER_REPO:$PROVISIONER_TAG
$DOCKER_SUDO docker pull $ATTACHER_REPO:$ATTACHER_TAG
$DOCKER_SUDO docker pull $SNAPSHOTTER_REPO:$SNAPSHOTTER_TAG
$DOCKER_SUDO docker pull $LIVENESS_REPO:$LIVENESS_TAG
$DOCKER_SUDO docker pull $RESIZER_REPO:$RESIZER_TAG
$DOCKER_SUDO docker pull $NODEREG_REPO:$NODEREG_TAG

## create the ECR repos
aws ecr create-repository --no-cli-pager --repository-name aws-ebs-csi-driver # this repo stores the helm chart
aws ecr create-repository --no-cli-pager --repository-name $IMG_REPO
aws ecr create-repository --no-cli-pager --repository-name $PROVISIONER_REPO
aws ecr create-repository --no-cli-pager --repository-name $ATTACHER_REPO
aws ecr create-repository --no-cli-pager --repository-name $SNAPSHOTTER_REPO
aws ecr create-repository --no-cli-pager --repository-name $LIVENESS_REPO
aws ecr create-repository --no-cli-pager --repository-name $RESIZER_REPO
aws ecr create-repository --no-cli-pager --repository-name $NODEREG_REPO

## push the helm chart to the ECR repo
helm pull aws-ebs-csi-driver/aws-ebs-csi-driver --version=$CHART_VERSION
aws ecr get-login-password \
    --region $AWS_REGION | helm registry login \
    --username AWS \
    --password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
helm push aws-ebs-csi-driver-$CHART_VERSION.tgz oci://$AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/
rm aws-ebs-csi-driver-$CHART_VERSION.tgz

## retag the local images for ECR
$DOCKER_SUDO docker tag $IMG_REPO:v$IMG_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_REPO:v$IMG_TAG
$DOCKER_SUDO docker tag $PROVISIONER_REPO:$PROVISIONER_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROVISIONER_REPO:$PROVISIONER_TAG
$DOCKER_SUDO docker tag $ATTACHER_REPO:$ATTACHER_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$ATTACHER_REPO:$ATTACHER_TAG
$DOCKER_SUDO docker tag $SNAPSHOTTER_REPO:$SNAPSHOTTER_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$SNAPSHOTTER_REPO:$SNAPSHOTTER_TAG
$DOCKER_SUDO docker tag $LIVENESS_REPO:$LIVENESS_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$LIVENESS_REPO:$LIVENESS_TAG
$DOCKER_SUDO docker tag $RESIZER_REPO:$RESIZER_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$RESIZER_REPO:$RESIZER_TAG
$DOCKER_SUDO docker tag $NODEREG_REPO:$NODEREG_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$NODEREG_REPO:$NODEREG_TAG

## authenticate the local docker client to ECR
aws ecr get-login-password --region $AWS_REGION | $DOCKER_SUDO docker login --username AWS --password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

## push the local images to ECR
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_REPO:v$IMG_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROVISIONER_REPO:$PROVISIONER_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$ATTACHER_REPO:$ATTACHER_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$SNAPSHOTTER_REPO:$SNAPSHOTTER_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$LIVENESS_REPO:$LIVENESS_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$RESIZER_REPO:$RESIZER_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$NODEREG_REPO:$NODEREG_TAG