
feat: (IAC-1117) dark site deployment #542

Open · wants to merge 13 commits into base: staging
48 changes: 42 additions & 6 deletions .ansible-lint
@@ -1,17 +1,53 @@
var_naming_pattern: "^[a-zA-Z0-9_]*$"
---
# .ansible-lint

parseable: true
profile: moderate
verbosity: 1
strict: true

# Enforce variable names to follow pattern below, in addition to Ansible own
# requirements, like avoiding python identifiers. To disable add `var-naming`
# to skip_list.
var_naming_pattern: ^[a-zA-Z0-9_]*$

use_default_rules: true

# Ansible-lint is able to recognize and load skip rules stored inside
# `.ansible-lint-ignore` (or `.config/ansible-lint-ignore.txt`) files.
# To skip a rule just enter filename and tag, like "playbook.yml package-latest"
# on a new line.
skip_list:
- role-name # DAC role names contain dashes, can be ignored
- yaml[line-length] # it's easier to understand/debug the underlying command when it's not broken up
- name[template] # task names use Jinja templates, this can be ignored
- var-naming

# Ansible-lint does not automatically load rules that have the 'opt-in' tag.
# You must enable opt-in rules by listing each rule 'id' below.
enable_list:
- args
- empty-string-compare
- no-log-password
- no-same-owner
- yaml

# exclude_paths included in this file are parsed relative to this file's location
# and not relative to the CWD of execution. CLI arguments passed to the --exclude
# option are parsed relative to the CWD of execution.
exclude_paths:
- .git/
- .gitignore
- .cache/
- roles/istio
- roles/vdm/tasks/deploy.yaml # TODO schema[tasks] error for a docker 'Deploy BLT - Deploy SAS Viya' task
- .github/workflows # non ansible files

skip_list:
- unnamed-task
- role-name
- var-naming
# Offline mode disables installation of requirements.yml and schema refreshing
offline: false

# Define required Ansible's variables to satisfy syntax check
extra_vars:
deployment_type: vsphere

warn_list:
- experimental
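
As a side note, the .ansible-lint-ignore mechanism referenced in the comments above takes one "<file> <rule>" pair per line. A minimal sketch of creating such a file (the entries are hypothetical, for illustration only):

cat > .ansible-lint-ignore <<'EOF'
playbooks/site.yml package-latest
roles/vdm/tasks/deploy.yaml schema[tasks]
EOF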
6 changes: 3 additions & 3 deletions .github/workflows/linter-analysis.yaml
@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Repo
uses: actions/checkout@v3
uses: actions/checkout@v4

- name: Run Hadolint Action
uses: jbergstroem/[email protected]
@@ -25,7 +25,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Repo
uses: actions/checkout@v3
uses: actions/checkout@v4

# .shellcheckrc is read from the current dir
- name: Copy Config to Parent Level Directory
@@ -42,7 +42,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Repo
uses: actions/checkout@v3
uses: actions/checkout@v4

# The latest ansible/ansible-lint-action removed the
# ability to specify configs from other dirs
1 change: 1 addition & 0 deletions .gitignore
@@ -5,6 +5,7 @@
.galaxy_install_info

## ignore ansible-vars.yml
.pre-commit-config.yaml
ansible-vars.yml
ansible-vars.yaml

1 change: 1 addition & 0 deletions linting-configs/.ansible-lint
@@ -41,6 +41,7 @@ exclude_paths:
- roles/istio
- roles/vdm/tasks/deploy.yaml # TODO schema[tasks] error for a docker 'Deploy BLT - Deploy SAS Viya' task
- .github/workflows # non ansible files
- viya4-deployment-darksite/deployment-machine-assets/software/ansible-vars-iac.yaml # dark site ansible-vars.yaml file template

# Offline mode disables installation of requirements.yml and schema refreshing
offline: false
13 changes: 13 additions & 0 deletions viya4-deployment-darksite/baseline-to-ecr/00_vars.sh
@@ -0,0 +1,13 @@
#!/bin/bash

# Copyright © 2020-2024, SAS Institute Inc., Cary, NC, USA. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

## set variables
AWS_ACCT_ID=
AWS_REGION=

K8S_minor_version=25 # K8s v1.22.X minor version would be 22, K8s v1.21.X minor version would be 21. This must match your deployment!
DEPLOYMENT_VERSION=main # main will pull latest release of viya4-deployment. But this can be set to a specific version if needed, example: 5.2.0

DOCKER_SUDO= # set to sudo if your docker commands require sudo; otherwise leave blank
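
For illustration, a filled-in 00_vars.sh might look like the following; the account ID, region, and versions are placeholders, not defaults:

AWS_ACCT_ID=123456789012      # hypothetical account ID
AWS_REGION=us-east-1
K8S_minor_version=25          # for a 1.25.x cluster
DEPLOYMENT_VERSION=6.3.0      # pin to a specific viya4-deployment release instead of main
DOCKER_SUDO=sudo              # only if your user cannot run docker directly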
14 changes: 14 additions & 0 deletions viya4-deployment-darksite/baseline-to-ecr/01_run_all.sh
@@ -0,0 +1,14 @@
#!/bin/bash

# Copyright © 2020-2024, SAS Institute Inc., Cary, NC, USA. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

source 00_vars.sh

. auto_scaler.sh
. cert_manager.sh
. ingress_nginx.sh
. metrics_server.sh
. nfs_subdir_external_provisioner.sh
. openldap.sh
. ebs_driver.sh
13 changes: 13 additions & 0 deletions viya4-deployment-darksite/baseline-to-ecr/README.md
@@ -0,0 +1,13 @@
These scripts assume your AWS CLI and kubeconfig are already configured!

Notes:
- requires helm, yq, and the AWS CLI
- these scripts push the Helm charts and corresponding container images to ECR for each baseline item
- the chart versions are set automatically based on the version of DAC (viya4-deployment) you specify

## Step 1: Set your variables
- Set your variables in 00_vars.sh

## Step 2: Run script(s)
- Option 1: run 01_run_all.sh (runs all scripts)
- Option 2: run scripts individually (see the example run below)
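
For example, a typical run might look like this (a sketch; it assumes the AWS CLI and kubeconfig are already configured and that the scripts are executed from this directory):

vi 00_vars.sh        # set AWS_ACCT_ID, AWS_REGION, K8S_minor_version, DEPLOYMENT_VERSION, DOCKER_SUDO
bash 01_run_all.sh   # or run a single item, e.g.: bash cert_manager.sh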
57 changes: 57 additions & 0 deletions viya4-deployment-darksite/baseline-to-ecr/auto_scaler.sh
@@ -0,0 +1,57 @@
#!/bin/bash

# Copyright © 2020-2024, SAS Institute Inc., Cary, NC, USA. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

source 00_vars.sh

# account for v6.3.0+ changes - autoscaler now supports k8s 1.25
DV=$(echo $DEPLOYMENT_VERSION | sed 's/\.//g')
if [ $DEPLOYMENT_VERSION == "main" ] && [ $K8S_minor_version -ge 25 ]; then
CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.autoscalerVersions.PDBv1Support.api.chartVersion')
elif [ $DEPLOYMENT_VERSION == "main" ] && [ $K8S_minor_version -le 24 ]; then
CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.autoscalerVersions.PDBv1beta1Support.api.chartVersion')
elif [ $DV -ge 630 ] && [ $K8S_minor_version -ge 25 ]; then
CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.autoscalerVersions.PDBv1Support.api.chartVersion')
elif [ $DV -ge 630 ] && [ $K8S_minor_version -le 24 ]; then
CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.autoscalerVersions.PDBv1beta1Support.api.chartVersion')
elif [ $DV -le 620 ] ; then
CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.CLUSTER_AUTOSCALER_CHART_VERSION')
fi

## get chart version from viya4-deployment repo
echo "**** cluster-autoscaler ****"
echo "Helm chart version: $CHART_VERSION"
## Get helm chart info
helm repo add autoscaling https://kubernetes.github.io/autoscaler
helm repo update
IMG_REPO=$(helm show values autoscaling/cluster-autoscaler --version=$CHART_VERSION | yq '.image.repository')
TAG=$(helm show values autoscaling/cluster-autoscaler --version=$CHART_VERSION | yq '.image.tag')
echo "Image repo: $IMG_REPO" && echo "Image tag: $TAG"
echo "*********************"

## pull the image
$DOCKER_SUDO docker pull $IMG_REPO:$TAG


# create ECR repo
aws ecr create-repository --no-cli-pager --repository-name cluster-autoscaler

# push the helm chart to the ECR repo
helm pull autoscaling/cluster-autoscaler --version=$CHART_VERSION
aws ecr get-login-password \
--region $AWS_REGION | helm registry login \
--username AWS \
--password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
helm push cluster-autoscaler-$CHART_VERSION.tgz oci://$AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/
rm cluster-autoscaler-$CHART_VERSION.tgz

## update local image tag appropriately
$DOCKER_SUDO docker tag $IMG_REPO:$TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/cluster-autoscaler:$TAG


## auth local docker to ECR
aws ecr get-login-password --region $AWS_REGION | $DOCKER_SUDO docker login --username AWS --password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

## push local image to ECR
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/cluster-autoscaler:$TAG
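
To spot-check the result, both the chart artifact (tagged with the chart version) and the container image (tagged with the image tag) should appear in the same ECR repository. A minimal verification sketch using only the AWS CLI:

aws ecr describe-images --no-cli-pager --repository-name cluster-autoscaler \
    --query 'imageDetails[].imageTags' --output table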
57 changes: 57 additions & 0 deletions viya4-deployment-darksite/baseline-to-ecr/cert_manager.sh
@@ -0,0 +1,57 @@
#!/bin/bash

# Copyright © 2020-2024, SAS Institute Inc., Cary, NC, USA. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

source 00_vars.sh

## get chart version from viya4-deployment repo
echo "**** cert-manager ****"
CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.CERT_MANAGER_CHART_VERSION')
echo "Helm chart version: $CHART_VERSION"
## Get helm chart info
helm repo add jetstack https://charts.jetstack.io/
helm repo update
IMG_CONTROLLER=$(helm show values jetstack/cert-manager --version=$CHART_VERSION | yq '.image.repository')
IMG_WEBHOOK=$(helm show values jetstack/cert-manager --version=$CHART_VERSION | yq '.webhook.image.repository')
IMG_CAINJECTOR=$(helm show values jetstack/cert-manager --version=$CHART_VERSION | yq '.cainjector.image.repository')
IMG_STARTUP=$(helm show values jetstack/cert-manager --version=$CHART_VERSION | yq '.startupapicheck.image.repository')
echo "controller repo: $IMG_CONTROLLER" && echo "webhook repo: $IMG_WEBHOOK" && echo "cainject repo: $IMG_CAINJECTOR" && echo "startupapicheck repo: $IMG_STARTUP"
echo "*********************"

## pull the images
$DOCKER_SUDO docker pull $IMG_CONTROLLER:v$CHART_VERSION
$DOCKER_SUDO docker pull $IMG_WEBHOOK:v$CHART_VERSION
$DOCKER_SUDO docker pull $IMG_CAINJECTOR:v$CHART_VERSION
$DOCKER_SUDO docker pull $IMG_STARTUP:v$CHART_VERSION

# create ECR repos
aws ecr create-repository --no-cli-pager --repository-name cert-manager # this repo is used to store the helm chart
aws ecr create-repository --no-cli-pager --repository-name $IMG_CONTROLLER
aws ecr create-repository --no-cli-pager --repository-name $IMG_WEBHOOK
aws ecr create-repository --no-cli-pager --repository-name $IMG_CAINJECTOR
aws ecr create-repository --no-cli-pager --repository-name $IMG_STARTUP

# push the helm charts to the ECR repo
helm pull jetstack/cert-manager --version=$CHART_VERSION
aws ecr get-login-password \
--region $AWS_REGION | helm registry login \
--username AWS \
--password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
helm push cert-manager-v$CHART_VERSION.tgz oci://$AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/
rm cert-manager-v$CHART_VERSION.tgz

## update local image tags appropriately
$DOCKER_SUDO docker tag $IMG_CONTROLLER:v$CHART_VERSION $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_CONTROLLER:v$CHART_VERSION
$DOCKER_SUDO docker tag $IMG_WEBHOOK:v$CHART_VERSION $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_WEBHOOK:v$CHART_VERSION
$DOCKER_SUDO docker tag $IMG_CAINJECTOR:v$CHART_VERSION $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_CAINJECTOR:v$CHART_VERSION
$DOCKER_SUDO docker tag $IMG_STARTUP:v$CHART_VERSION $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_STARTUP:v$CHART_VERSION

## auth local docker to ECR
aws ecr get-login-password --region $AWS_REGION | $DOCKER_SUDO docker login --username AWS --password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

## push local images to ECR
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_CONTROLLER:v$CHART_VERSION
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_WEBHOOK:v$CHART_VERSION
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_CAINJECTOR:v$CHART_VERSION
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_STARTUP:v$CHART_VERSION
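
A quick way to confirm that the chart repository and the four image repositories above were created (a sketch; exact names depend on the repositories returned by the chart values):

aws ecr describe-repositories --no-cli-pager \
    --query "repositories[?contains(repositoryName, 'cert-manager')].repositoryName" --output table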
88 changes: 88 additions & 0 deletions viya4-deployment-darksite/baseline-to-ecr/ebs_driver.sh
@@ -0,0 +1,88 @@
#!/bin/bash

# Copyright © 2020-2024, SAS Institute Inc., Cary, NC, USA. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

source 00_vars.sh

## get chart version from viya4-deployment repo
echo -e "\n**** aws-ebs-csi-driver ****"
CHART_VERSION=$(curl -s https://raw.githubusercontent.com/sassoftware/viya4-deployment/$DEPLOYMENT_VERSION/roles/baseline/defaults/main.yml | yq '.EBS_CSI_DRIVER_CHART_VERSION')
echo "Helm chart version: $CHART_VERSION"
## Get helm chart info
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
HELM_CHART=$(helm show all aws-ebs-csi-driver/aws-ebs-csi-driver --version=$CHART_VERSION)
# echo "$HELM_CHART"
IMG_REPO=$(echo "$HELM_CHART" | yq -N '.image.repository | select(. != null)')
IMG_TAG=$(echo "$HELM_CHART" | yq -N '.appVersion | select(. != null)')
PROVISIONER_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.provisioner.image.repository | select(. != null)')
PROVISIONER_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.provisioner.image.tag | select(. != null)')
ATTACHER_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.attacher.image.repository | select(. != null)')
ATTACHER_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.attacher.image.tag | select(. != null)')
SNAPSHOTTER_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.snapshotter.image.repository | select(. != null)')
SNAPSHOTTER_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.snapshotter.image.tag | select(. != null)')
LIVENESS_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.livenessProbe.image.repository | select(. != null)')
LIVENESS_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.livenessProbe.image.tag | select(. != null)')
RESIZER_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.resizer.image.repository | select(. != null)')
RESIZER_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.resizer.image.tag | select(. != null)')
NODEREG_REPO=$(echo "$HELM_CHART" | yq -N '.sidecars.nodeDriverRegistrar.image.repository | select(. != null)')
NODEREG_TAG=$(echo "$HELM_CHART" | yq -N '.sidecars.nodeDriverRegistrar.image.tag | select(. != null)')
echo "Driver image repo: $IMG_REPO" && echo "Image tag: v$IMG_TAG"
echo "Provisioning image repo: $PROVISIONER_REPO" && echo "Image tag: $PROVISIONER_TAG"
echo "Attacher image repo: $ATTACHER_REPO" && echo "Image tag: $ATTACHER_TAG"
echo "Snapshotter image repo: $SNAPSHOTTER_REPO" && echo "Image tag: $SNAPSHOTTER_TAG"
echo "Liveness image repo: $LIVENESS_REPO" && echo "Image tag: $LIVENESS_TAG"
echo "Resizer image repo: $RESIZER_REPO" && echo "Image tag: $RESIZER_TAG"
echo "NodeDriverRegister image repo: $NODEREG_REP" && echo "Image tag: $NODEREG_TAG"
echo "*********************"

## pull the image
$DOCKER_SUDO docker pull $IMG_REPO:v$IMG_TAG
$DOCKER_SUDO docker pull $PROVISIONER_REPO:$PROVISIONER_TAG
$DOCKER_SUDO docker pull $ATTACHER_REPO:$ATTACHER_TAG
$DOCKER_SUDO docker pull $SNAPSHOTTER_REPO:$SNAPSHOTTER_TAG
$DOCKER_SUDO docker pull $LIVENESS_REPO:$LIVENESS_TAG
$DOCKER_SUDO docker pull $RESIZER_REPO:$RESIZER_TAG
$DOCKER_SUDO docker pull $NODEREG_REPO:$NODEREG_TAG

# create ECR repo
aws ecr create-repository --no-cli-pager --repository-name aws-ebs-csi-driver # this is to house the helm chart
aws ecr create-repository --no-cli-pager --repository-name $IMG_REPO
aws ecr create-repository --no-cli-pager --repository-name $PROVISIONER_REPO
aws ecr create-repository --no-cli-pager --repository-name $ATTACHER_REPO
aws ecr create-repository --no-cli-pager --repository-name $SNAPSHOTTER_REPO
aws ecr create-repository --no-cli-pager --repository-name $LIVENESS_REPO
aws ecr create-repository --no-cli-pager --repository-name $RESIZER_REPO
aws ecr create-repository --no-cli-pager --repository-name $NODEREG_REPO

# push the helm chart to the ECR repo
helm pull aws-ebs-csi-driver/aws-ebs-csi-driver --version=$CHART_VERSION
aws ecr get-login-password \
--no-cli-pager \
--region $AWS_REGION | helm registry login \
--username AWS \
--password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
helm push aws-ebs-csi-driver-$CHART_VERSION.tgz oci://$AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/
rm aws-ebs-csi-driver-$CHART_VERSION.tgz

# update local image tag appropriately
$DOCKER_SUDO docker tag $IMG_REPO:v$IMG_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_REPO:v$IMG_TAG
$DOCKER_SUDO docker tag $PROVISIONER_REPO:$PROVISIONER_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROVISIONER_REPO:$PROVISIONER_TAG
$DOCKER_SUDO docker tag $ATTACHER_REPO:$ATTACHER_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$ATTACHER_REPO:$ATTACHER_TAG
$DOCKER_SUDO docker tag $SNAPSHOTTER_REPO:$SNAPSHOTTER_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$SNAPSHOTTER_REPO:$SNAPSHOTTER_TAG
$DOCKER_SUDO docker tag $LIVENESS_REPO:$LIVENESS_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$LIVENESS_REPO:$LIVENESS_TAG
$DOCKER_SUDO docker tag $RESIZER_REPO:$RESIZER_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$RESIZER_REPO:$RESIZER_TAG
$DOCKER_SUDO docker tag $NODEREG_REPO:$NODEREG_TAG $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$NODEREG_REPO:$NODEREG_TAG

# auth local docker to ecr
aws ecr get-login-password --region $AWS_REGION | $DOCKER_SUDO docker login --username AWS --password-stdin $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

# push local images to ECR
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMG_REPO:v$IMG_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$PROVISIONER_REPO:$PROVISIONER_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$ATTACHER_REPO:$ATTACHER_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$SNAPSHOTTER_REPO:$SNAPSHOTTER_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$LIVENESS_REPO:$LIVENESS_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$RESIZER_REPO:$RESIZER_TAG
$DOCKER_SUDO docker push $AWS_ACCT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$NODEREG_REPO:$NODEREG_TAG
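
Once all of the baseline scripts have run, a loop like the following can confirm that each mirrored ECR repository contains the expected tags (a sketch using only the AWS CLI):

for repo in $(aws ecr describe-repositories --no-cli-pager --query 'repositories[].repositoryName' --output text); do
    echo "== $repo =="
    aws ecr list-images --no-cli-pager --repository-name "$repo" --query 'imageIds[].imageTag' --output text
done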