CASMPET-7117: iSCSI SBPS provision and DNS config fails when HSN is not configured/ CASMPET-7126: CFS play for k8s labelling fails when it is already applied #284
Conversation
[Resolved review threads (outdated) on:
ansible/roles/csm.sbps.dns_srv_records/files/sbps_dns_srv_records.sh
ansible/roles/csm.sbps.dns_srv_records/files/sbps_get_host_hsn_nmn.sh
ansible/roles/csm.sbps.lio_config/files/provision_iscsi_server.sh]
Previously I didn't notice all the hard-coded references to …
Please run the shell scripts through shellcheck.
I have addressed all of the warnings shellcheck threw except the one below, from the file apply_k8s_label.sh.
It suggests using the -q (quiet) option, but that does not work here, since the command always returns 0 whether the label is present or not; -q only suppresses the output to stdout. I am also not opting for the --overwrite option merely to silence this shellcheck warning: since Ansible is idempotent, I don't want to overwrite existing config.
It works fine for me: …
You have to remove both …
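(For context, a minimal sketch of the -q behavior under discussion; the label and node name here are made-up examples, not values from apply_k8s_label.sh. With grep -q, the pipeline's exit status comes from grep itself, 0 on a match and 1 otherwise, even though kubectl exits 0 in both cases:)

```sh
#!/bin/sh
# Illustration only: the example label and node name are assumptions.
LABEL="iscsi=sbps"     # assumed example label
HOST_NAME="worker01"   # assumed example node name

# The pipeline's exit status is grep's: 0 if the node name appears in the
# labelled-node listing, 1 if it does not. kubectl exiting 0 in both cases
# does not matter, because grep is the last command in the pipe.
if kubectl get nodes -l "$LABEL" --no-headers | grep -q "$HOST_NAME"; then
    echo "label already applied to $HOST_NAME"
else
    kubectl label nodes "$HOST_NAME" "$LABEL"
fi
```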
@ravikanth-nalla-hpe, this looks good. Fix the final shellcheck error and this is ready to be merged; please squash everything down to a single commit.
Force-pushed from 932cc8a to 3c42bff
LGTM, thanks!
@mharding-hpe Can you please merge this PR to the master branch?
```diff
@@ -42,4 +42,4 @@
 # Configure SBPS DNS "SRV" and "A" records
 - role: csm.sbps.dns_srv_records
 # Mount s3 bucket 'boot-images' using s3fs read-only policy for SBPS agent
-- role: csm.sbps.mount_s3_images
+#- role: csm.sbps.mount_s3_images
```
We don't want to check in commented-out lines. If this is no longer needed, we should just remove the line.
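(A sketch of the removal being requested, against the playbook hunk quoted above; not a committed patch:)

```diff
 # Mount s3 bucket 'boot-images' using s3fs read-only policy for SBPS agent
-#- role: csm.sbps.mount_s3_images
```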
```sh
if ! kubectl get nodes -l $LABEL | grep -q "$HOST_NAME"
then
    kubectl label nodes "$HOST_NAME" $LABEL
fi
```
```diff
-if ! kubectl get nodes -l $LABEL | grep -q "$HOST_NAME"
-then
-    kubectl label nodes "$HOST_NAME" $LABEL
-fi
+# If the label is already applied to this node, then exit
+kubectl get nodes -l "$LABEL" --no-headers | grep -Eq "^${HOST_NAME}[[:space:]]" && exit 0
+# Otherwise, apply the label to the node
+kubectl label nodes "$HOST_NAME" "$LABEL"
```
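(A note on why the suggested form is more robust: --no-headers drops kubectl's header row so only node rows are matched, and anchoring the pattern as ^${HOST_NAME}[[:space:]] keeps a node name that is a prefix of another, say worker1 vs worker10, from producing a false match. Quoting "$LABEL" and "$HOST_NAME" also addresses the shellcheck quoting warnings.)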
First we will need to merge it into develop, once we resolve the couple of small review comments; then we can merge it to master.
You'll also need to rebase on top of the changes that have been made to …
CASMPET-7126: CFS play for k8s labelling fails when it is already applied
- fixed iSCSI SBPS provisioning to exclude HSN portal config when HSN is not configured
- fixed to avoid DNS SRV and A records creation for HSN when HSN is not configured
- fixed to avoid applying the k8s node label when it is already applied
Force-pushed from 2e8f978 to 68e7187
Summary and Scope
CASMPET-7117: iSCSI SBPS: LIO provision and DNS records config fails when HSN is not configured
CASMPET-7126: iSCSI SBPS: k8s labelling fails when it is already applied
Issues and Related PRs
Original EPIC: https://jira-pro.it.hpe.com:8443/browse/CASMPET-6797
Testing
Tested on bare-metal systems with multiple worker nodes.
Tested on: "starlord" and "Odin"
Test description (see the sketch after this list for the HSN-skip behavior):
- Verified LIO provisioning where the HSN network is not configured: it does not fail the overall config; instead it ignores HSN and creates the rest of the portals (NMN and CMN).
- Verified LIO provisioning where the HSN network is configured: it creates all three network portals (HSN, NMN, and CMN).
- Verified DNS "SRV" and "A" record creation where the HSN network is not configured: it creates "SRV" and "A" records for NMN without any failure.
- Verified DNS "SRV" and "A" record creation where the HSN network is configured: it creates "SRV" and "A" records for both HSN and NMN without any failure.
- Verified k8s node labelling both fresh and when the label is already applied: labelling succeeds without any failure in both cases.
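(A minimal sketch of the HSN-skip behavior exercised by the first test above; the interface name "hsn0" and the create_portal helper are assumptions for illustration, not the actual provision_iscsi_server.sh logic:)

```sh
#!/bin/sh
# Illustration only: "hsn0" and create_portal are assumed names.
create_portal() {
    net="$1"; ip_addr="$2"
    echo "creating iSCSI portal for $net at $ip_addr:3260"
}

# Look for an IPv4 address on the HSN interface; empty means HSN is absent.
HSN_IP=$(ip -4 -o addr show dev hsn0 2>/dev/null | awk '{print $4}' | cut -d/ -f1 | head -n1)
if [ -z "$HSN_IP" ]; then
    echo "HSN not configured on this node; skipping HSN portal"
else
    create_portal "HSN" "$HSN_IP"
fi
# NMN and CMN portals would be created unconditionally after this point,
# so a missing HSN never fails the overall provisioning.
```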
Were the install/upgrade-based validation checks/tests run (goss tests/install-validation doc)?
Were continuous integration tests run? If not, why?
Was upgrade tested? If not, why?
Was downgrade tested? If not, why?
Were new tests (or test issues/Jiras) created for this change?
Risks and Mitigations
None
Pull Request Checklist