The recovery job is never even launched on OpenShift, because creating the ConfigMap that stores the rclone config fails: the Ansible logic that builds the ConfigMap generates a label value longer than the 63 characters Kubernetes allows.
The relevant logs have been attached.
Relevant log output
TASK [Create configmap to save rclone config] *********************************************************************************
fatal: [localhost]: FAILED! => changed=false
error: 422
msg: 'ConfigMap restore-full-20240920113059-20240920125549: Failed to create object: b''{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"ConfigMap \\"restore-full-20240920113059-20240920125549\\" is invalid: metadata.labels: Invalid value: \\"db2-mas-masbrtest-prod-manage-full-20240920113059-20240920125549\\": must be no more than 63 characters","reason":"Invalid","details":{"name":"restore-full-20240920113059-20240920125549","kind":"ConfigMap","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: \\"db2-mas-masbrtest-prod-manage-full-20240920113059-20240920125549\\": must be no more than 63 characters","field":"metadata.labels"}]},"code":422}\n'''
reason: Unprocessable Entity
status: 422
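The failing value in the log above can be checked against the Kubernetes limit directly — a minimal sketch (the label value is copied verbatim from the error message):

```shell
# The Kubernetes/OpenShift limit for a label *value* is 63 characters.
label="db2-mas-masbrtest-prod-manage-full-20240920113059-20240920125549"
printf '%s' "$label" | wc -c   # 64 characters -> one over the limit
```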
Collection version
ibmmas/cli:10.9.2
Environment information
What happened?
oc login cluster SRC
vi /mnt/home/rclone.conf
[ibm-mas-br]
type = s3
provider = IBMCOS
endpoint = s3.eu-es.cloud-object-storage.appdomain.cloud
access_key_id = xxxx
secret_access_key = xxxx
location_constraint = eu-standard
acl = private
Full backup of all Manage data for the MAS_INSTANCE_ID instance and MAS_WORKSPACE_ID workspace:
export ANSIBLE_LOG_PATH=/tmp/ansible.log
export MASBR_CONFIRM_CLUSTER=false
export MASBR_CREATE_TASK_JOB=true
export MASBR_MASCLI_IMAGE_TAG=10.9.2
export MASBR_ACTION=backup
export MASBR_STORAGE_TYPE=cloud
export MASBR_STORAGE_CLOUD_RCLONE_FILE=/mnt/home/rclone.conf
export MASBR_STORAGE_CLOUD_RCLONE_NAME=ibm-mas-br
export MASBR_STORAGE_CLOUD_BUCKET=masbrdemo
export MAS_INSTANCE_ID=masbrtest
export MAS_WORKSPACE_ID=prod
export DB2_INSTANCE_NAME=mas-masbrtest-prod-manage
ansible-playbook ibm.mas_devops.br_manage
The backup completes successfully in the S3 bucket:
[ibmmas/cli:10.9.2]mascli$ rclone lsd --no-check-certificate --config /mnt/home/rclone.conf ibm-mas-br:masbrdemo/backups
0 2000-01-01 00:00:00 -1 core-masbrtest-full-20240920113059
0 2000-01-01 00:00:00 -1 db2-mas-masbrtest-prod-manage-full-20240920113059
0 2000-01-01 00:00:00 -1 manage-masbrtest-full-20240920113059
0 2000-01-01 00:00:00 -1 mongodb-masbrtest-full-20240920113059
[ibmmas/cli:10.9.2]mascli$
oc login cluster DST
Fresh MAS install with the same MAS_INSTANCE_ID and MAS_WORKSPACE_ID.
DB2 recovery from S3:
export ANSIBLE_LOG_PATH=/tmp/ansible.log
export MASBR_CONFIRM_CLUSTER=false
export MASBR_CREATE_TASK_JOB=true
export MASBR_MASCLI_IMAGE_TAG=10.9.2
export MASBR_ACTION=restore
export MASBR_STORAGE_TYPE=cloud
export MASBR_STORAGE_CLOUD_RCLONE_FILE=/mnt/home/rclone.conf
export MASBR_STORAGE_CLOUD_RCLONE_NAME=ibm-mas-br
export MASBR_RESTORE_FROM_VERSION=20240920113059
export MASBR_STORAGE_CLOUD_BUCKET=masbrdemo
export MAS_INSTANCE_ID=masbrtest
export MAS_WORKSPACE_ID=prod
export DB2_INSTANCE_NAME=mas-masbrtest-prod-manage
ansible-playbook ibm.mas_devops.br_db2
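One possible fix (a sketch only, not the actual role logic — the way the label is assembled here is an assumption inferred from the error message) would be to truncate the generated label value to the 63-character limit before creating the ConfigMap:

```shell
# Hypothetical: rebuild the label the way the error message suggests it is
# assembled, then truncate it to Kubernetes' 63-character label-value limit.
DB2_INSTANCE_NAME=mas-masbrtest-prod-manage
backup_version=20240920113059    # assumed: backup timestamp from the source cluster
restore_ts=20240920125549        # assumed: timestamp of the restore run
job_label="db2-${DB2_INSTANCE_NAME}-full-${backup_version}-${restore_ts}"
job_label="${job_label:0:63}"    # 64 chars -> trimmed to 63
echo "$job_label"
```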