2023-02-24 14:10:38,039 - xtesting.ci.run_tests - INFO - Deployment description:
+-------------------------+----------------------------------------------------------+
| ENV VAR | VALUE |
+-------------------------+----------------------------------------------------------+
| CI_LOOP | daily |
| DEBUG | true |
| DEPLOY_SCENARIO | k8s-nosdn-nofeature-noha |
| INSTALLER_TYPE | unknown |
| BUILD_TAG | |
| NODE_NAME | |
| TEST_DB_URL | http://testresults.opnfv.org/test/api/v1/results |
| TEST_DB_EXT_URL | |
| S3_ENDPOINT_URL | |
| S3_DST_URL | |
| HTTP_DST_URL | |
+-------------------------+----------------------------------------------------------+
2023-02-24 14:10:38,049 - xtesting.ci.run_tests - INFO - Loading test case 'kube_bench_master'...
2023-02-24 14:10:38,418 - xtesting.ci.run_tests - INFO - Running test case 'kube_bench_master'...
2023-02-24 14:30:38,520 - xtesting.ci.run_tests - ERROR -
Please fix the testcase kube_bench_master.
All exceptions should be caught by the testcase instead!
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/xtesting/ci/run_tests.py", line 171, in run_test
    test_case.run(**kwargs)
  File "/usr/lib/python3.9/site-packages/functest_kubernetes/security/security.py", line 212, in run
    self.details["report"] = ast.literal_eval(self.pod_log)
  File "/usr/lib/python3.9/ast.py", line 62, in literal_eval
    node_or_string = parse(node_or_string, mode='eval')
  File "/usr/lib/python3.9/ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 0
SyntaxError: unexpected EOF while parsing
2023-02-24 14:30:38,522 - xtesting.ci.run_tests - ERROR - The test case 'kube_bench_master' failed.
2023-02-24 14:30:38,522 - xtesting.ci.run_tests - INFO - Execution exit value: Result.EX_ERROR
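A note on the traceback (my reading, not from the functest maintainers): since the pod never gets scheduled (see the kubectl output below), its log is empty, and ast.literal_eval() on an empty string raises exactly the SyntaxError shown above. A minimal sketch of the symptom and of the kind of guard the error message asks for; the pod_log value here is hypothetical:

```python
import ast

pod_log = ""  # hypothetical: what the testcase ends up reading when the pod never ran

try:
    report = ast.literal_eval(pod_log)
except (SyntaxError, ValueError) as exc:
    # On Python 3.9, ast.literal_eval("") raises SyntaxError
    # ("unexpected EOF while parsing"), matching the traceback above.
    report = None
    print(f"no parsable report found in the pod log: {exc}")
```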
The content of the env file is read successfully; as the log above shows, the environment variables are set properly inside the container. However, their values are somehow not respected (e.g., NON_BLOCKING_TAINTS), and the pod cannot be scheduled, as the kubectl output below shows (a short sketch for inspecting the node taints follows it).
If two worker nodes are not enough, then the test could also use the control and edge nodes.
kubectl get events -A --watch
NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE
kube-bench-f4dqn 56s Warning FailedScheduling pod/kube-bench-master-t5l89 0/6 nodes are available: 1 node(s) had taint {is_edge: true}, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {is_control: true}, that the pod didn't tolerate.
kube-bench-f4dqn 57s Normal SuccessfulCreate job/kube-bench-master Created pod: kube-bench-master-t5l89
kube-bench-f4dqn 0s Warning FailedScheduling pod/kube-bench-master-t5l89 0/6 nodes are available: 1 node(s) had taint {is_edge: true}, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {is_control: true}, that the pod didn't tolerate.
kube-bench-f4dqn 0s Warning FailedScheduling pod/kube-bench-master-t5l89 0/6 nodes are available: 1 node(s) had taint {is_edge: true}, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {is_control: true}, that the pod didn't tolerate.
kubectl get pods -n kube-bench-f4dqn -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-bench-master-t5l89 0/1 Pending 0 2m53s <none> <none> <none> <none>
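For reference, a minimal sketch (not the functest-kubernetes implementation) that lists the taints on every node with the official Kubernetes Python client, so they can be compared with whatever is passed via NON_BLOCKING_TAINTS; the is_control and is_edge keys are the ones reported in the events above:

```python
# Sketch: dump each node's taints so they can be checked against the
# keys supplied through NON_BLOCKING_TAINTS (is_control, is_edge, ...).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
for node in client.CoreV1Api().list_node().items:
    taints = node.spec.taints or []
    print(node.metadata.name, [(t.key, t.value, t.effect) for t in taints])
```

Reading the events above: the two worker nodes are excluded by the pod's node affinity/selector (kube_bench_master appears to target control-plane nodes), while the control and edge nodes carry the is_control/is_edge taints that the pod does not tolerate, which is presumably what NON_BLOCKING_TAINTS is meant to cover.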
Test opnfv/functest-kubernetes-security:v1.23 run_tests -t kube_bench_master unable to run because of taint
Overview
I've faced the following issue when running the Kubernetes security test kube_bench_master.
How did you run kube-bench?
What happened?
The test case failed. For more information, please check the attached file.
functest-kubernetes.debug.log
What did you expect to happen:
I expected the test case to execute successfully.
Environment
Running processes
The Pod can't be scheduled.
Configuration files
Anything else you would like to add:
The cluster contains: