
Stuck at FAILED - RETRYING: Wait for all control plane pods to come up and become ready #177

Open
mickey2012ex opened this issue Jan 10, 2020 · 4 comments

Comments

@mickey2012ex

I received the message below during the installation:

TASK [openshift_control_plane : Wait for all control plane pods to come up and become ready] ******************************************************************
FAILED - RETRYING: Wait for all control plane pods to come up and become ready (72 retries left).

How can I solve it?
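
For anyone debugging this: a minimal diagnostic sketch, assuming an OpenShift/OKD 3.11 install driven by openshift-ansible, where the control plane runs as static pods on the master and the master-logs helper is installed:

# Run these on the master node while the task is retrying:
docker ps -a | grep -E 'master-(api|controllers|etcd)'   # are the static pods up?
/usr/local/bin/master-logs api api                       # API server logs
/usr/local/bin/master-logs controllers controllers       # controller logs
/usr/local/bin/master-logs etcd etcd                     # etcd logs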

@DavJosKru

Same issue here.

@slaterx
Collaborator

slaterx commented Feb 13, 2020

Can you share more details about your attempt, including the environment you are using and your Ansible hosts file?

@mickey2012ex
Author

mickey2012ex commented Feb 14, 2020

I solved it by adding the lines below.

vi /etc/ansible/hosts

<your_ip> ansible_port=22 ansible_user=root ansible_ssh_pass=xxx
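
For reference, a minimal single-node /etc/ansible/hosts inventory for openshift-ansible 3.11 looks roughly like the sketch below (values are placeholders; the exact group variables depend on which installer you use):

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_user=root
openshift_deployment_type=origin
openshift_master_default_subdomain=apps.<your_domain>

[masters]
<your_ip>

[etcd]
<your_ip>

[nodes]
<your_ip> openshift_node_group_name='node-config-all-in-one'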

vi /etc/hosts

# OpenShift
<your_ip> console.localhost   # replace console.localhost with your domain name
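
Before re-running the installer it is worth confirming that the new name actually resolves to the node, e.g.:

getent hosts console.localhost   # should print <your_ip>
ping -c 1 console.localhost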

vi /etc/environment
http_proxy=http://name:[email protected]:8080
https_proxy=http://name:[email protected]:8080
LANGUAGE="en_US.UTF-8"
LC_ALL="en_US.UTF-8"
no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,.cluster.local,.svc,172.30.0.1,.localhost,.<your_ip>,<your_ip>,console.localhost,apps.localhost,localhost.localdomain"
docker_http_proxy=http://:@proxy.org.hk:8080
docker_https_proxy=http://:@proxy.org.hk:8080
docker_no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,.cluster.local,.svc,172.30.0.1,.localhost,.<your_ip>,<your_ip>,console.localhost,apps.localhost,localhost.localdomain"

Replace all of the localhost/domain entries with your own domain name.
If you are not behind a proxy, remove all of the proxy variables.
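
Note that on CentOS 7 the Docker daemon does not read /etc/environment, so if image pulls must go through the proxy, the usual approach is a systemd drop-in (a sketch, reusing the proxy values above):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://name:[email protected]:8080"
Environment="HTTPS_PROXY=http://name:[email protected]:8080"
Environment="NO_PROXY=localhost,127.0.0.1,172.30.0.1,.cluster.local,.svc"
EOF
systemctl daemon-reload
systemctl restart docker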

Then restart the machine and re-run the OpenShift installation.
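
If you are driving openshift-ansible directly, the re-run looks roughly like this (paths are assumptions; adjust them to your checkout):

ansible-playbook -i /etc/ansible/hosts openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i /etc/ansible/hosts openshift-ansible/playbooks/deploy_cluster.yml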

@barry-waldbaum-rl

I can see the same problem: GCP single node (n1-standard-16), nip.io, CentOS 7. The master-controllers pod is stuck at CrashLoopBackOff.

$ kubectl logs -f master-controllers-me -n kube-system
Error from server: Get https://me:10250/containerLogs/kube-system/master-controllers-me/controllers?follow=true: remote error: tls: internal error
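
A tls: internal error on port 10250 usually means the node's serving certificate was never issued, often because its CSRs are stuck pending. A quick check/fix sketch (this approves every outstanding CSR, so only do it on a cluster you trust):

oc get csr                                              # look for entries in Pending state
oc get csr -o name | xargs oc adm certificate approve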
