Create and try out an OKD cluster, the open-source distribution of Red Hat OpenShift. (Based on OKD 4.12 : k8s 1.25)
OKD overview (Korean): https://velog.io/@_gyullbb/OKD-%EA%B0%9C%EC%9A%94
Environment
- OKD Console : https://console-openshift-console.apps.okd4.ktdemo.duckdns.org/
- OKD API : https://api.okd4.ktdemo.duckdns.org:6443
- Minio Object Storage : https://minio-minino.apps.okd4.ktdemo.duckdns.org/
- Harbor Private Docker Registry : https://myharbor.apps.okd4.ktdemo.duckdns.org/

Contents
- Domain creation
- Installation infrastructure
- Bastion server installation and configuration
- Cluster creation preparation
- Cluster creation
- Post-installation tasks
- Cloud Shell installation and CoreOS settings
- ArgoCD installation
- Minio Object Storage installation (w/ NFS)
- Dynamic Provisioning installation
- Harbor (Private Registry) installation and configuration
- Joining Compute (Worker) nodes
- Backing up etcd
OKD installation requires static IPs; if you do not have one, you need a domain instead, and a free domain (duckdns) can be created as follows.
Go to https://www.duckdns.org/, sign up, and map your router's WAN IP to a domain. The mapping is refreshed periodically even if the WAN IP changes.
Create ktdemo.duckdns.org. If you want to change the IP, enter it manually and press update.
This domain becomes the base domain for the installation.
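duckdns also offers a simple HTTP update API, so the WAN IP can be refreshed from the bastion with a cron job instead of the web UI. A minimal sketch, assuming your own duckdns token (the token below is a placeholder):

# update the ktdemo domain with the caller's current public IP (an empty ip= means "use my IP")
curl -s "https://www.duckdns.org/update?domains=ktdemo&token=<YOUR_DUCKDNS_TOKEN>&ip="
# prints OK on success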
A total of 3 nodes are needed for the initial installation, and the bootstrap server can be removed after the master node is installed.
Type | Hypervisor | IP | Hostname | Role | OS | Spec | Notes |
---|---|---|---|---|---|---|---|
VM | proxmox | 192.168.1.247 | bastion.okd4.ktdemo.duckdns.org | Bastion (LB, DNS) | CentOS 8 Stream | 2 core / 4 G / 30 G | |
VM | proxmox | 192.168.1.128 | bootstrap.okd4.ktdemo.duckdns.org | Bootstrap | Fedora CoreOS 37 | 2 core / 6 G / 40 G | |
VM | vmware | 192.168.1.146 | okd-1.okd4.ktdemo.duckdns.org | Master/Worker | Fedora CoreOS 37 | 8 core / 20 G / 200 G | Base OS Windows 11 |
VM | proxmox | 192.168.1.148 | okd-2.okd4.ktdemo.duckdns.org | Worker | Fedora CoreOS 37 | 2 core / 16 G / 300 G | Additional worker node |
VM | proxmox | 192.168.1.149 | okd-3.okd4.ktdemo.duckdns.org | Worker | Fedora CoreOS 37 | 4 core / 8 G / 300 G | Additional worker node |
The bastion server runs CentOS 8 Stream and is installed on the proxmox host.
The OS installation steps are omitted.
After installation, activate the network connection in CentOS so the server gets an IP.
Install the vim editor and the tar and wget packages:
[root@localhost shclub]# dnf install -y vim bash-completion tcpdump tar wget
[root@localhost shclub]# hostnamectl set-hostname bastion.okd4.ktdemo.duckdns.org
[root@localhost shclub]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
Disable the firewall.
[root@localhost shclub]# systemctl disable firewalld --now
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Check the nameserver setting in the bastion server's resolv.conf; to install packages from external repositories, point it at the router's IP. (The search entry does not matter.)
[root@bastion shclub]# vi /etc/resolv.conf
# Generated by NetworkManager
search okd4.ktdemo.duckdns.org
nameserver 192.168.1.1
Install HAProxy to use as the load balancer.
[root@localhost shclub]# yum install -y haproxy
Edit its configuration:
[root@bastion shclub]# vi /etc/haproxy/haproxy.cfg
# Global settings
#---------------------------------------------------------------------
global
maxconn 20000
log /dev/log local0 info
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
log global
mode http
option httplog
option dontlognull
option http-server-close
option redispatch
option forwardfor except 127.0.0.0/8
retries 3
maxconn 20000
timeout http-request 10000ms
timeout http-keep-alive 10000ms
timeout check 10000ms
timeout connect 40000ms
timeout client 300000ms
timeout server 300000ms
timeout queue 50000ms
# Enable HAProxy stats
listen stats
bind :9000
mode http
stats enable
stats uri /
stats refresh 5s
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
balance roundrobin
server static 127.0.0.1:4331 check
# OKD API Server
frontend openshift_api_frontend
bind *:6443
default_backend openshift_api_backend
mode tcp
option tcplog
backend openshift_api_backend
mode tcp
balance source
server bootstrap 192.168.1.128:6443 check # bootstrap server
server okd-1 192.168.1.146:6443 check # okd master
server okd-2 192.168.1.148:6443 check # okd worker
server okd-3 192.168.1.149:6443 check # additional okd worker
# OKD Machine Config Server
frontend okd_machine_config_server_frontend
mode tcp
bind *:22623
default_backend okd_machine_config_server_backend
backend okd_machine_config_server_backend
mode tcp
balance source
server bootstrap 192.168.1.128:22623 check # bootstrap server
server okd-1 192.168.1.146:22623 check # okd master
server okd-2 192.168.1.148:22623 check # okd worker
server okd-3 192.168.1.149:22623 check # additional okd worker
# OKD Ingress - layer 4 tcp mode for each. Ingress Controller will handle layer 7.
frontend okd_http_ingress_frontend
bind *:80
default_backend okd_http_ingress_backend
mode tcp
backend okd_http_ingress_backend
balance source
mode tcp
server okd-1 192.168.1.146:80 check # okd master
server okd-2 192.168.1.148:80 check # okd worker
server okd-3 192.168.1.149:80 check # additional okd worker
frontend okd_https_ingress_frontend
bind *:443
default_backend okd_https_ingress_backend
mode tcp
backend okd_https_ingress_backend
mode tcp
balance source
server okd-1 192.168.1.146:443 check
server okd-2 192.168.1.148:443 check
server okd-3 192.168.1.149:443 check # additional okd worker
Enable and start haproxy.
[root@localhost shclub]# systemctl enable haproxy --now
If an error occurs while starting the service, check its status.
[root@bastion shclub]# systemctl status haproxy
If a bind error occurs, set the SELinux boolean as follows and restart:
[root@bastion shclub]# setsebool -P haproxy_connect_any=1
[root@bastion shclub]# systemctl restart haproxy
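Once haproxy is running, it is worth confirming that the frontends defined above are actually listening. A quick check (ss is part of iproute, already present on CentOS 8 Stream):

[root@bastion shclub]# ss -lntp | grep haproxy
# expect listeners on ports 80, 443, 6443, 9000 and 22623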
To provision the bootstrap, master, and worker nodes, a web server is set up on the bastion so the nodes can download their ignition files during installation. (Apache is used here.)
After installing httpd, change the default port from 80 to 8080, since port 80 is already used by the HAProxy ingress frontend, and then start the service.
[root@localhost ~]# dnf install -y httpd
[root@localhost ~]# vi /etc/httpd/conf/httpd.conf
[root@localhost ~]# cat /etc/httpd/conf/httpd.conf | grep Listen
# Listen: Allows you to bind Apache to specific IP addresses and/or
# Change this to Listen on specific IP addresses as shown below to
#Listen 12.34.56.78:80
Listen 8080
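If you prefer not to edit the file interactively, the same change can be made with a one-line sed (a sketch; it assumes the stock httpd.conf where the directive is exactly "Listen 80"):

[root@localhost ~]# sed -i 's/^Listen 80$/Listen 8080/' /etc/httpd/conf/httpd.conf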
Enable the Apache web server and restart it.
[root@localhost ~]# systemctl enable httpd --now
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@localhost ~]# systemctl restart httpd
Install bind and configure the DNS server.
[root@localhost ~]# dnf install -y bind bind-utils
[root@localhost ~]# systemctl enable named --now
Created symlink /etc/systemd/system/multi-user.target.wants/named.service → /usr/lib/systemd/system/named.service.
Edit the /etc/named.conf file.
[root@bastion shclub]# vi /etc/named.conf
options {
listen-on port 53 { any; };
listen-on-v6 port 53 { none; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
secroots-file "/var/named/data/named.secroots";
recursing-file "/var/named/data/named.recursing";
allow-query { any; };
/*
- If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
- If you are building a RECURSIVE (caching) DNS server, you need to enable
recursion.
- If your recursive DNS server has a public IP address, you MUST enable access
control to limit queries to your legitimate users. Failing to do so will
cause your server to become part of large scale DNS amplification
attacks. Implementing BCP38 within your network would greatly
reduce such attack surface
*/
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
managed-keys-directory "/var/named/dynamic";
pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
/* https://fedoraproject.org/wiki/Changes/CryptoPolicy */
include "/etc/crypto-policies/back-ends/bind.config";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
Edit the /etc/named.rfc1912.zones file.
The following two zones must be added:
- ktdemo.duckdns.org : forward DNS zone
- 1.168.192.in-addr.arpa : reverse DNS zone (the 192.168.1 prefix reversed)
zone "ktdemo.duckdns.org" IN {
type master;
file "/var/named/okd4.ktdemo.duckdns.org.zone";
allow-update { none; };
};
zone "1.168.192.arpa" IN {
type master;
file "/var/named/1.168.192.in-addr.rev";
allow-update { none; };
};
[root@bastion named]# vi /etc/named.rfc1912.zones
zone "localhost.localdomain" IN {
type master;
file "named.localhost";
allow-update { none; };
};
zone "localhost" IN {
type master;
file "named.localhost";
allow-update { none; };
};
zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
type master;
file "named.loopback";
allow-update { none; };
};
zone "1.0.0.127.in-addr.arpa" IN {
type master;
file "named.loopback";
allow-update { none; };
};
zone "0.in-addr.arpa" IN {
type master;
file "named.empty";
allow-update { none; };
};
zone "ktdemo.duckdns.org" IN {
type master;
file "/var/named/okd4.ktdemo.duckdns.org.zone";
allow-update { none; };
};
zone "1.168.192.arpa" IN {
type master;
file "/var/named/1.168.192.in-addr.rev";
allow-update { none; };
};
Move to the /var/named directory.
[root@bastion shclub]# cd /var/named
Create the okd4.ktdemo.duckdns.org.zone file (forward DNS).
- Adjust the IPs and hostnames to match your environment.
[root@bastion named]# ls
data dynamic named.ca named.empty named.localhost named.loopback slaves
[root@bastion named]# vi okd4.ktdemo.duckdns.org.zone
$TTL 1D
@ IN SOA @ ns.ktdemo.duckdns.org. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
@ IN NS ns.ktdemo.duckdns.org.
@ IN A 192.168.1.247 ;
; Ancillary services
lb.okd4 IN A 192.168.1.247
; Bastion or Jumphost
ns IN A 192.168.1.247 ;
; OKD Cluster
bastion.okd4 IN A 192.168.1.247
bootstrap.okd4 IN A 192.168.1.128
okd-1.okd4 IN A 192.168.1.146
api.okd4 IN A 192.168.1.247
api-int.okd4 IN A 192.168.1.247
*.apps.okd4 IN A 192.168.1.247
Create the 1.168.192.in-addr.rev file (reverse DNS).
[root@bastion named]# vi 1.168.192.in-addr.rev
$TTL 1D
@ IN SOA ktdemo.duckdns.org. ns.ktdemo.duckdns.org. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
@ IN NS ns.
247 IN PTR ns.
247 IN PTR bastion.okd4.ktdemo.duckdns.org.
128 IN PTR bootstrap.okd4.ktdemo.duckdns.org.
146 IN PTR okd-1.okd4.ktdemo.duckdns.org.
247 IN PTR api.okd4.ktdemo.duckdns.org.
247 IN PTR api-int.okd4.ktdemo.duckdns.org.
Set ownership on the zone files and restart the named service.
[root@bastion named]# chown root:named okd4.ktdemo.duckdns.org.zone
[root@bastion named]# chown root:named 1.168.192.in-addr.rev
[root@bastion named]# systemctl restart named
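At this point it is a good idea to verify the zones from the bastion itself with dig (installed earlier as part of bind-utils); each lookup should return the matching A record from the zone file:

[root@bastion named]# dig +short api.okd4.ktdemo.duckdns.org @192.168.1.247
# should return 192.168.1.247
[root@bastion named]# dig +short bootstrap.okd4.ktdemo.duckdns.org @192.168.1.247
# should return 192.168.1.128
[root@bastion named]# dig +short test.apps.okd4.ktdemo.duckdns.org @192.168.1.247
# any *.apps name should return 192.168.1.247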
The basic configuration of the bastion server is now complete.
Move to the bastion server's root home directory.
[root@bastion named]# cd ~/
Generate an RSA key pair; keep pressing Enter and two files will be created in the .ssh directory:
- id_rsa : private key
- id_rsa.pub : public key (to be installed on the bootstrap and master/worker nodes)
[root@localhost ~]# ssh-keygen -t rsa -b 4096 -N ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:QDIN4njCh9DzWBhgeUu3gye3VldDPb2*****jsXr**l4o [email protected]
The key's randomart image is:
+---[RSA 4096]----+
|o++o+o. ... . |
|++=+.=. o o . |
|oo=*+ o . . . .|
| oo+.= o . . o |
| + + S + .|
| o o .|
| . . .=|
| .....*|
| E.o++.|
+----[SHA256]-----+
Download the oc client and openshift-install binaries and extract them.
[root@bastion ~]# wget https://github.com/openshift/okd/releases/download/4.10.0-0.okd-2022-03-07-131213/openshift-install-linux-4.10.0-0.okd-2022-03-07-131213.tar.gz
[root@bastion ~]# wget https://github.com/openshift/okd/releases/download/4.10.0-0.okd-2022-03-07-131213/openshift-client-linux-4.10.0-0.okd-2022-03-07-131213.tar.gz
[root@bastion ~]# tar xvfz openshift-install-linux-4.10.0-0.okd-2022-03-07-131213.tar.gz
README.md
openshift-install
[root@bastion ~]# tar xvfz openshift-client-linux-4.10.0-0.okd-2022-03-07-131213.tar.gz
README.md
oc
kubectl
Move the executables to /usr/local/bin/ and give them execute permission.
[root@bastion ~]# mv oc kubectl openshift-install /usr/local/bin/
[root@bastion ~]# chmod 755 /usr/local/bin/{oc,kubectl,openshift-install}
[root@bastion ~]# /usr/local/bin/oc version
Client Version: 4.10.0-0.okd-2022-03-07-131213
During OKD installation, images are pulled from Red Hat's private registry.
A pull secret is required to access it; sign up at the Red Hat site below and download your pull secret.
Link: https://cloud.redhat.com/openshift/create/local
The pull secret has the following format:
{"auths":{"cloud.openshift.com":{"auth":"b3BlbnNoaWZ0LXJlbGVhc2UtZGV2K29jbV9hY2Nlc3NfODA3Yjc0MDgzODBmNDg4NmE-------zSDdNSTFGRzFPN1hBODRSQjZONTFYSw==","email":"[email protected]"},"quay.io":{"auth":"b3BlbnNoaWZ0LXJlbGVhc2UtZGV2K29jbV9hY2Nlc3NfODA3Yjc0MDgzODBmNDg4NmExYTE4YWVjMzZjZDc3ZTE6WEhHTVhYWlAzMjEyR0tJUFRaN0Y3MUNSWVRHUEVMM1BBRThQUExWSlEzSDdNSTFGRzFPN1hBODRSQjZONTFYSw==","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"fHVoYy1wb29sLTYxMTBkMjQyLTQ3MjgtNDBhYS05Zjc5LTdjZTMyNDUyNzJlYzpleUpoYkdjaU9pSlNVelV4TWlKOS5leUp6ZFdJaU9pSXdaamMyTkRRMU4yVXdNREUwT0dJek9EZGpNVGMyTW1GaE9ERTBORGcwTVNKOS5aNXZrTnNTb3NlQ1NfTDZOQ1NiN0I5NTVkUkR4NmVsUWpkZGMwRl9-------wSlRiX0hrUVVoUHE5dEthOVZDOWtsS2tCNVViSEF3OXByNTdnR25QSzFRNzJycDI4NA==","email":"[email protected]"},"registry.redhat.io":{"auth":"-------Da3JnS0xUMEJqYks5Y0FoR0JfRjBZMjZEa3lCOHF2SkdRSGE2VklOQ1Y3dnpRTU1GU3lHeWdZQ2VkWjFSWk9PRUQwSlRiX0hrUVVoUHE5dEthOVZDOWtsS2tCNVViSEF3OXByNTdnR25QSzFRNzJycDI4NA==","email":"[email protected]"}}}
Create install-config.yaml, which is used to generate the manifests and ignition files.
For the pullSecret field, paste the Red Hat pull secret downloaded above; for sshKey, paste the public key generated on the bastion.
[root@bastion ~]# mkdir -p okd4
[root@bastion ~]# vi ./okd4/install-config.yaml
apiVersion: v1
baseDomain: ktdemo.duckdns.org # base domain: your router's domain
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0 # number of worker nodes
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 1 # number of master nodes (by default the master is also schedulable as a worker)
metadata:
  name: okd4 # OKD cluster name; it is prepended to the base domain
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"b3BlbnNoaWZ0LXJlbGVhc2UtZGV2K29jbV9hY2Nlc3NfODA3Yjc0MDgzODBmNDg4NmExYTE4YWVjMzZjZDc3ZTE6WEhHTVhYWlAzMjEyR0tJUFRaN0Y3MUNSWVRHUEVMM1BBRThQUExWSlE----CNVViSEF3OXByNTdnR25QSzFRNzJycDI4NA==","email":"[email protected]"}}}'
sshKey: 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCWJkGLkamR8mtMhNPUC7fY5lzXZFzGpEFftZwkFoXCBWmF------R8chyf60CkHOTFHVqsUHNs3JdkvmJBPWrE3FN3w== [email protected]'
Because install-config.yaml is consumed (deleted) when the manifests and ignition files are generated, create a backup directory and keep a copy.
[root@bastion ~]# mkdir backup
[root@bastion ~]# cp ./okd4/install-config.yaml ./backup/install-config.yaml
Generate the manifest files.
An openshift directory is created containing the master/worker node configuration files; roles can be assigned by editing these values.
[root@bastion ~]# /usr/local/bin/openshift-install create manifests --dir=okd4
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
INFO Manifests created in: okd4/manifests and okd4/openshift
[root@bastion ~]# ls ./okd4
manifests openshift
[root@bastion ~]# ls ./okd4/openshift
99_kubeadmin-password-secret.yaml 99_openshift-machineconfig_99-master-ssh.yaml
99_openshift-cluster-api_master-user-data-secret.yaml 99_openshift-machineconfig_99-worker-ssh.yaml
99_openshift-cluster-api_worker-user-data-secret.yaml openshift-install-manifests.yaml
Generate the ignition files used to configure CoreOS.
One .ign file is created per role, and the auth directory contains the connection information (kubeconfig).
[root@bastion ~]# /usr/local/bin/openshift-install create ignition-configs --dir=okd4
INFO Consuming Master Machines from target directory
INFO Consuming Openshift Manifests from target directory
INFO Consuming Worker Machines from target directory
INFO Consuming Common Manifests from target directory
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Ignition-Configs created in: okd4 and okd4/auth
[root@bastion ~]# ls ./okd4/
auth bootstrap.ign master.ign metadata.json worker.ign
Copy the .ign files to the bastion web server directory and restart Apache.
[root@bastion ~]# mkdir /var/www/html/ign
[root@bastion ~]# cp ./okd4/*.ign /var/www/html/ign/
[root@bastion ~]# chmod 777 /var/www/html/ign/*.ign
[root@bastion ~]# systemctl restart httpd
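Before booting the CoreOS VMs it is worth checking that the ignition files are actually reachable over HTTP, since a typo here only shows up later as an installer error:

[root@bastion ~]# curl -sI http://192.168.1.247:8080/ign/bootstrap.ign | head -n 1
# should return HTTP/1.1 200 OK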
Create a CoreOS-based VM for the bootstrap server on the proxmox host. (VM creation steps are omitted.)
- Download location: https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64
- Version : fedora-coreos-35.20220410.3.1-live.x86_64.iso
On first boot you are logged in automatically, and the OS console can be accessed from proxmox.
- The proxmox console is web based, so paste does not work.
First, check the network device name in order to configure the network:
[root@localhost core]# nmcli device
ens18 ethernet connected Wired connection 1
lo loopback unmanaged --
Create a connection named ens18:
[root@localhost core]# nmcli connection add type ethernet autoconnect yes con-name ens18 ifname ens18
Configure the network settings:
- ip : the bootstrap server uses 192.168.1.128/24.
- dns : the bastion server, 192.168.1.247.
- gateway : the router IP, 192.168.1.1. (Using the bastion server IP also works.)
- dns-search : okd4.ktdemo.duckdns.org (cluster name + "." + base domain)
[root@localhost core]# nmcli connection modify ens18 ipv4.addresses 192.168.1.128/24 ipv4.method manual
[root@localhost core]# nmcli connection modify ens18 ipv4.dns 192.168.1.247
[root@localhost core]# nmcli connection modify ens18 ipv4.gateway 192.168.1.1
[root@localhost core]# nmcli connection modify ens18 ipv4.dns-search okd4.ktdemo.duckdns.org
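Bring the connection up and double-check the values before installing; a wrong DNS entry here is a common cause of a stalled bootstrap (the -g output form below is just one way to check, any nmcli show variant works):

[root@localhost core]# nmcli connection up ens18
[root@localhost core]# nmcli -g ipv4.addresses,ipv4.dns,ipv4.gateway connection show ens18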
Before starting the installation, change the nameserver in the bastion server's /etc/resolv.conf to the bastion's own IP, 192.168.1.247.
- If you do not, various errors such as EOF errors occur.
- From this point on, only the internal network is needed.
Start the bootstrap server installation with the command below.
- --copy-network means the network settings configured above are carried over into the installed system.
[root@localhost core]# coreos-installer install /dev/sda -I http://192.168.1.247:8080/ign/bootstrap.ign --insecure-ignition --copy-network
Installing Fedora CoreOS 35.20220410.3.1 x86_64 (512-byte sectors)
> Read disk 2.5 GiB/2.5 GiB (100%)
Writing Ignition config
Copying networking configuration from /etc/NetworkManager/system-connections/
Copying /etc/NetworkManager/system-connections/ens18.nmconnection to installed system
Copying /etc/NetworkManager/system-connections/ens18-37d95251-8740-4053-a3ee-99ef2a2063c2.nmconnection to installed system
Install complete.
Reboot once the installation is complete.
[root@localhost core]# reboot now
Log in to the bootstrap server from the bastion server.
[root@bastion config]# ssh [email protected]
The authenticity of host '192.168.1.128 (192.168.1.128)' can't be established.
ECDSA key fingerprint is SHA256:7+MOJdsnC548GUrGZxYKTnvhG94F+2kyGa2bpSH6eA8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.1.128' (ECDSA) to the list of known hosts.
Red Hat Enterprise Linux CoreOS 48.84.202109241901-0
Part of OpenShift 4.8, RHCOS is a Kubernetes native operating system
managed by the Machine Config Operator (`clusteroperator/machine-config`).
WARNING: Direct SSH access to machines is not recommended; instead,
make configuration changes via `machineconfig` objects:
https://docs.openshift.com/container-platform/4.8/architecture/architecture-rhcos.html
---
This is the bootstrap node; it will be destroyed when the master is fully up.
The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g.
journalctl -b -f -u release-image.service -u bootkube.service
Check the installation logs with the following command:
[core@localhost ~]$ journalctl -b -f -u release-image.service -u bootkube.service
-- Logs begin at Mon 2023-08-07 05:48:45 UTC. --
Aug 07 05:50:23 localhost bootkube.sh[2232]: wrote /assets/ingress-operator-manifests/cluster-ingress-00-namespace.yaml
Aug 07 05:50:24 localhost bootkube.sh[2232]: Rendering MCO manifests...
Aug 07 05:50:32 localhost bootkube.sh[2232]: I0807 05:50:32.203425 1 bootstrap.go:86] Version: v4.8.0-202110020139.p0.git.6cf1670.assembly.stream-dirty (6cf167014583c41e80407eea5a4eda644f420d26)
Aug 07 05:50:32 localhost bootkube.sh[2232]: I0807 05:50:32.206504 1 bootstrap.go:188] manifests/machineconfigcontroller/controllerconfig.yaml
Aug 07 05:50:32 localhost bootkube.sh[2232]: I0807 05:50:32.208403 1 bootstrap.go:188] manifests/master.machineconfigpool.yaml
Aug 07 05:50:32 localhost bootkube.sh[2232]: I0807 05:50:32.208662 1 bootstrap.go:188] manifests/worker.machineconfigpool.yaml
Aug 07 05:50:32 localhost bootkube.sh[2232]: I0807 05:50:32.208867 1 bootstrap.go:188] manifests/bootstrap-pod-v2.yaml
Aug 07 05:50:32 localhost bootkube.sh[2232]: I0807 05:50:32.209096 1 bootstrap.go:188] manifests/machineconfigserver/csr-bootstrap-role-binding.yaml
Aug 07 05:50:32 localhost bootkube.sh[2232]: I0807 05:50:32.209326 1 bootstrap.go:188] manifests/machineconfigserver/kube-apiserver-serving-ca-configmap.yaml
Aug 07 05:50:32 localhost bootkube.sh[2232]: Rendering CCO manifests...
Aug 07 05:50:39 localhost bootkube.sh[2232]: time="2023-08-07T05:50:39Z" level=info msg="Rendering files to /assets/cco-bootstrap"
Aug 07 05:50:39 localhost bootkube.sh[2232]: time="2023-08-07T05:50:39Z" level=info msg="Writing file: /assets/cco-bootstrap/manifests/cco-cloudcredential_v1_operator_config_custresdef.yaml"
Aug 07 05:50:39 localhost bootkube.sh[2232]: time="2023-08-07T05:50:39Z" level=info msg="Writing file: /assets/cco-bootstrap/manifests/cco-cloudcredential_v1_credentialsrequest_crd.yaml"
Aug 07 05:50:39 localhost bootkube.sh[2232]: time="2023-08-07T05:50:39Z" level=info msg="Writing file: /assets/cco-bootstrap/manifests/cco-namespace.yaml"
Aug 07 05:50:39 localhost bootkube.sh[2232]: time="2023-08-07T05:50:39Z" level=info msg="Writing file: /assets/cco-bootstrap/manifests/cco-operator-config.yaml"
Aug 07 05:50:39 localhost bootkube.sh[2232]: time="2023-08-07T05:50:39Z" level=info msg="Rendering static pod"
Aug 07 05:50:39 localhost bootkube.sh[2232]: time="2023-08-07T05:50:39Z" level=info msg="writing file: /assets/cco-bootstrap/bootstrap-manifests/cloud-credential-operator-pod.yaml"
Aug 07 05:50:40 localhost bootkube.sh[2232]: https://localhost:2379 is healthy: successfully committed proposal: took = 8.685028ms
Aug 07 05:50:40 localhost bootkube.sh[2232]: Starting cluster-bootstrap...
Aug 07 05:50:46 localhost bootkube.sh[2232]: Starting temporary bootstrap control plane...
Aug 07 05:50:46 localhost bootkube.sh[2232]: Waiting up to 20m0s for the Kubernetes API
Aug 07 05:50:47 localhost bootkube.sh[2232]: Still waiting for the Kubernetes API: Get "https://localhost:6443/readyz": dial tcp [::1]:6443: connect: connection refused
Exit from the bootstrap server back to the bastion server and monitor with the command below.
The API version line shown below must appear; most failures are errors connecting to the Kubernetes API, usually because the nodes' DNS is not pointed at the bastion server's IP.
[root@bastion shclub]# /usr/local/bin/openshift-install --dir=/root/okd4 wait-for bootstrap-complete --log-level=debug
DEBUG OpenShift Installer 4.10.0-0.okd-2022-03-07-131213
DEBUG Built from commit 3b701903d96b6375f6c3852a02b4b70fea01d694
INFO Waiting up to 20m0s (until 10:36PM) for the Kubernetes API at https://api.okd4.ktdemo.duckdns.org:6443...
INFO API v1.23.3-2003+e419edff267ffa-dirty up
INFO Waiting up to 30m0s (until 10:46PM) for bootstrapping to complete...
When the message Bootstrap status: complete appears without errors, the bootstrap server has been installed correctly and you can start creating the master node.
[root@bastion ~]# /usr/local/bin/openshift-install --dir=/root/okd4 wait-for bootstrap-complete --log-level=debug
DEBUG OpenShift Installer 4.10.0-0.okd-2022-03-07-131213
DEBUG Built from commit 3b701903d96b6375f6c3852a02b4b70fea01d694
INFO Waiting up to 20m0s (until 9:58AM) for the Kubernetes API at https://api.okd4.ktdemo.duckdns.org:6443...
INFO API v1.23.3-2003+e419edff267ffa-dirty up
INFO Waiting up to 30m0s (until 10:08AM) for bootstrapping to complete...
DEBUG Bootstrap status: complete
Create a CoreOS-based VM for the combined master/worker node on vmware. (VM creation steps are omitted.)
- Download location: https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64
- Version : fedora-coreos-35.20220410.3.1-live.x86_64.iso
On first boot you are logged in automatically, and the OS console can be accessed from vmware.
- Paste works in the vmware console.
First, check the network device name in order to configure the network:
[root@localhost core]# nmcli device
DEVICE TYPE STATE CONNECTION
ens160 ethernet connected Wired connection 1
lo loopback unmanaged --
Create a connection named ens160:
[root@localhost core]# nmcli connection add type ethernet autoconnect yes con-name ens160 ifname ens160
Configure the network settings:
- ip : the okd-1 server uses 192.168.1.146/24.
- dns : the bastion server, 192.168.1.247.
- gateway : the router IP, 192.168.1.1. (Using the bastion server IP also works.)
- dns-search : okd4.ktdemo.duckdns.org (cluster name + "." + base domain)
[root@localhost core]# nmcli connection modify ens160 ipv4.addresses 192.168.1.146/24 ipv4.method manual
[root@localhost core]# nmcli connection modify ens160 ipv4.dns 192.168.1.247
[root@localhost core]# nmcli connection modify ens160 ipv4.gateway 192.168.1.1
[root@localhost core]# nmcli connection modify ens160 ipv4.dns-search okd4.ktdemo.duckdns.org
Install the master node (okd-1):
[root@localhost core]# coreos-installer install /dev/sda -I http://192.168.1.247:8080/ign/master.ign --insecure-ignition --copy-network
Installing Fedora CoreOS 35.20220410.3.1 x86_64 (512-byte sectors)
> Read disk 2.5 GiB/2.5 GiB (100%)
Writing Ignition config
Copying networking configuration from /etc/NetworkManager/system-connections/
Copying /etc/NetworkManager/system-connections/ens160.nmconnection to installed system
Install complete.
Set the hostname and reboot.
[root@localhost core]# nmcli connection up ens160
[root@localhost core]# hostnamectl set-hostname okd-1.okd4.ktdemo.duckdns.org
[root@localhost core]# reboot now
Monitor from the bastion server with the command below; when It is now safe to remove the bootstrap resources appears, the master node has been installed successfully.
[root@bastion ~]# /usr/local/bin/openshift-install --dir=/root/okd4 wait-for bootstrap-complete --log-level=debug
DEBUG OpenShift Installer 4.10.0-0.okd-2022-03-07-131213
DEBUG Built from commit 3b701903d96b6375f6c3852a02b4b70fea01d694
INFO Waiting up to 20m0s (until 9:58AM) for the Kubernetes API at https://api.okd4.ktdemo.duckdns.org:6443...
INFO API v1.23.3-2003+e419edff267ffa-dirty up
INFO Waiting up to 30m0s (until 10:08AM) for bootstrapping to complete...
DEBUG Bootstrap status: complete
INFO It is now safe to remove the bootstrap resources
DEBUG Time elapsed per stage:
DEBUG Bootstrap Complete: 7m56s
INFO Time elapsed: 7m56s
Now that the OKD cluster is up, edit HAProxy on the bastion server so that it no longer load-balances to the bootstrap node, then restart the service.
[root@bastion config]# vi /etc/haproxy/haproxy.cfg
backend openshift_api_backend
mode tcp
balance source
#server bootstrap 192.168.1.128:6443 check
server okd-1 192.168.1.146:6443 check
backend okd_machine_config_server_backend
mode tcp
balance source
#server bootstrap 192.168.1.128:22623 check
server okd-1 192.168.1.146:22623 check
Restart haproxy.
[root@bastion config]# systemctl restart haproxy
Add the connection information to .bash_profile.
[root@bastion ~]# vi ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
# add /usr/local/bin
PATH=$PATH:$HOME/bin:/usr/local/bin
export PATH
# Added for okd4 : add the line below
export KUBECONFIG=/root/okd4/auth/kubeconfig
Apply the profile with the source command and list the nodes.
The node is listed normally.
[root@bastion ~]# source ~/.bash_profile
[root@bastion ~]# oc get nodes
NAME STATUS ROLES AGE VERSION
okd-1.okd4.ktdemo.duckdns.org Ready master,worker 18m v1.23.3+759c22b
Check the cluster components (cluster operators).
Every cluster operator should show True / False / False.
[root@bastion ~]# oc get co
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
authentication 4.10.0-0.okd-2022-03-07-131213 True True False 64s OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.
baremetal 4.10.0-0.okd-2022-03-07-131213 True False False 14m
cloud-controller-manager 4.10.0-0.okd-2022-03-07-131213 True False False 18m
cloud-credential 4.10.0-0.okd-2022-03-07-131213 True False False 17m
cluster-autoscaler 4.10.0-0.okd-2022-03-07-131213 True False False 14m
config-operator 4.10.0-0.okd-2022-03-07-131213 True False False 16m
console 4.10.0-0.okd-2022-03-07-131213 True False False 64s
csi-snapshot-controller 4.10.0-0.okd-2022-03-07-131213 True False False 16m
dns 4.10.0-0.okd-2022-03-07-131213 True False False 14m
etcd 4.10.0-0.okd-2022-03-07-131213 True False False 14m
image-registry 4.10.0-0.okd-2022-03-07-131213 True False False 7m6s
ingress 4.10.0-0.okd-2022-03-07-131213 True False False 13m
insights 4.10.0-0.okd-2022-03-07-131213 True False False 9m58s
kube-apiserver 4.10.0-0.okd-2022-03-07-131213 True False False 9m29s
kube-controller-manager 4.10.0-0.okd-2022-03-07-131213 True False False 13m
kube-scheduler 4.10.0-0.okd-2022-03-07-131213 True False False 9m38s
kube-storage-version-migrator 4.10.0-0.okd-2022-03-07-131213 True False False 16m
machine-api 4.10.0-0.okd-2022-03-07-131213 True False False 15m
machine-approver 4.10.0-0.okd-2022-03-07-131213 True False False 16m
machine-config 4.10.0-0.okd-2022-03-07-131213 True False False 14m
marketplace 4.10.0-0.okd-2022-03-07-131213 True False False 14m
monitoring 4.10.0-0.okd-2022-03-07-131213 True False False 52s
network 4.10.0-0.okd-2022-03-07-131213 True False False 16m
node-tuning 4.10.0-0.okd-2022-03-07-131213 True False False 15m
openshift-apiserver 4.10.0-0.okd-2022-03-07-131213 True False False 5m27s
openshift-controller-manager 4.10.0-0.okd-2022-03-07-131213 True False False 15m
openshift-samples 4.10.0-0.okd-2022-03-07-131213 True False False 6m53s
operator-lifecycle-manager 4.10.0-0.okd-2022-03-07-131213 True False False 15m
operator-lifecycle-manager-catalog 4.10.0-0.okd-2022-03-07-131213 True False False 15m
operator-lifecycle-manager-packageserver 4.10.0-0.okd-2022-03-07-131213 True False False 8m48s
service-ca 4.10.0-0.okd-2022-03-07-131213 True False False 16m
storage 4.10.0-0.okd-2022-03-07-131213 True False False 16m
Verify the installation from the bastion server with the command below; it also prints the kubeadmin password.
Later, create a new account, grant it cluster admin rights, and then delete kubeadmin.
- If you delete kubeadmin before creating a cluster admin account, the cluster has to be recreated.
[root@bastion ~]# openshift-install --dir=/root/okd4 wait-for install-complete
INFO Waiting up to 40m0s (until 10:35AM) for the cluster at https://api.okd4.ktdemo.duckdns.org:6443 to initialize...
INFO Waiting up to 10m0s (until 10:05AM) for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/okd4/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.okd4.ktdemo.duckdns.org
INFO Login to the console with user: "kubeadmin", and password: "HeJDB-***-****b****-4hb4q"
INFO Time elapsed: 0s
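The same kubeadmin password is also written to disk by the installer, so it can be retrieved later without rerunning wait-for:

[root@bastion ~]# cat /root/okd4/auth/kubeadmin-password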
First, log in as admin.
[root@bastion ~]# oc login -u system:admin
Logged into "https://api.okd4.ktdemo.duckdns.org:6443" as "system:admin" using existing credentials.
You have access to 65 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Install the httpd-tools package so OKD accounts can be created with htpasswd-style passwords.
[root@bastion ~]# dnf install -y httpd-tools
Last metadata expiration check: 1:51:05 ago on Mon 07 Aug 2023 02:28:18 AM EDT.
Package httpd-tools-2.4.37-54.module_el8.8.0+1256+e1598b50.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
Create an account named shclub and set its password.
[root@bastion ~]# touch htpasswd
[root@bastion ~]# htpasswd -Bb htpasswd shclub 'S#123************'
Adding password for user shclub
[root@bastion ~]# cat htpasswd
shclub:$2y$05$kjWLoagesIMy0.**************
Create a kubernetes secret named htpasswd.
[root@bastion ~]# oc --user=admin create secret generic htpasswd --from-file=htpasswd -n openshift-config
[root@bastion ~]# oc get secret -n openshift-config
NAME TYPE DATA AGE
builder-dockercfg-lkfwf kubernetes.io/dockercfg 1 20m
builder-token-n4j9g kubernetes.io/service-account-token 4 20m
builder-token-sng5g kubernetes.io/service-account-token 4 20m
default-dockercfg-cdtkb kubernetes.io/dockercfg 1 20m
default-token-mxlks kubernetes.io/service-account-token 4 20m
default-token-s2w2r kubernetes.io/service-account-token 4 27m
deployer-dockercfg-s2wbh kubernetes.io/dockercfg 1 20m
deployer-token-skjsq kubernetes.io/service-account-token 4 20m
deployer-token-wf8wn kubernetes.io/service-account-token 4 20m
etcd-client kubernetes.io/tls 2 27m
etcd-metric-client kubernetes.io/tls 2 27m
etcd-metric-signer kubernetes.io/tls 2 27m
etcd-signer kubernetes.io/tls 2 27m
htpasswd Opaque 0 8s
initial-service-account-private-key Opaque 1 27m
pull-secret kubernetes.io/dockerconfigjson 1 27m
webhook-authentication-integrated-oauth Opaque 1 24m
Create an identityProviders entry named Local Password for the OKD cluster and apply it with the replace command.
[root@bastion ~]# vi oauth-config.yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: Local Password
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd
[root@bastion ~]# oc replace -f oauth-config.yaml
oauth.config.openshift.io/cluster replaced
Check the URL for accessing the OKD web console from a browser.
[root@bastion ~]# oc whoami --show-console
https://console-openshift-console.apps.okd4.ktdemo.duckdns.org
To reach the console from outside, set up port forwarding on the router (ports 6443 and 443).
Also, change the bastion server's nameserver back to the router's IP as shown below.
[root@bastion ~]# vi /etc/resolv.conf
# Generated by NetworkManager
search okd4.ktdemo.duckdns.org
nameserver 192.168.1.1
Opening the URL in a web browser shows the normal OKD login screen.
Log in as the new user from the bastion server.
[root@bastion ~]# oc login https://api.okd4.ktdemo.duckdns.org:6443 -u shclub -p N********9876! --insecure-skip-tls-verify
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
To add more accounts, first extract the existing entries from the htpasswd secret.
[root@bastion ]# oc get secret htpasswd -ojsonpath={.data.htpasswd} -n openshift-config | base64 --decode > htpasswd
The htpasswd file now contains the existing accounts; add a new account as follows.
[root@bastion ~]# htpasswd -Bb htpasswd edu1 'S#123************'
Adding password for user edu1
Now apply it and log in again from the web console.
[root@bastion argocd]# oc --user=admin create secret generic htpasswd --from-file=htpasswd -n openshift-config --dry-run=client -o yaml | oc replace -f -
secret/htpasswd replaced
Create a namespace named shclub.
[root@bastion ~]# oc new-project shclub
If you create a pod in this namespace, it starts but soon fails because of missing permissions.
Log in as admin and grant the anyuid permission.
If the log below appears, the context has to be switched first.
[root@bastion ~]# oc login -u system:admin
error: username system:admin is invalid for basic auth
When listing the contexts, the one marked with * is the context currently in use.
[root@bastion ~]# oc config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* /api-okd4-ktdemoduckdns-org:6443/root api-okd4-ktdemoduckdns-org:6443 root/api-okd4-ktdemoduckdns-org:6443
/api-okd4-ktdemoduckdns-org:6443/shclub api-okd4-ktdemoduckdns-org:6443 shclub/api-okd4-ktdemoduckdns-org:6443
admin okd4 admin
default/api-okd4-ktdemo-duckdns-org:6443/system:admin api-okd4-ktdemo-duckdns-org:6443 system:admin/api-okd4-ktdemo-duckdns-org:6443 default
Switch to the desired context with use-context.
[root@bastion ~]# kubectl config use-context default/api-okd4-ktdemo-duckdns-org:6443/system:admin
Switched to context "default/api-okd4-ktdemo-duckdns-org:6443/system:admin".
[root@bastion ~]# oc login -u system:admin
Logged into "https://api.okd4.ktdemo.duckdns.org:6443" as "system:admin" using existing credentials.
You have access to 65 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Grant the anyuid permission to the default service account so workloads can run in the shclub namespace.
oc adm policy add-scc-to-user anyuid system:serviceaccount:<NAMESPACE>:default
[root@bastion ~]# oc adm policy add-scc-to-user anyuid system:serviceaccount:shclub:default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "default"
Additionally grant the admin role.
oc adm policy add-role-to-user admin <계정> -n <NAMESPACE>
[root@bastion ~]# oc adm policy add-role-to-user admin shclub -n shclub
clusterrole.rbac.authorization.k8s.io/admin added: "shclub"
A role is removed as follows.
oc adm policy remove-role-from-user <role> <username>
Create a test pod.
[root@bastion ~]# kubectl run nginx --image=nginx
pod/nginx created
[root@bastion ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 4s
[root@bastion ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 8s
You can see it reaches the Running state normally.
To replace kubeadmin, create an account named root and grant it the cluster-admin role.
[root@bastion ~]# oc adm policy add-cluster-role-to-user cluster-admin root
clusterrole.rbac.authorization.k8s.io/cluster-admin added: "root"
To log in to CoreOS with a password instead of an ssh key, first log in via ssh and become the super user.
[core@localhost ~]$ sudo su
Set a password for the core account.
[root@localhost core]# passwd core
Changing password for user core.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Move to the /etc/ssh/sshd_config.d directory.
[root@localhost core]# cd /etc/ssh/sshd_config.d
[root@localhost ssh]# ls
moduli ssh_config.d ssh_host_ecdsa_key.pub ssh_host_ed25519_key.pub ssh_host_rsa_key.pub
ssh_config ssh_host_ecdsa_key ssh_host_ed25519_key ssh_host_rsa_key sshd_config
Create 20-enable-passwords.conf.
[root@okd-1 sshd_config.d]# ls
10-insecure-rsa-keysig.conf 40-disable-passwords.conf 40-ssh-key-dir.conf 50-redhat.conf
[root@localhost ssh]# vi 20-enable-passwords.conf
Add the following line and save.
PasswordAuthentication yes
Restart the sshd daemon.
[root@localhost ssh]# systemctl restart sshd
Now you can log in from anywhere with an id/password.
To install the cloud shell (cloudtty), cluster admin permission is required.
Reference: https://github.com/shclub/cloudtty
jakelee@jake-MacBookAir cloudshell % oc login https://api.okd4.ktdemo.duckdns.org:6443 -u root -p Sh********0 --insecure-skip-tls-verify
WARNING: Using insecure TLS client config. Setting this option is not supported!
Login successful.
You have access to 68 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "shclub".
jakelee@jake-MacBookAir ~ % helm install cloudtty-operator --version 0.5.0 cloudtty/cloudtty
NAME: cloudtty-operator
LAST DEPLOYED: Fri Aug 11 13:19:49 2023
NAMESPACE: shclub
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing cloudtty.
Your release is named cloudtty-operator.
To learn more about the release, try:
$ helm status cloudtty-operator
$ helm get all cloudtty-operator
Documention: https://github.com/cloudtty/cloudtty/-/blob/main/README.md
jakelee@jake-MacBookAir ~ % kubectl wait deployment cloudtty-operator-controller-manager --for=condition=Available=True
deployment.apps/cloudtty-operator-controller-manager condition met
Create a kubeconfig file in the /root/.kube directory.
[root@bastion .kube]# vi kubeconfig
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://api.okd4.ktdemo.duckdns.org:6443
  name: api-okd4-ktdemoduckdns-org:6443
contexts:
- context:
    cluster: api-okd4-ktdemoduckdns-org:6443
    user: shclub/api-okd4-ktdemoduckdns-org:6443
  name: /api-okd4-ktdemoduckdns-org:6443/shclub
current-context: /api-okd4-ktdemoduckdns-org:6443/root
kind: Config
preferences: {}
users:
- name: shclub/api-okd4-ktdemoduckdns-org:6443
  user:
    token: sha256~********cnLskhIKbwmYwoKAuZ9sowsvSZTsiU
[root@bastion .kube]# ls -al
total 32
drwxr-x--- 3 root root 37 Aug 9 10:44 .
dr-xr-x---. 11 root root 4096 Aug 20 21:09 ..
drwxr-x--- 4 root root 35 Aug 9 09:53 cache
-rw-r----- 1 root root 25107 Aug 9 10:44 kubeconfig
Create a secret named my-kubeconfig.
- This limits the connection information available inside the shell.
[root@bastion ~]# kubectl create secret generic my-kubeconfig --from-file=/root/.kube/config
secret/my-kubeconfig created
Create a Dockerfile and build a custom image.
The Dockerfile at https://github.com/shclub/cloudshell is built with GitHub Actions.
- The OpenShift client does not run on the Alpine Docker image, so apk add gcompat is added. (Already included.)
Create the yaml file for the cloudshell and apply it.
[root@bastion cloudshell]# cat cloud_shell.yaml
apiVersion: cloudshell.cloudtty.io/v1alpha1
kind: CloudShell
metadata:
  name: okd-shell
spec:
  secretRef:
    name: "my-kubeconfig"
  image: shclub/cloudshell:master
  # commandAction: "kubectl -n shclub get po && bash"
  commandAction: "bash"
  exposureMode: "ClusterIP"
  # exposureMode: "NodePort"
  ttl: 555555555555 # the pod is kept alive for this long
  once: false
[root@bastion cloudshell]# kubectl apply -f cloud_shell.yaml -n shclub
cloudshell.cloudshell.cloudtty.io/okd-shell created
Monitor with the command below; when the status reaches Ready, the cloudshell has been created successfully.
jakelee@jake-MacBookAir ~ % kubectl get cloudshell -w
NAME USER COMMAND TYPE URL PHASE AGE
okd-shell bash ClusterIP 0s
okd-shell bash ClusterIP 0s
okd-shell bash ClusterIP CreatedJob 0s
okd-shell bash ClusterIP 172.30.180.191:7681 CreatedRouteRule 0s
okd-shell bash ClusterIP 172.30.180.191:7681 Ready 3s
Check the service name.
[root@bastion cloudshell]# kubectl get po -n shclub
NAME READY STATUS RESTARTS AGE
cloudshell-okd-shell-llr7q 1/1 Running 0 5s
cloudtty-operator-controller-manager-574c45b9df-zg7cf 1/1 Running 0 4m30s
When creating the route, note that the TLS options must be set as below (insecureEdgeTerminationPolicy: Allow, termination: edge).
[root@bastion cloudshell]# cat cloudshell_route.yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: okd-shell
  name: okd-shell
spec:
  host: okd-shell-shclub.apps.okd4.ktdemo.duckdns.org
  port:
    targetPort: ttyd
  tls:
    insecureEdgeTerminationPolicy: Allow
    termination: edge
  # tls:
  #   insecureEdgeTerminationPolicy: Redirect
  #   termination: reencrypt
  to:
    kind: Service
    name: cloudshell-okd-shell
    weight: 100
  wildcardPolicy: None
Create the route.
[root@bastion cloudshell]# kubectl apply -f cloudshell_route.yaml -n shclub
[root@bastion cloudshell]# kubectl get route -n shclub
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
console okd-shell-shclub.apps.okd4.ktdemo.duckdns.org cloudshell-okd-shell https edge/Allow None
Open the route https://okd-shell-shclub.apps.okd4.ktdemo.duckdns.org in a browser and you can see that the shell has been created.
Next come a few CoreOS settings on the nodes. Set the timezone (run on each node, shown here on okd-1):
[root@okd-1 core]# timedatectl set-timezone Asia/Seoul
[root@okd-1 core]# date
Mon Aug 21 09:35:38 KST 2023
On Linux, if the journal log files grow too large, operations can take a very long time.
The systemd-journald service collects kernel logs, early boot logs, stdout/stderr of system daemons, and syslog messages.
Because it collects so many logs, its size can grow substantially, so it needs tuning.
Check the journal log size:
[root@okd-1 core]# journalctl --disk-usage
Archived and active journals take up 3.9G in the file system.
Keep only 500M of logs:
[root@okd-1 core]# journalctl --vacuum-size=500M
Vacuuming done, freed 0B of archived journals from /var/log/journal.
Vacuuming done, freed 0B of archived journals from /run/log/journal.
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000012c8827-0006035fa10b7629.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000012e44c0-0006035fc78d6328.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-0000000001300140-0006035fed1a8a7e.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-000000000131bdbe-0006036012a6a507.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-0000000001337a32-0006036038a6c081.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000013536d8-000603605e0b49c9.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-000000000136f355-0006036083a417f6.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-000000000138afcf-00060360a8f02351.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000013a6c6c-00060360ce699352.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000013c28e6-00060360f3f491ba.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000013de56f-00060361198158c4.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000013fa213-000603613ea5b7d6.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-0000000001415e89-0006036163819ecc.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-0000000001431afd-0006036188605326.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-000000000144d776-00060361ad322dd9.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-000000000146941a-00060361d1b0f781.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-000000000148509e-00060361f6d10816.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000014a0d35-000603621b013193.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000014bc9cb-000603623f5a18fa.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000014d8636-00060362642d0d3c.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000014f42b0-00060362881420b1.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-000000000150ff51-00060362abff7e08.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-000000000152bbc5-00060362cfac3926.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-0000000001547830-00060362f3ef5bf9.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000015634c7-0006036317d2f23e.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-000000000157f14c-000603633bc0af8e.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-000000000159ade2-000603635f64323d.journal (128.0M).
Deleted archived journal /var/log/journal/126376a91cbf47ffab943ee1bddd8398/system@26be5bf4a08e49c2beb6cbdceed653fc-00000000015b6a85-00060363827b4591.journal (128.0M).
Vacuuming done, freed 3.5G of archived journals from /var/log/journal/126376a91cbf47ffab943ee1bddd8398.
Vacuuming done, freed 0B of archived journals from /run/log/journal/cf886e957b874fa6b0133b1043b8a2c4.
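vacuum only trims what is already on disk; to keep the journal capped across reboots you can also set a persistent limit in journald itself (a sketch using the standard SystemMaxUse option):

[root@okd-1 core]# vi /etc/systemd/journald.conf
[Journal]
SystemMaxUse=500M
[root@okd-1 core]# systemctl restart systemd-journald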
If errors occur, delete all the logs as below and restart the service.
[root@okd-1 core]# rm -rf /var/log/journal/*
[root@okd-1 core]# systemctl restart systemd-journald.service
Now install ArgoCD. First create two namespaces.
oc new-project argocd
oc new-project argo-rollouts
Set the permissions.
oc adm policy add-scc-to-user anyuid -z default -n argocd
oc adm policy add-scc-to-user privileged -z default -n argocd
oc adm policy add-scc-to-user privileged -z argocd-redis -n argocd
oc adm policy add-scc-to-user privileged -z argocd-repo-server -n argocd
oc adm policy add-scc-to-user privileged -z argocd-dex-server -n argocd
oc adm policy add-scc-to-user privileged -z argocd-server -n argocd
oc adm policy add-scc-to-user privileged -z argocd-applicationset-controller -n argocd
oc adm policy add-scc-to-user privileged -z argocd-notifications-controller -n argocd
Download the ArgoCD manifest; you will see that the argo-cd.yaml file has been downloaded.
curl https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml -o argo-cd.yaml
Next, download the Argo CD CLI and place it on the PATH.
VERSION=$(curl --silent "https://api.github.com/repos/argoproj/argo-cd/releases/latest" | grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/')
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/$VERSION/argocd-linux-amd64
chmod +x /usr/local/bin/argocd
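A quick sanity check that the CLI binary runs:

[root@bastion argocd]# argocd version --client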
Install ArgoCD on the cluster.
kubectl apply -n argocd -f argo-cd.yaml
Install argo-rollouts on the cluster.
argo-rollouts supports the blue/green and canary deployment strategies.
kubectl apply -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml -n argo-rollouts
If the redis pod does not come up, check the events; an error like the one below appears.
Change the runAsUser value in the deployment to a value inside the allowed range.
spec.containers[0].securityContext.runAsUser: Invalid value: 999: must be in the ranges: [1000660000, 1000669999],
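A sketch of how to make that change without opening an editor; the value 1000660000 is taken from the range in the error message above, so substitute the range reported for your own argocd namespace:

[root@bastion argocd]# kubectl -n argocd patch deployment argocd-redis --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/securityContext/runAsUser","value":1000660000}]'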
Check the service names in order to create the route.
[root@bastion argocd]# kubectl get svc -n argocd
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argocd-applicationset-controller ClusterIP 172.30.169.175 <none> 7000/TCP,8080/TCP 28m
argocd-dex-server ClusterIP 172.30.253.118 <none> 5556/TCP,5557/TCP,5558/TCP 28m
argocd-metrics ClusterIP 172.30.215.134 <none> 8082/TCP 28m
argocd-notifications-controller-metrics ClusterIP 172.30.70.168 <none> 9001/TCP 28m
argocd-redis ClusterIP 172.30.211.26 <none> 6379/TCP 28m
argocd-repo-server ClusterIP 172.30.217.222 <none> 8081/TCP,8084/TCP 28m
argocd-server ClusterIP 172.30.172.143 <none> 80/TCP,443/TCP 28m
Create a route for access from a web browser.
[root@bastion argocd]# vi argocd_route.yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: argocd
  name: argocd
spec:
  host: argocd-argocd.apps.okd4.ktdemo.duckdns.org
  port:
    targetPort: http
  tls:
    insecureEdgeTerminationPolicy: Allow
    termination: edge
  to:
    kind: Service
    name: argocd-server
    weight: 100
  wildcardPolicy: None
Apply it and check the created route.
[root@bastion argocd]# kubectl apply -f argocd_route.yaml -n argocd
[root@bastion argocd]# kubectl get route -n argocd
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
argocd argocd-argocd.apps.okd4.ktdemo.duckdns.org argocd-server http edge/Allow None
When HTTPS is terminated at the route, argocd's forced HTTPS redirect needs to be disabled.
Disable it with the following command.
[root@bastion argocd]# kubectl edit cm argocd-cmd-params-cm -n argocd
apiVersion: v1
data:
  server.insecure: "true"
kind: ConfigMap

Add the following entry:

data:
  server.insecure: "true"
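The same change can also be made non-interactively with a merge patch, which is convenient in scripts:

[root@bastion argocd]# kubectl -n argocd patch configmap argocd-cmd-params-cm --type merge -p '{"data":{"server.insecure":"true"}}'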
From ArgoCD 2.4 on, the web terminal feature has to be enabled explicitly.
[root@bastion argocd]# kubectl edit cm argocd-cm -n argocd
apiVersion: v1
data:
  exec.enabled: "true"
kind: ConfigMap

Add the following entry:

data:
  exec.enabled: "true"
The terminal is now enabled.
Delete the argocd-server pod so it restarts.
[root@bastion argocd]# kubectl get pod -n argocd
NAME READY STATUS RESTARTS AGE
argocd-application-controller-0 1/1 Running 0 118m
argocd-applicationset-controller-6765c74d68-t5jdv 1/1 Running 0 118m
argocd-dex-server-7d974fc66b-ntz2d 1/1 Running 0 113m
argocd-notifications-controller-5df7fddfb7-gqs2w 1/1 Running 0 118m
argocd-redis-74b8bcc46b-hg22b 1/1 Running 0 110m
argocd-repo-server-6db877f7d9-d78qf 1/1 Running 0 112m
argocd-server-6c7df9df6b-85vn9 1/1 Running 0 115m
[root@bastion argocd]#
[root@bastion argocd]# kubectl delete po argocd-server-6c7df9df6b-85vn9 -n argocd
pod "argocd-server-6c7df9df6b-85vn9" deleted
With recent versions, installation on OKD can become slow and argocd-redis connection errors can occur; in that case, delete the network policies.
[root@bastion argocd]# kubectl get networkpolicy -n argocd
NAME POD-SELECTOR AGE
argocd-application-controller-network-policy app.kubernetes.io/name=argocd-application-controller 114m
argocd-applicationset-controller-network-policy app.kubernetes.io/name=argocd-applicationset-controller 114m
argocd-dex-server-network-policy app.kubernetes.io/name=argocd-dex-server 114m
argocd-notifications-controller-network-policy app.kubernetes.io/name=argocd-notifications-controller 114m
argocd-redis-network-policy app.kubernetes.io/name=argocd-redis 114m
argocd-repo-server-network-policy app.kubernetes.io/name=argocd-repo-server 114m
argocd-server-network-policy app.kubernetes.io/name=argocd-server
Delete them as below.
[root@bastion argocd]# kubectl delete networkpolicy argocd-server-79bc95c4-kkhwv argocd-repo-server-network-policy argocd-redis-network-policy argocd-dex-server-network-policy -n argocd
networkpolicy.networking.k8s.io "argocd-repo-server-network-policy" deleted
networkpolicy.networking.k8s.io "argocd-redis-network-policy" deleted
networkpolicy.networking.k8s.io "argocd-dex-server-network-policy" deleted
To restart all deployments at once, run the following.
[root@bastion argocd]# kubectl rollout restart deployment -n argocd
deployment.apps/argocd-applicationset-controller restarted
deployment.apps/argocd-dex-server restarted
deployment.apps/argocd-notifications-controller restarted
deployment.apps/argocd-redis restarted
deployment.apps/argocd-repo-server restarted
deployment.apps/argocd-server restarted
Check the initial admin password.
[root@bastion argocd]# kubectl get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" -n argocd | base64 -d && echo
1jBqpaCukWy58RzT
Open http://argocd-argocd.apps.okd4.ktdemo.duckdns.org in a web browser, log in with the admin account and the initial password, then change the password.
To create accounts, expose argocd-server as a NodePort and connect with the argocd CLI.
[root@bastion argocd]# kubectl get svc -n argocd
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argocd-applicationset-controller ClusterIP 172.30.157.78 <none> 7000/TCP,8080/TCP 162m
argocd-dex-server ClusterIP 172.30.96.19 <none> 5556/TCP,5557/TCP,5558/TCP 162m
argocd-metrics ClusterIP 172.30.216.229 <none> 8082/TCP 162m
argocd-notifications-controller-metrics ClusterIP 172.30.155.71 <none> 9001/TCP 162m
argocd-redis ClusterIP 172.30.45.66 <none> 6379/TCP 162m
argocd-repo-server ClusterIP 172.30.178.138 <none> 8081/TCP,8084/TCP 162m
argocd-server NodePort 172.30.154.250 <none> 80:30866/TCP,443:31928/TCP 162m
argocd-server-metrics ClusterIP 172.30.55.105 <none> 8083/TCP 162m
[root@bastion argocd]# argocd login 192.168.1.247:30866
FATA[0000] dial tcp 192.168.1.247:30866: connect: connection refused
[root@bastion argocd]# argocd login 192.168.1.146:30866
WARNING: server is not configured with TLS. Proceed (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context '192.168.1.146:30866' updated
Add the accounts by creating data entries in the argocd-cm configmap as below.
[root@bastion argocd]# kubectl -n argocd edit configmap argocd-cm -o yaml
apiVersion: v1
data:
  accounts.edu1: apiKey,login
  accounts.edu2: apiKey,login
  accounts.edu3: apiKey,login
  accounts.edu4: apiKey,login
  accounts.edu5: apiKey,login
  accounts.haerin: apiKey,login
  accounts.shclub: apiKey,login
  exec.enabled: "true"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"argocd-cm","app.kubernetes.io/part-of":"argocd"},"name":"argocd-cm","namespace":"argocd"}}
  creationTimestamp: "2023-08-22T06:32:36Z"
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
  namespace: argocd
  resourceVersion: "5057239"
  uid: 2b5d0dbe-eb9c-41f7-bc26-e93355aa6e48
Set a password for each account.
[root@bastion argocd]# argocd account update-password --account shclub
*** Enter password of currently logged in user (admin):
*** Enter new password for user shclub:
*** Confirm new password for user shclub:
ERRO[0011] Passwords do not match
*** Enter new password for user shclub:
*** Confirm new password for user shclub:
Password updated
Add data entries to the argocd-rbac-cm configmap to grant permissions per account.
- exec is the permission to use the terminal.
[root@bastion argocd]# kubectl -n argocd edit configmap argocd-rbac-cm -o yaml
data:
policy.csv: |
p, role:manager, applications, *, */*, allow
p, role:manager, clusters, get, *, allow
p, role:manager, repositories, *, *, allow
p, role:manager, projects, *, *, allow
p, role:manager, exec, create, */*, allow
p, role:edu1, clusters, get, *, allow
p, role:edu1, repositories, get, *, allow
p, role:edu1, projects, get, *, allow
p, role:edu1, applications, *, edu1/*, allow
p, role:edu1, exec, create, edu1/*, allow
p, role:edu2, clusters, get, *, allow
p, role:edu2, repositories, get, *, allow
p, role:edu2, projects, get, *, allow
p, role:edu2, applications, *, edu2/*, allow
p, role:edu2, exec, create, edu2/*, allow
p, role:edu3, clusters, get, *, allow
p, role:edu3, repositories, get, *, allow
p, role:edu3, projects, get, *, allow
p, role:edu3, applications, *, edu3/*, allow
p, role:edu3, exec, create, edu3/*, allow
p, role:edu4, clusters, get, *, allow
p, role:edu4, repositories, get, *, allow
p, role:edu4, projects, get, *, allow
p, role:edu4, applications, *, edu4/*, allow
p, role:edu4, exec, create, edu4/*, allow
p, role:edu5, clusters, get, *, allow
p, role:edu5, repositories, get, *, allow
p, role:edu5, projects, get, *, allow
p, role:edu5, applications, *, edu5/*, allow
p, role:edu5, exec, create, edu5/*, allow
g, edu1, role:edu1
g, edu2, role:edu2
g, edu3, role:edu3
g, edu4, role:edu4
g, edu5, role:edu5
g, shclub, role:manager
g, haerin, role:manager
policy.default: role:''
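After editing both ConfigMaps, the new accounts can optionally be checked with the argocd CLI; a hedged sketch, assuming you are still logged in as admin from the earlier step:

```bash
# List the accounts defined in argocd-cm (admin plus the accounts added above).
argocd account list

# Check whether the currently logged-in account may perform an action,
# e.g. syncing applications in any project.
argocd account can-i sync applications '*/*'
```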
Create a namespace named minio and check its uid-range.
[root@bastion minio]# oc new-project minio
[root@bastion minio]# kubectl describe namespace minio
Name: minio
Labels: kubernetes.io/metadata.name=minio
Annotations: openshift.io/description:
openshift.io/display-name:
openshift.io/requester: system:admin
openshift.io/sa.scc.mcs: s0:c27,c4
openshift.io/sa.scc.supplemental-groups: 1000710000/10000
openshift.io/sa.scc.uid-range: 1000710000/10000
Status: Active
No resource quota.
No LimitRange resource.
Grant the required SCCs to the minio namespace.
[root@bastion minio]# oc adm policy add-scc-to-user anyuid system:serviceaccount:minio:default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "default"
[root@bastion harbor]# oc adm policy add-scc-to-user privileged system:serviceaccount:minio:default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "default"
Configure helm. We use the chart provided by bitnami.
[root@bastion minio]# helm repo add bitnami https://charts.bitnami.com/bitnami
[root@bastion minio]# helm search repo bitnami | grep minio
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/okd4/auth/kubeconfig
bitnami/minio 12.7.0 2023.7.18 MinIO(R) is an object storage server, compatibl...
Create the PV and PVC that MinIO will use. The NFS export was created on a Synology NAS; that procedure is skipped here.
[root@bastion minio]# vi minio_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: minio-pv
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 100Gi
nfs:
path: /volume3/okd/minio
server: 192.168.1.79
persistentVolumeReclaimPolicy: Retain
Create the PVC (the apply step is sketched after the manifests).
[root@bastion minio]# vi minio_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: minio-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Gi
volumeName: minio-pv
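The apply step is not shown in the capture above; a minimal sketch, assuming the two manifests were saved as minio_pv.yaml and minio_pvc.yaml:

```bash
# PersistentVolumes are cluster-scoped; the claim is created in the minio namespace.
kubectl apply -f minio_pv.yaml
kubectl apply -f minio_pvc.yaml -n minio

# Confirm the PVC is Bound to minio-pv.
kubectl get pv,pvc -n minio
```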
To adjust the values for OpenShift, download them to values.yaml.
[root@bastion minio]# helm show values bitnami/minio > values.yaml
Modify the following parts of values.yaml.
100 rootUser: admin
101 ## @param auth.rootPassword Password for MinIO® root user
102 ##
103 rootPassword: "hahahaha" # set your own password
...
377 ## config:
378 ## - name: region
379 ## options:
380 ## name: us-east-1
381 config:
382 - name: region
383 options:
384 name: ap-northeast-2 # set the region so MinIO can be used as S3-compatible storage
...
389 ##
390 podSecurityContext:
391 enabled: true
392 fsGroup: 1000710000 #1001 # change to the namespace uid-range value
...
399 containerSecurityContext:
400 enabled: true
401 runAsUser: 1000710000 #1001
...
426 podSecurityContext:
427 enabled: true
428 fsGroup: 1000710000 # 1001
...
434 ##
435 containerSecurityContext:
436 enabled: true
437 runAsUser: 1000710000 #1001
438 runAsNonRoot: true
...
899 persistence:
900 ## @param persistence.enabled Enable MinIO® data persistence using PVC. If false, use emptyDir
901 ##
902 enabled: true
909 ##
910 storageClass: ""
911 ## @param persistence.mountPath Data volume mount path
912 ##
913 mountPath: /data
914 ## @param persistence.accessModes PVC Access Modes for MinIO® data volume
915 ##
916 accessModes:
917 - ReadWriteOnce
918 ## @param persistence.size PVC Storage Request for MinIO® data volume
919 ##
920 size: 100Gi # the PVC size created above
925 ##
926 existingClaim: "minio-pvc" # the PVC name created above
Now install using the modified values.yaml.
[root@bastion minio]# helm install my-minio -f values.yaml bitnami/minio -n minio
NAME: my-minio
LAST DEPLOYED: Wed Aug 23 10:04:29 2023
NAMESPACE: minio
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: minio
CHART VERSION: 12.7.0
APP VERSION: 2023.7.18
** Please be patient while the chart is being deployed **
MinIO® can be accessed via port on the following DNS name from within your cluster:
my-minio.minio.svc.cluster.local
To get your credentials run:
export ROOT_USER=$(kubectl get secret --namespace minio my-minio -o jsonpath="{.data.root-user}" | base64 -d)
export ROOT_PASSWORD=$(kubectl get secret --namespace minio my-minio -o jsonpath="{.data.root-password}" | base64 -d)
To connect to your MinIO® server using a client:
- Run a MinIO® Client pod and append the desired command (e.g. 'admin info'):
kubectl run --namespace minio my-minio-client \
--rm --tty -i --restart='Never' \
--env MINIO_SERVER_ROOT_USER=$ROOT_USER \
--env MINIO_SERVER_ROOT_PASSWORD=$ROOT_PASSWORD \
--env MINIO_SERVER_HOST=my-minio \
--image docker.io/bitnami/minio-client:2023.7.18-debian-11-r0 -- admin info minio
To access the MinIO® web UI:
- Get the MinIO® URL:
echo "MinIO® web URL: http://127.0.0.1:9001/minio"
kubectl port-forward --namespace minio svc/my-minio 9001:9001
Create a route so MinIO can be reached from outside the cluster.
[root@bastion minio]# kubectl get svc -n minio
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-minio ClusterIP 172.30.88.228 <none> 9000/TCP,9001/TCP 84m
[root@bastion minio]# kubectl describe svc my-minio -n minio
Name: my-minio
Namespace: minio
Labels: app.kubernetes.io/instance=my-minio
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=minio
helm.sh/chart=minio-12.7.0
Annotations: meta.helm.sh/release-name: my-minio
meta.helm.sh/release-namespace: minio
Selector: app.kubernetes.io/instance=my-minio,app.kubernetes.io/name=minio
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.30.88.228
IPs: 172.30.88.228
Port: minio-api 9000/TCP
TargetPort: minio-api/TCP
Endpoints: 10.128.0.48:9000
Port: minio-console 9001/TCP
TargetPort: minio-console/TCP
Endpoints: 10.128.0.48:9001
Session Affinity: None
Events: <none>
The route must be mapped to the minio-console service port (the apply step is sketched after the manifest).
[root@bastion minio]# vi minio_route.yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
labels:
app : minio
name: minio
spec:
host: minio-minio.apps.okd4.ktdemo.duckdns.org
port:
targetPort: minio-console
tls:
insecureEdgeTerminationPolicy: Allow
termination: edge
to:
kind: Service
name: my-minio
weight: 100
wildcardPolicy: None
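A small sketch of applying the route, assuming the file above was saved as minio_route.yaml:

```bash
oc apply -f minio_route.yaml -n minio

# Confirm the route host.
oc get route -n minio
```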
In a web browser, open http://minio-minio.apps.okd4.ktdemo.duckdns.org and log in with the admin account and the password set in values.yaml.
After logging in as admin, go to Administrator -> Identity -> Users, create an account, and assign it permissions.
Log out of the admin account, log in with the newly created account, and go to the Object Browser menu; a screen prompting you to create a bucket appears.
Because Harbor will be installed as the private Docker registry and will use MinIO as its storage, name the bucket harbor-registry and click the Create Bucket button.
- Check only Versioning.
The created bucket looks as follows; read/write access is assigned.
Click the bucket name and set its access to private.
Create an access key for accessing the bucket created above.
Click the Create Access Key button.
Clicking Create generates the access key and secret, and the Download button lets you save the keys.
You can now see the generated key (an optional CLI check is sketched below).
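As an optional check, the new key can be verified from the bastion server with the MinIO client (mc); a hedged sketch where ACCESS_KEY and SECRET_KEY are placeholders for the downloaded values:

```bash
# Register the MinIO endpoint (the route created earlier) under an alias.
mc alias set okd-minio https://minio-minio.apps.okd4.ktdemo.duckdns.org ACCESS_KEY SECRET_KEY --insecure

# List the bucket to confirm the key works.
mc ls okd-minio/harbor-registry --insecure
```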
Note
To create PVs and PVCs automatically, Dynamic Provisioning is used. We will use nfs-subdir-external-provisioner.
Add the helm repository.
[root@bastion dynamic_provisioning]# helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/okd4/auth/kubeconfig
"nfs-subdir-external-provisioner" has been added to your repositories
[root@bastion dynamic_provisioning]# helm repo list
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/okd4/auth/kubeconfig
NAME URL
bitnami https://charts.bitnami.com/bitnami
harbor https://helm.goharbor.io
nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
Search for the chart.
[root@bastion dynamic_provisioning]# helm search repo nfs-subdir-external-provisioner
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/okd4/auth/kubeconfig
NAME CHART VERSION APP VERSION DESCRIPTION
nfs-subdir-external-provisioner/nfs-subdir-exte... 4.0.18 4.0.2 nfs-subdir-external-provisioner is an automatic...
Download values.yaml so the values can be modified.
[root@bastion dynamic_provisioning]# helm show values nfs-subdir-external-provisioner/nfs-subdir-external-provisioner > values.yaml
Change the values as follows.
1 replicaCount: 1
2 strategyType: Recreate
3
4 image:
5 repository: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner
6 tag: v4.0.2
7 pullPolicy: IfNotPresent
8 imagePullSecrets: []
9
10 nfs:
11 server: 192.168.1.79 # NAS IP
12 path: /volume3/okd/dynamic # path
13 mountOptions:
14 volumeName: nfs-subdir-external-provisioner-root
15 # Reclaim policy for the main nfs volume
16 reclaimPolicy: Retain
17
18 # For creating the StorageClass automatically:
19 storageClass:
20 create: true
21
22 # Set a provisioner name. If unset, a name will be generated.
23 # provisionerName:
24
25 # Set StorageClass as the default StorageClass
26 # Ignored if storageClass.create is false
27 defaultClass: false
28
29 # Set a StorageClass name
30 # Ignored if storageClass.create is false
31 name: nfs-client # StorageClass name
32
33 # Allow volume to be expanded dynamically
34 allowVolumeExpansion: true
35
36 # Method used to reclaim an obsoleted volume
37 reclaimPolicy: Delete
38
39 # When set to false your PVs will not be archived by the provisioner upon deletion of the PVC.
40 archiveOnDelete: false # do not archive
41
42 # If it exists and has 'delete' value, delete the directory. If it exists and has 'retain' value, save the directory.
43 # Overrides archiveOnDelete.
44 # Ignored if value not set.
...
49 pathPattern:
50
51 # Set access mode - ReadWriteOnce, ReadOnlyMany or ReadWriteMany
52 accessModes: ReadWriteOnce
53
54 # Set volume bindinng mode - Immediate or WaitForFirstConsumer
55 volumeBindingMode: Immediate
56
57 # Storage class annotations
58 annotations: {}
59
60 leaderElection:
61 # When set to false leader election will be disabled
62 enabled: true
63
64 ## For RBAC support:
65 rbac:
66 # Specifies whether RBAC resources should be created
67 create: true
68
...
71 podSecurityPolicy:
72 enabled: true # must be set on OKD. Very important
73
74 # Deployment pod annotations
75 podAnnotations: {}
76
80 #podSecurityContext: {}
81
82 podSecurityContext:
83 fsGroup: 1000670000 #1001 : the namespace uid-range value
Now install with helm.
[root@bastion dynamic_provisioning]# helm install nfs-subdir-external-provisioner -f values.yaml nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -n shclub
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/okd4/auth/kubeconfig
NAME: nfs-subdir-external-provisioner
LAST DEPLOYED: Wed Aug 23 15:25:10 2023
NAMESPACE: shclub
STATUS: deployed
REVISION: 1
TEST SUITE: None
You can confirm that a StorageClass named nfs-client has been created.
[root@bastion dynamic_provisioning]# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client cluster.local/nfs-subdir-external-provisioner Delete Immediate true 12s
Create a PVC to test it (the apply step is sketched after the manifest).
[root@bastion dynamic_provisioning]# vi test_claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pvc-test
spec:
storageClassName: nfs-client # SAME NAME AS THE STORAGECLASS
accessModes:
- ReadWriteMany # must be the same as PersistentVolume
resources:
requests:
storage: 1Gi
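A minimal sketch of applying the test claim, assuming the file above was saved as test_claim.yaml:

```bash
kubectl apply -f test_claim.yaml -n shclub
```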
You can confirm that the PVC was provisioned automatically.
[root@bastion dynamic_provisioning]# kubectl get pvc -n shclub
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc-test Bound pvc-3b872a76-4fe5-4a6d-9c19-9ebdb4c4e58f 1Gi RWX nfs-client 6m36s
Create a harbor directory on the bastion server and create a harbor namespace in OKD.
[root@bastion harbor]# oc new-project harbor
Now using project "harbor" on server "https://api.okd4.ktdemo.duckdns.org:6443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app rails-postgresql-example
to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname
Grant the required SCCs to the namespace.
[root@bastion harbor]# oc adm policy add-scc-to-user anyuid system:serviceaccount:harbor:default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "default"
[root@bastion harbor]# oc adm policy add-scc-to-user privileged system:serviceaccount:harbor:default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "default"
Add the harbor repository. This is unnecessary if you use the bitnami chart; the steps below use the harbor chart.
[root@bastion harbor]# helm repo add harbor https://helm.goharbor.io
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/okd4/auth/kubeconfig
"harbor" has been added to your repositories
[root@bastion harbor]# helm repo list
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/okd4/auth/kubeconfig
NAME URL
bitnami https://charts.bitnami.com/bitnami
harbor https://helm.goharbor.io
[root@bastion harbor]# helm search repo harbor
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/okd4/auth/kubeconfig
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/harbor 16.7.4 2.8.3 Harbor is an open source trusted cloud-native r...
harbor/harbor 1.12.4 2.8.4 An open source trusted cloud native registry th...
To adjust the values for OpenShift, download them to harbor_values.yaml.
[root@bastion harbor]# helm show values harbor/harbor > harbor_values.yaml
Modify the following parts of harbor_values.yaml.
1 expose:
2
4 type: ingress # set to ingress
5 tls:
6 # Enable TLS or not.
7 # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress"
8 # Note: if the "expose.type" is "ingress" and TLS is disabled,
9 # the port must be included in the command when pulling/pushing images.
10 # Refer to https://github.com/goharbor/harbor/issues/5291 for details.
11 enabled: true # must be true so that TLS is configured on the route/ingress. Very important
...
34 ingress:
35 hosts:
36 core: myharbor.apps.okd4.ktdemo.duckdns.org # set to your own URL
37 notary: notray-harbor.apps.okd4.ktdemo.duckdns.org #notary.harbor.domain
...
126 # If Harbor is deployed behind the proxy, set it as the URL of proxy
127 externalURL: https://myharbor.apps.okd4.ktdemo.duckdns.org # must be set correctly for docker push to work and for the push command to be displayed properly
...
203 resourcePolicy: "keep" # with keep, the PVCs are not deleted even if the helm release is removed
204 persistentVolumeClaim:
205 registry:
208 existingClaim: ""
...
212 storageClass: "nfs-client" # set the StorageClass
213 subPath: ""
214 accessMode: ReadWriteOnce
215 size: 5Gi
216 annotations: {}
217 jobservice:
218 jobLog:
219 existingClaim: ""
220 storageClass: "nfs-client" # set the StorageClass
221 subPath: ""
222 accessMode: ReadWriteOnce
223 size: 1Gi
224 annotations: {}
225 # If external database is used, the following settings for database will
226 # be ignored
227 database:
228 existingClaim: ""
229 storageClass: "nfs-client" # set the StorageClass
230 subPath: ""
231 accessMode: ReadWriteOnce
232 size: 1Gi
233 annotations: {}
234 # If external Redis is used, the following settings for Redis will
...
236 redis:
237 existingClaim: ""
238 storageClass: "nfs-client" # set the StorageClass
239 subPath: ""
240 accessMode: ReadWriteOnce
241 size: 1Gi
242 annotations: {}
243 trivy:
244 existingClaim: ""
245 storageClass: "nfs-client" # set the StorageClass
246 subPath: ""
247 accessMode: ReadWriteOnce
248 size: 5Gi
249 annotations: {}
...
267 # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
268 # "oss" and fill the information needed in the corresponding section. The type
269 # must be "filesystem" if you want to use persistent volumes for registry
270 type: s3 # choose the storage type. We use MinIO object storage, so set it to s3
...
290 s3:
291 # Set an existing secret for S3 accesskey and secretkey
292 # keys in the secret should be REGISTRY_STORAGE_S3_ACCESSKEY and REGISTRY_STORAGE_S3_SECRETKEY for registry
293 region: ap-northeast-2 # the region configured in MinIO
294 bucket: harbor-registry
295 accesskey: "xs2Bg88****"
296 secretkey: "7A2HdVf*******K"
297 regionendpoint: "http://my-minio.minio.svc.cluster.local:9000" # in-cluster Kubernetes service URL
298 #existingSecret: ""
299 #region: us-west-1
300 #bucket: bucketname
301 #accesskey: awsaccesskey
302 #secretkey: awssecretkey
303 #regionendpoint: http://myobjects.local
304 #encrypt: false
305 #keyid: mykeyid
306 #secure: true
307 #skipverify: false
308 #v4auth: true
309 #chunksize: "5242880"
310 #rootdirectory: /s3/object/name/prefix
311 #storageclass: STANDARD
312 #multipartcopychunksize: "33554432"
313 #multipartcopymaxconcurrency: 100
314 #multipartcopythresholdsize: "33554432"
Now install using the modified harbor_values.yaml.
[root@bastion harbor]# vi harbor_values.yaml
[root@bastion harbor]# helm install my-harbor -f harbor_values.yaml harbor/harbor -n harbor
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/okd4/auth/kubeconfig
NAME: my-harbor
LAST DEPLOYED: Thu Aug 24 15:02:37 2023
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://core.harbor.domain
For more details, please visit https://github.com/goharbor/harbor
Check that the services, pods, and PVCs have been created.
[root@bastion harbor]# kubectl get svc -n harbor
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
harbor ClusterIP 172.30.232.108 <none> 80/TCP,443/TCP,4443/TCP 9s
my-harbor-core ClusterIP 172.30.73.246 <none> 80/TCP 10s
my-harbor-database ClusterIP 172.30.134.241 <none> 5432/TCP 9s
my-harbor-jobservice ClusterIP 172.30.149.250 <none> 80/TCP 9s
my-harbor-notary-server ClusterIP 172.30.112.41 <none> 4443/TCP 9s
my-harbor-notary-signer ClusterIP 172.30.46.231 <none> 7899/TCP 9s
my-harbor-portal ClusterIP 172.30.77.199 <none> 80/TCP 9s
my-harbor-redis ClusterIP 172.30.253.146 <none> 6379/TCP 9s
my-harbor-registry ClusterIP 172.30.83.190 <none> 5000/TCP,8080/TCP 9s
my-harbor-trivy ClusterIP 172.30.76.51 <none> 8080/TCP 9s
[root@bastion harbor]# kubectl get po -n harbor
NAME READY STATUS RESTARTS AGE
my-harbor-core-67587bcbc4-x6whs 1/1 Running 0 6m35s
my-harbor-database-0 1/1 Running 0 6m34s
my-harbor-jobservice-599859fd57-4qvkk 1/1 Running 2 (6m19s ago) 6m35s
my-harbor-notary-server-bf5f9b94f-pmxd4 1/1 Running 0 6m35s
my-harbor-notary-signer-55db47f788-srzq8 1/1 Running 0 6m34s
my-harbor-portal-694bf8c545-p56f6 1/1 Running 0 6m34s
my-harbor-redis-0 1/1 Running 0 6m34s
my-harbor-registry-6c57986d5d-84fzb 2/2 Running 0 6m34s
my-harbor-trivy-0 1/1 Running 0 6m34s
[root@bastion harbor]# kubectl get pvc -n harbor
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-my-harbor-redis-0 Bound pvc-8c45a518-871e-4c4c-bd15-e91243ba5ade 1Gi RWO nfs-client 12m
data-my-harbor-trivy-0 Bound pvc-88724475-62b8-44d7-93c6-70ec4f2b48f8 5Gi RWO nfs-client 12m
database-data-my-harbor-database-0 Bound pvc-d7c7d316-7d28-4258-b326-925e25c8bb68 1Gi RWO nfs-client 12m
my-harbor-jobservice Bound pvc-e0a9fcf7-d360-4c56-9a4d-0500572cbfc1 1Gi RWO nfs-client 12m
my-harbor-registry Bound pvc-7a0cb3da-87ba-4758-ba4d-aaa78baec07c 5Gi RWO nfs-client 12m
Also check the ingresses and routes.
[root@bastion harbor]# kubectl get route -n harbor
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
harbor harbor-harbor.apps.okd4.ktdemo.duckdns.org my-harbor-portal http-web edge/Allow None
my-harbor-ingress-2cggf myharbor.apps.okd4.ktdemo.duckdns.org /service/ my-harbor-core http-web edge/Redirect None
my-harbor-ingress-5xdrs myharbor.apps.okd4.ktdemo.duckdns.org /api/ my-harbor-core http-web edge/Redirect None
my-harbor-ingress-bxkmn myharbor.apps.okd4.ktdemo.duckdns.org /chartrepo/ my-harbor-core http-web edge/Redirect None
my-harbor-ingress-hqspm myharbor.apps.okd4.ktdemo.duckdns.org /v2/ my-harbor-core http-web edge/Redirect None
my-harbor-ingress-jx9x7 myharbor.apps.okd4.ktdemo.duckdns.org /c/ my-harbor-core http-web edge/Redirect None
my-harbor-ingress-notary-6j87b notray-harbor.apps.okd4.ktdemo.duckdns.org / my-harbor-notary-server <all> edge/Redirect None
my-harbor-ingress-tsr84 myharbor.apps.okd4.ktdemo.duckdns.org / my-harbor-portal <all> edge/Redirect None
[root@bastion harbor]# kubectl get ing -n harbor
NAME CLASS HOSTS ADDRESS PORTS AGE
my-harbor-ingress <none> myharbor.apps.okd4.ktdemo.duckdns.org router-default.apps.okd4.ktdemo.duckdns.org 80, 443 10m
my-harbor-ingress-notary <none> notray-harbor.apps.okd4.ktdemo.duckdns.org router-default.apps.okd4.ktdemo.duckdns.org 80, 443 10m
To access the portal, find the host used by the portal service.
In a web browser, open https://myharbor.apps.okd4.ktdemo.duckdns.org, log in with the admin account and the initial password Harbor12345, then change the password.
Configuration reference: https://github.com/shclub/edu_homework/blob/master/homework.md
After creating an account, go to Projects, create a repository (project), and check Public.
In the created repository, check the tag / push commands.
To push an image from the command line, an insecure registry must be configured. On a Mac, open Docker Desktop Preferences, add the harbor URL under Docker Engine, and restart Docker.
On Linux:
For docker, add the following to /etc/docker/daemon.json and restart Docker.
{
"insecure-registries" : ["myharbor.apps.okd4.ktdemo.duckdns.org"]
}
Be sure to restart Docker.
# flush changes
sudo systemctl daemon-reload
# restart docker
sudo systemctl restart docker
For podman, go to /etc/containers/registries.conf.d and create a file named myregistry.conf.
[root@bastion containers]# cd /etc/containers/registries.conf.d
[root@bastion registries.conf.d]# vi myregistry.conf
Set location to the harbor address and set the insecure option to true.
[[registry]]
location = "myharbor.apps.okd4.ktdemo.duckdns.org"
insecure = true
Unlike docker, podman does not need to be restarted.
List the container images currently downloaded.
[root@bastion registries.conf.d]# podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/shclub/edu1 master 26efc9f33ac0 24 hours ago 166 MB
Log in, tag the image, and push it.
- The edu in the middle of the path is the harbor project (repository) name.
[root@bastion registries.conf.d]# podman login myharbor.apps.okd4.ktdemo.duckdns.org
Username: shclub
Password:
Login Succeeded!
[root@bastion registries.conf.d]# podman tag docker.io/shclub/edu1 myharbor.apps.okd4.ktdemo.duckdns.org/edu/shclub/edu1
[root@bastion registries.conf.d]# podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/shclub/edu1 master 26efc9f33ac0 24 hours ago 166 MB
myharbor.apps.okd4.ktdemo.duckdns.org/edu/shclub/edu1 latest 26efc9f33ac0 24 hours ago 166 MB
[root@bastion registries.conf.d]# podman push myharbor.apps.okd4.ktdemo.duckdns.org/edu/shclub/edu1
Getting image source signatures
Copying blob bc9e2ff0c9d4 done
Copying blob 870fb40b1d5c done
Copying blob 690a4af2c9f7 done
Copying blob 511780f88f80 done
Copying blob 0048c3fcf4cb done
Copying blob 5b60283f3630 done
Copying blob 93b4d35bad0a done
Copying blob 6c2a89cbf5af done
Copying blob d404f6c08f8b done
Copying config 26efc9f33a done
Writing manifest to image destination
Storing signatures
The push succeeded; verify it in Harbor.
Because the registry storage was configured on MinIO, also verify it in MinIO.
In the Object Browser you can see the storage usage.
Click harbor-registry and navigate to harbor-registry/docker/registry/v2; clicking the repositories entry shows the repositories.
The metadata is under the image name path harbor-registry/docker/registry/v2/repositories/edu/shclub/edu1, where three folders are visible.
The data is under harbor-registry/docker/registry/v2/blobs/sha256, where numbered folders contain the stored data.
On the bastion server, add the new node to haproxy.cfg.
Modify /etc/haproxy/haproxy.cfg as follows (a configuration check is sketched after the excerpt).
backend openshift_api_backend
mode tcp
balance source
server bootstrap 192.168.1.128:6443 check # bootstrap server
server okd-1 192.168.1.146:6443 check # okd master/worker node
server okd-2 192.168.1.148:6443 check # added worker node
# OKD Machine Config Server
frontend okd_machine_config_server_frontend
mode tcp
bind *:22623
default_backend okd_machine_config_server_backend
backend okd_machine_config_server_backend
mode tcp
balance source
server bootstrap 192.168.1.128:22623 check # bootstrap server
server okd-1 192.168.1.146:22623 check # okd master/worker node
server okd-2 192.168.1.148:22623 check # added worker node
# OKD Ingress - layer 4 tcp mode for each. Ingress Controller will handle layer 7.
frontend okd_http_ingress_frontend
bind *:80
default_backend okd_http_ingress_backend
mode tcp
backend okd_http_ingress_backend
balance source
mode tcp
server okd-1 192.168.1.146:80 check # okd master/worker node
server okd-2 192.168.1.148:80 check # added worker node
frontend okd_https_ingress_frontend
bind *:443
default_backend okd_https_ingress_backend
mode tcp
backend okd_https_ingress_backend
mode tcp
balance source
server okd-1 192.168.1.146:443 check
server okd-2 192.168.1.148:443 check # added worker node
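Before restarting, the modified configuration can be validated with haproxy's built-in check mode; a small sketch:

```bash
# -c validates the configuration file without starting the proxy.
haproxy -c -f /etc/haproxy/haproxy.cfg
```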
Restart haproxy.
[root@bastion shclub]# systemctl restart haproxy
Move to the /var/named directory.
[root@bastion shclub]# cd /var/named
Edit the okd4.ktdemo.duckdns.org.zone file (forward DNS zone).
- Make sure the IPs and hostnames are correct.
[root@bastion named]# ls
data dynamic named.ca named.empty named.localhost named.loopback slaves
[root@bastion named]# vi okd4.ktdemo.duckdns.org.zone
$TTL 1D
@ IN SOA @ ns.ktdemo.duckdns.org. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
@ IN NS ns.ktdemo.duckdns.org.
@ IN A 192.168.1.247 ;
; Ancillary services
lb.okd4 IN A 192.168.1.247
; Bastion or Jumphost
ns IN A 192.168.1.247 ;
; OKD Cluster
bastion.okd4 IN A 192.168.1.247
bootstrap.okd4 IN A 192.168.1.128
okd-1.okd4 IN A 192.168.1.146
okd-2.okd4 IN A 192.168.1.148
api.okd4 IN A 192.168.1.247
api-int.okd4 IN A 192.168.1.247
*.apps.okd4 IN A 192.168.1.247
Edit the 1.168.192.in-addr.rev file (reverse DNS zone).
[root@bastion named]# vi 1.168.192.in-addr.rev
$TTL 1D
@ IN SOA ktdemo.duckdns.org. ns.ktdemo.duckdns.org. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
@ IN NS ns.
247 IN PTR ns.
247 IN PTR bastion.okd4.ktdemo.duckdns.org.
128 IN PTR bootstrap.okd4.ktdemo.duckdns.org.
146 IN PTR okd-1.okd4.ktdemo.duckdns.org.
148 IN PTR okd-2.okd4.ktdemo.duckdns.org.
247 IN PTR api.okd4.ktdemo.duckdns.org.
247 IN PTR api-int.okd4.ktdemo.duckdns.org.
Set the zone file permissions and restart the named service; a permission and zone-check sketch follows the restart command.
[root@bastion named]# systemctl restart named
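The permission step and a syntax check are not captured above; a hedged sketch, assuming the default named file locations:

```bash
# Make the zone files readable by named (ownership/mode may differ on your system).
chown root:named /var/named/okd4.ktdemo.duckdns.org.zone /var/named/1.168.192.in-addr.rev

# Validate both zones before relying on them.
named-checkzone okd4.ktdemo.duckdns.org /var/named/okd4.ktdemo.duckdns.org.zone
named-checkzone 1.168.192.in-addr.arpa /var/named/1.168.192.in-addr.rev

# Quick lookup against the bastion DNS to confirm the new worker resolves.
dig @192.168.1.247 okd-2.okd4.ktdemo.duckdns.org +short
```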
Create a CoreOS-based worker node VM on proxmox (creation steps omitted).
- Download location: https://builds.coreos.fedoraproject.org/browser?stream=stable&arch=x86_64
- Version: fedora-coreos-35.20220410.3.1-live.x86_64.iso
First, check the network device name so the network can be configured.
[root@localhost core]# nmcli device
DEVICE TYPE STATE CONNECTION
ens18 ethernet connected Wired connection 1
lo loopback unmanaged --
Create a connection named ens18.
[root@localhost core]# nmcli connection add type ethernet autoconnect yes con-name ens18 ifname ens18
Connection 'ens18' (c8971315-71e5-40a1-8b16-9c1ef5b354c8) successfully added.
Configure the network.
- ip: set the okd-2 server to 192.168.1.148/24.
- dns: set to the bastion server, 192.168.1.247.
- gateway: set to the router IP, 192.168.1.1 (using the bastion server IP also works).
- dns-search: set to okd4.ktdemo.duckdns.org (cluster name + . + base domain).
A sketch for activating the connection follows the commands below.
[root@localhost core]# nmcli connection modify ens18 ipv4.addresses 192.168.1.148/24 ipv4.method manual
[root@localhost core]# nmcli connection modify ens18 ipv4.dns 192.168.1.247
[root@localhost core]# nmcli connection modify ens18 ipv4.gateway 192.168.1.1
[root@localhost core]# nmcli connection modify ens18 ipv4.dns-search okd4.ktdemo.duckdns.org
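A small sketch for activating the connection and confirming the settings, assuming the commands above succeeded:

```bash
# Re-activate the connection so the static settings take effect.
nmcli connection up ens18

# Confirm the address, DNS, and gateway.
nmcli device show ens18 | grep -E 'IP4.ADDRESS|IP4.DNS|IP4.GATEWAY'
```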
Install the worker node (okd-2).
[root@localhost core]# coreos-installer install /dev/sda -I http://192.168.1.247:8080/ign/worker.ign --insecure-ignition --copy-network
Installing Fedora CoreOS 35.20220410.3.1 x86_64 (512-byte sectors)
> Read disk 2.5 GiB/2.5 GiB (100%)
Writing Ignition config
Copying networking configuration from /etc/NetworkManager/system-connections/
Copying /etc/NetworkManager/system-connections/ens18.nmconnection to installed system
Install complete.
Set the hostname and reboot.
[root@localhost core]# hostnamectl set-hostname okd-2.okd4.ktdemo.duckdns.org
[root@localhost core]# reboot now
Monitor from the bastion server with the command below; when It is now safe to remove the bootstrap resources appears, the worker node installation has completed normally.
[root@bastion ~]# /usr/local/bin/openshift-install --dir=/root/okd4 wait-for bootstrap-complete --log-level=debug
DEBUG OpenShift Installer 4.10.0-0.okd-2022-03-07-131213
DEBUG Built from commit 3b701903d96b6375f6c3852a02b4b70fea01d694
INFO Waiting up to 20m0s (until 11:21AM) for the Kubernetes API at https://api.okd4.ktdemo.duckdns.org:6443...
INFO API v1.23.3-2003+e419edff267ffa-dirty up
INFO Waiting up to 30m0s (until 11:31AM) for bootstrapping to complete...
DEBUG Bootstrap status: complete
INFO It is now safe to remove the bootstrap resources
DEBUG Time elapsed per stage:
DEBUG Bootstrap Complete: 1s
DEBUG API: 1s
INFO Time elapsed: 1s
It takes a while for the worker node to reboot; once it is done, you must check the CSRs on the bastion server and approve them for the join to complete.
First check the nodes; the worker node has not joined yet.
[root@bastion ~]# oc get nodes
NAME STATUS ROLES AGE VERSION
okd-1.okd4.ktdemo.duckdns.org Ready master,worker 17d v1.23.3+759c22b
List the CSRs and approve them; a bulk-approval sketch follows the individual approvals.
[root@bastion ~]# oc get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
csr-5lgxl 17s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending
csr-vhvjn 3m28s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none>
[root@bastion ~]# oc adm certificate approve csr-5lgxl
certificatesigningrequest.certificates.k8s.io/csr-5lgxl approved
[root@bastion ~]# oc adm certificate approve csr-vhvjn
certificatesigningrequest.certificates.k8s.io/csr-vhvjn approved
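When several CSRs are pending, they can also be approved in one pass; a hedged one-liner sketch:

```bash
# Approve every CSR currently listed (use with care on a shared cluster).
oc get csr -o name | xargs oc adm certificate approve
```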
Listing the nodes again shows that it has joined, with status NotReady.
[root@bastion ~]# oc get nodes
NAME STATUS ROLES AGE VERSION
okd-1.okd4.ktdemo.duckdns.org Ready master,worker 17d v1.23.3+759c22b
okd-2.okd4.ktdemo.duckdns.org NotReady worker 21s v1.23.3+759c22b
Listing the CSRs again shows one still pending; OKD's machine config mechanism needs time to apply the master node configuration to the worker node.
[root@bastion ~]# oc get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
csr-5lgxl 2m29s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Approved,Issued
csr-j2m7b 17s kubernetes.io/kubelet-serving system:node:okd-2.okd4.ktdemo.duckdns.org <none> Pending
csr-vhvjn 5m40s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none>
After a little more time, the status changes to Ready.
[root@bastion ~]# oc get nodes
NAME STATUS ROLES AGE VERSION
okd-1.okd4.ktdemo.duckdns.org Ready master,worker 17d v1.23.3+759c22b
okd-2.okd4.ktdemo.duckdns.org Ready worker 5m10s v1.23.3+759c22b
Check the machine configs.
[root@bastion ~]# oc get mc
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE
00-master 14a1ca2cb91ff7e0faf9146b21ba12cd6c652d22 3.2.0 17d
00-worker 14a1ca2cb91ff7e0faf9146b21ba12cd6c652d22 3.2.0 17d
01-master-container-runtime 14a1ca2cb91ff7e0faf9146b21ba12cd6c652d22 3.2.0 17d
01-master-kubelet 14a1ca2cb91ff7e0faf9146b21ba12cd6c652d22 3.2.0 17d
01-worker-container-runtime 14a1ca2cb91ff7e0faf9146b21ba12cd6c652d22 3.2.0 17d
01-worker-kubelet 14a1ca2cb91ff7e0faf9146b21ba12cd6c652d22 3.2.0 17d
99-master-generated-registries 14a1ca2cb91ff7e0faf9146b21ba12cd6c652d22 3.2.0 17d
99-master-okd-extensions 3.2.0 17d
99-master-ssh 3.2.0 17d
99-okd-master-disable-mitigations 3.2.0 17d
99-okd-worker-disable-mitigations 3.2.0 17d
99-worker-generated-registries 14a1ca2cb91ff7e0faf9146b21ba12cd6c652d22 3.2.0 17d
99-worker-okd-extensions 3.2.0 17d
99-worker-ssh 3.2.0 17d
rendered-master-f14cbe675b651ca064299b536d4cf820 14a1ca2cb91ff7e0faf9146b21ba12cd6c652d22 3.2.0 17d
rendered-worker-f9ec036401f7136f50343298d7955ab9 14a1ca2cb91ff7e0faf9146b21ba12cd6c652d22 3.2.0 17d
When UPDATING is False, the update is complete.
[root@bastion ~]# oc get mcp
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-f14cbe675b651ca064299b536d4cf820 True False False 1 1 1 0 17d
worker rendered-worker-f9ec036401f7136f50343298d7955ab9 True False False 1 1 1 0
To move some of the namespaces created on the combined master/worker node onto the new worker node, configure a node selector.
Edit the okd-1 and okd-2 nodes and set the following labels:
- okd-1: devops: "true"
- okd-2: edu: "true"
[root@bastion ~]# kubectl edit node okd-1.okd4.ktdemo.duckdns.org
Apply the node selector to the namespace (a non-interactive alternative using oc label/annotate is sketched below).
openshift.io/node-selector: edu=true
[root@bastion ~]# kubectl edit namespace edu3
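The same labels and node selector can be applied without kubectl edit; a hedged sketch using the label keys and values chosen above:

```bash
# Label the nodes.
oc label node okd-1.okd4.ktdemo.duckdns.org devops=true --overwrite
oc label node okd-2.okd4.ktdemo.duckdns.org edu=true --overwrite

# Pin the edu3 namespace to nodes labeled edu=true.
oc annotate namespace edu3 openshift.io/node-selector=edu=true --overwrite
```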
Create a pod to test; you can see it is scheduled on okd-2.
[root@bastion ~]# kubectl run nginx --image=nginx -n edu3
pod/nginx created
[root@bastion ~]# kubectl get po -n edu3 -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 20s 10.129.0.10 okd-2.okd4.ktdemo.duckdns.org <none> <none>
References
- Overview: https://velog.io/@_gyullbb/series/OKD
- OKD installation: https://www.server-world.info/en/note?os=CentOS_Stream_8&p=okd4&f=1
- Openshift installation (main): https://hkjeon2.tistory.com/104
- OKD installation (sub): https://www.okd.io/guides/upi-sno/ #post-install
- Openshift installation (bare metal): https://gruuuuu.github.io/ocp/ocp4.7-restricted/
- Authentication: https://gruuuuu.github.io/ocp/ocp4-authentication/
- Openshift mirror site: https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.8/latest/
- HA Proxy: https://hoing.io/archives/2196
- DNS server setup: https://it-serial.tistory.com/entry/Linux-CentOS-7-DNS-%EC%84%9C%EB%B2%84-%EA%B5%AC%EC%B6%95-%EB%8F%84%EB%A9%94%EC%9D%B8-%EC%84%A4%EC%A0%95
- Building a virtual private network inside Proxmox: https://hwanstory.kr/@kim-hwan/posts/Proxmox-Virtual-Private-Network-Configuration
- Pull secret setup: https://www.ibm.com/docs/ko/mas-cd/continuous-delivery?topic=platform-setting-up-bastion-host
- OpenShift usage: https://sysdocu.tistory.com/1765 , https://sysdocu.tistory.com/1774
- OKD architecture: https://daaa0555.tistory.com/479
- ArgoCD: https://www.skyer9.pe.kr/wordpress/?p=6845
- ArgoCD Redis connection error (networkpolicy): https://www.xiexianbin.cn/cicd/argo-cd/deploy/index.html
- Harbor installation: https://computingforgeeks.com/install-harbor-image-registry-on-kubernetes-openshift-with-helm-chart/?amp
journalctl -b -f -u crio.service
journalctl -xeu kubelet -f
View route information (service iptables rules)
[root@okd-1 core]# iptables-save | grep 443
-A KUBE-SERVICES -d 172.30.176.17/32 -p tcp -m comment --comment "openshift-config-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.80.69/32 -p tcp -m comment --comment "openshift-kube-controller-manager-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.17.22/32 -p tcp -m comment --comment "openshift-cluster-storage-operator/csi-snapshot-webhook:webhook has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.47.60/32 -p tcp -m comment --comment "openshift-service-ca-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.157.102/32 -p tcp -m comment --comment "openshift-authentication/oauth-openshift:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.4.82/32 -p tcp -m comment --comment "openshift-oauth-apiserver/api:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.61.212/32 -p tcp -m comment --comment "openshift-console/console:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.189.3/32 -p tcp -m comment --comment "openshift-operator-lifecycle-manager/catalog-operator-metrics:https-metrics has no endpoints" -m tcp --dport 8443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.92.217/32 -p tcp -m comment --comment "openshift-machine-api/cluster-baremetal-operator-service:https has no endpoints" -m tcp --dport 8443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.148.240/32 -p tcp -m comment --comment "openshift-operator-lifecycle-manager/packageserver-service:5443 has no endpoints" -m tcp --dport 5443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.253.148/32 -p tcp -m comment --comment "openshift-kube-storage-version-migrator-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.85.240/32 -p tcp -m comment --comment "openshift-authentication-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.158.160/32 -p tcp -m comment --comment "openshift-console-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.179.123/32 -p tcp -m comment --comment "openshift-cloud-credential-operator/cco-metrics:metrics has no endpoints" -m tcp --dport 8443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.27.215/32 -p tcp -m comment --comment "openshift-monitoring/prometheus-adapter:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.19.233/32 -p tcp -m comment --comment "openshift-cluster-storage-operator/csi-snapshot-controller-operator-metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.160.67/32 -p tcp -m comment --comment "openshift-multus/multus-admission-controller:webhook has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.20.56/32 -p tcp -m comment --comment "openshift-cluster-storage-operator/cluster-storage-operator-metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.139.147/32 -p tcp -m comment --comment "openshift-apiserver/api:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.131.213/32 -p tcp -m comment --comment "openshift-insights/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.231.206/32 -p tcp -m comment --comment "openshift-machine-api/cluster-autoscaler-operator:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.129.139/32 -p tcp -m comment --comment "openshift-kube-apiserver-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.254.166/32 -p tcp -m comment --comment "openshift-machine-api/machine-api-operator:https has no endpoints" -m tcp --dport 8443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.33.145/32 -p tcp -m comment --comment "openshift-controller-manager/controller-manager:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.78.19/32 -p tcp -m comment --comment "openshift-machine-api/machine-api-operator-webhook:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.20.240/32 -p tcp -m comment --comment "openshift-etcd-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.173.83/32 -p tcp -m comment --comment "openshift-kube-scheduler-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.70.103/32 -p tcp -m comment --comment "openshift-controller-manager-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.160.67/32 -p tcp -m comment --comment "openshift-multus/multus-admission-controller:metrics has no endpoints" -m tcp --dport 8443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.211.88/32 -p tcp -m comment --comment "openshift-operator-lifecycle-manager/olm-operator-metrics:https-metrics has no endpoints" -m tcp --dport 8443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.167.51/32 -p tcp -m comment --comment "openshift-machine-api/cluster-baremetal-webhook-service has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 172.30.0.6/32 -p tcp -m comment --comment "openshift-apiserver-operator/metrics:https has no endpoints" -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SEP-O36DLNQIDMAMWMZX -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.1.146:6443
-A KUBE-SEP-OJQBONVVMSWJJ73L -p tcp -m comment --comment "harbor/my-harbor-notary-server" -m tcp -j DNAT --to-destination 10.129.0.23:4443
-A KUBE-SEP-QQSOREN4FLMPNLUL -p tcp -m comment --comment "openshift-kube-apiserver/apiserver:https" -m tcp -j DNAT --to-destination 192.168.1.146:6443
-A KUBE-SEP-YO6BHA4K2YLFIUW6 -p tcp -m comment --comment "openshift-ingress/router-internal-default:https" -m tcp -j DNAT --to-destination 192.168.1.146:443
-A KUBE-SERVICES -d 172.30.254.208/32 -p tcp -m comment --comment "harbor/my-harbor-notary-server cluster IP" -m tcp --dport 4443 -j KUBE-SVC-R5E3RRBI3OXL227A
-A KUBE-SERVICES -d 172.30.239.126/32 -p tcp -m comment --comment "openshift-kube-scheduler/scheduler:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-OGQPOTBHHZMRDA43
-A KUBE-SERVICES -d 172.30.100.183/32 -p tcp -m comment --comment "openshift-kube-controller-manager/kube-controller-manager:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-VQFT5ZCKL2KRMQ3Q
-A KUBE-SERVICES -d 172.30.154.250/32 -p tcp -m comment --comment "argocd/argocd-server:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-A32MGCDFPRQGQDBB
-A KUBE-SERVICES -d 172.30.85.98/32 -p tcp -m comment --comment "openshift-ingress/router-internal-default:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-PIUKAOOLWSYDMVAC
-A KUBE-SERVICES -d 172.30.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 172.30.6.240/32 -p tcp -m comment --comment "openshift-kube-apiserver/apiserver:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-X7YGTN7QRQI2VNWZ
-A KUBE-SVC-A32MGCDFPRQGQDBB -d 172.30.154.250/32 ! -i tun0 -p tcp -m comment --comment "argocd/argocd-server:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -d 172.30.0.1/32 ! -i tun0 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-OGQPOTBHHZMRDA43 -d 172.30.239.126/32 ! -i tun0 -p tcp -m comment --comment "openshift-kube-scheduler/scheduler:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-PIUKAOOLWSYDMVAC -d 172.30.85.98/32 ! -i tun0 -p tcp -m comment --comment "openshift-ingress/router-internal-default:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-R5E3RRBI3OXL227A -d 172.30.254.208/32 ! -i tun0 -p tcp -m comment --comment "harbor/my-harbor-notary-server cluster IP" -m tcp --dport 4443 -j KUBE-MARK-MASQ
-A KUBE-SVC-VQFT5ZCKL2KRMQ3Q -d 172.30.100.183/32 ! -i tun0 -p tcp -m comment --comment "openshift-kube-controller-manager/kube-controller-manager:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-X7YGTN7QRQI2VNWZ -d 172.30.6.240/32 ! -i tun0 -p tcp -m comment --comment "openshift-kube-apiserver/apiserver:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
Set the journal disk size; a persistent journald setting is sketched after the commands below.
[root@okd-1 core]# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda4 120G 46G 75G 38% /
[root@okd-1 core]# journalctl --disk-usage
Archived and active journals take up 4.0G in the file system.
[root@okd-1 core]# journalctl --vacuum-size=3758096384
[root@okd-1 core]# journalctl --vacuum-files=100
Vacuuming done, freed 0B of archived journals from /run/log/journal.
Vacuuming done, freed 0B of archived journals from /var/log/journal.
Vacuuming done, freed 0B of archived journals from /var/log/journal/126376a91cbf47ffab943ee1bddd8398.
Vacuuming done, freed 0B of archived journals from /run/log/journal/c4748c3389184cc596c377b877a5bb0f.
[root@okd-1 core]# journalctl --disk-usage
Archived and active journals take up 3.5G in the file system.
[root@okd-1 core]# journalctl --vacuum-time=1d
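Instead of vacuuming manually, the journal size can be capped persistently; a hedged sketch using a journald drop-in (the 3G figure is only an example):

```bash
# Cap persistent journal usage via a drop-in, then restart journald.
mkdir -p /etc/systemd/journald.conf.d
cat <<'EOF' > /etc/systemd/journald.conf.d/size.conf
[Journal]
SystemMaxUse=3G
EOF
systemctl restart systemd-journald
```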
[root@bastion ~]# oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.10.0-0.okd-2022-03-07-131213 True True 59s Working towards 4.10.0-0.okd-2022-05-07-021833: 9 of 778 done (1% complete)
Passing first-boot options
[root@localhost core]# coreos-installer install /dev/sda -I http://192.168.1.71:8080/ign/bootstrap.ign --insecure-ignition --firstboot-args "rd.neednet=1 ip=192.168.1.128::192.168.1.1:255.255.255.0:bootstrap.okd4.ktdemo.duckdns.org:ens18:none nameserver=192.168.1.71"