
MCS Demo with 3 k3s clusters by ErieCanal and Karmada


Building on the Simple MCS Demo, we prepared the following demo. In this demo we use Karmada for multi-cluster deployment and ErieCanal for multi-cluster traffic scheduling. Compared with the Simple MCS Demo, the main change is that deployment is done through Karmada: previously the same resources had to be deployed in every cluster, which meant a lot of repetitive work. With Karmada there is no duplication, which greatly reduces the workload and also reduces errors (such as typos) introduced by manual deployment.

A brief introduction to Karmada: Karmada is a multi-cluster deployment tool open-sourced by the Huawei Cloud team, and it can be seen as the successor to and extension of the kubefed project. Karmada builds on the standard Kubernetes api-server, producing what is called the karmada-apiserver. Resources defined through this api server (deployments, services, and so on) can be deployed to the Kubernetes clusters managed by Karmada according to a policy. These resource definitions are compatible with standard Kubernetes resources, so existing definitions can be used directly. Karmada introduces the PropagationPolicy, implemented as a CRD, which defines how resources are deployed across multiple clusters, for example deploying a Deployment with 2 replicas in the first cluster and 3 replicas in the second. Using Karmada greatly reduces the workload and complexity of deploying in a multi-cluster environment.
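To make that idea concrete, here is a minimal sketch (not part of this demo; member1, member2 and the deployment name are placeholders) of a PropagationPolicy that splits a 5-replica Deployment 2/3 across two clusters:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: example            # a Deployment with replicas: 5
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 2           # 2 of the 5 replicas land here
          - targetCluster:
              clusterNames:
                - member2
            weight: 3           # 3 of the 5 replicas land here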

In this demo we create a "three-cluster environment" made up of three k3s clusters. In the first cluster we deploy the Karmada control plane and the consumer service used for the demo; this is a wrapped curl: given a URL, you can curl it from inside the service's container, which is equivalent to attaching to the container and running curl manually. In the second and third clusters we deploy the nginx service from the standard Karmada demo, which returns the default nginx page on port 80. In all three clusters we deploy erie-canal; erie-canal does not distinguish between a control plane and an execution engine at deployment time. The expected result of the demo is this: within the multi-cluster environment managed by "karmada + erie-canal", when a service (curl) accesses a service (nginx) in the other two clusters, erie-canal can schedule the request from cluster 1 to cluster 2 and cluster 3, achieving cross-cluster access.

The steps of this demo are listed below. Some steps are covered in detail in the Simple MCS Demo and in the standard Karmada nginx demo, so they are referenced here rather than repeated. If you are interested in this demo and have any questions, please open an issue on the Erie-Canal project, contact us on Slack or in the WeChat group, or reach out to the Karmada community.

  • We are also bringing all of the functionality shown in this demo to the TrafficGuru console, meaning the whole procedure can be completed from a browser. TrafficGuru is already open source, and we are currently finishing many of the details. If you are interested, please follow the project.

1. Deploy 3 k3s clusters

The details of this step are described here; the following script can be copied and run directly (the demo environment is a minimal installation of the latest Debian LTS):

umount /sys/fs/cgroup/cpu\,cpuacct 
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -s -
  • For users in China, the command below can be used instead of the k3s installation command above; using the k3s China mirror is much faster.
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn INSTALL_K3S_EXEC="--disable traefik" sh -

After this step we have 3 k3s clusters, each with a single node. The IP of cluster1 is 192.168.10.83, cluster2 is 192.168.10.82, and cluster3 is 192.168.10.87.
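As a quick sanity check, each cluster should report a single Ready node (run on each of the three machines):

kubectl get nodes -o wide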

2. Deploy the Karmada control plane in cluster 1

For the Karmada installation I referred to these two documents:

The specific commands are (run on cluster1):

curl -s https://raw.githubusercontent.com/karmada-io/karmada/master/hack/install-cli.sh | sudo bash -s kubectl-karmada
kubectl karmada init
  • This installation method requires a stable network connection to GitHub
  • Pay attention to the information printed once the Karmada installation finishes; some manual configuration needs to be done based on it. A quick check that the control plane is up is shown below
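A quick way to confirm that the control plane came up (assuming the default karmada-system namespace used by kubectl karmada init):

kubectl get pods -n karmada-system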

Register the three clusters with Karmada:

kubectl karmada join default --kubeconfig=/etc/karmada/karmada-apiserver.config --cluster-kubeconfig=karmada-cluster1.yaml --cluster-context=default
kubectl karmada join default1 --kubeconfig=/etc/karmada/karmada-apiserver.config --cluster-kubeconfig=karmada-cluster2.yaml --cluster-context=default
kubectl karmada join default2 --kubeconfig=/etc/karmada/karmada-apiserver.config --cluster-kubeconfig=karmada-cluster3.yaml --cluster-context=default

The karmada-cluster1.yaml used above is as follows:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTnpRek9EZ3pNVFl3SGhjTk1qTXdNVEl5TVRFMU1UVTJXaGNOTXpNd01URTVNVEUxTVRVMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTnpRek9EZ3pNVFl3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFRdkJHYkdoRmRCbFlnbm5WMlYrckJmUFRCUDJDVlNGNU1wRVJFamwveVIKZjRwMHo1YnU3cGpUTUYxa3ZZUkZzT05YS2gxaDhvV0RIdElrbHFBRXYya0lvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTRkRlVoNGxQODRiRTFVTndNWkdKCkhTMWhhd3N3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnS2ladE00N3lrVE5ZckNrNjVqWDFObWdSVURzUm1WdXEKZDBOMkthZGlEZElDSURsNUJuQmUyNjMxZFNBUk9ZRk9BaHpSbjBZRC9YVEc0aXlocTE2Y2hmbCsKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.10.83:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJSGs2WWNXUVp5dWt3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOamMwTXpnNE16RTJNQjRYRFRJek1ERXlNakV4TlRFMU5sb1hEVEkwTURFeQpNakV4TlRFMU5sb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJLOThYVU5pNWZSOEJOaUIKVGxlemZtdVR2NEJRYTFiTkdGWEJMbk5ncUQrUHJLMzBUYnZoZnA0SzdWU0ZvRjQ1eGFqMXFFTmJhRjZRR29magpPSFFGMCtDalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUjhFS1RhVlFhcGhmZGZpQTh3aFRrcUdiMDRkekFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlFQXJ4aHhVQk11c2FpMzExTGRkQnpkR1VIemR2RXdsSmFnYThKTWdVRlMvWklDSUZMelpYazhMUTc5eEhsYwpnM0xSNUV4VVNuTE1BTzc0Z0lYUHR5WUpmNkwrCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUyTnpRek9EZ3pNVFl3SGhjTk1qTXdNVEl5TVRFMU1UVTJXaGNOTXpNd01URTVNVEUxTVRVMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUyTnpRek9EZ3pNVFl3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTSkMra0hrSjZsTEZVN1BJVGo4ZFNXZm5DbE1ySm1mOUluamhTQ0xCRDEKYVYyZEIrcEFCZVpscFNCNWQ2bUY4SWt0bkhzeGNVcndHZUpzQWNkWEVrRnJvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWZCQ2sybFVHcVlYM1g0Z1BNSVU1CktobTlPSGN3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnYUZHK29ZUlFvU3o3a0dyQWM3SlhlRzZ0Rmc0U21aQUQKVXNUSnl4dDYxNFlDSUFZQmhjdVdlR052b0h0NlZzNWd5NjdwS0JKNmZISERvRWhNVVhHOEdFQWsKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSU8yY1RiNng2NFhLQlFDaE5KdzVnZmlZVnBiWkZJRHErYnVacTRqOHNTcS9vQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFcjN4ZFEyTGw5SHdFMklGT1Y3TithNU8vZ0ZCclZzMFlWY0V1YzJDb1A0K3NyZlJOdStGKwpuZ3J0VklXZ1hqbkZxUFdvUTF0b1hwQWFoK000ZEFYVDRBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
  • This file is /etc/rancher/k3s/k3s.yaml after k3s is installed, but the fifth line must be changed to the node's IP instead of the default 127.0.0.1; see the sketch below
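As a rough sketch of that edit (the path and IP below match cluster1 in this demo; repeat with the right IP for each cluster):

cp /etc/rancher/k3s/k3s.yaml karmada-cluster1.yaml
# replace the loopback address on the "server:" line with the node's IP
sed -i 's#https://127.0.0.1:6443#https://192.168.10.83:6443#' karmada-cluster1.yaml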

karmada-cluster2.yaml and karmada-cluster3.yaml are shown below for reference. First, karmada-cluster2.yaml:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTnpRek9UQXdNRGN3SGhjTk1qTXdNVEl5TVRJeU1EQTNXaGNOTXpNd01URTVNVEl5TURBMwpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTnpRek9UQXdNRGN3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSV016OEVVbGlPVzl1WlZtSFJnaFdNa1VORWF1TytPL3ZHNU82bWV2ZE8KUkY1UnM3OE8zZVU4QjFJSE9EYWNUVWF6VG9yMlM3V21aYVphaXNzYUplSDlvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXJKa2xKNUhPZnhhQm1OdnNMdWJTCkt1a0FDZW93Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnSzJ1TmFabFUxbDVSV0ZHMVpZa2x4YW92NUovMEZzMlYKNDQ2UU02Z09OazBDSVFDL0c3VytZNDVGbDgrOXJjeU44VkE1bHpvRjNWN2FOOUR5T2JuWTZxa1pBUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.10.82:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJTTd6cTFZcFEvTDR3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOamMwTXprd01EQTNNQjRYRFRJek1ERXlNakV5TWpBd04xb1hEVEkwTURFeQpNakV5TWpBd04xb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJLbVlmekxyWEsrY1RZd1cKQnBkMmNuUVAvd0Z3alAwM2RvcEgzV2lpZ0V1YVVnTWRIYkVVcGF1Q2hOSnpSSUVKNmVzYWJGMzFEY2x1U0V0QgplV2QrbGUyalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUmlQRjVrSlZtRCt4Zjk1OXZQbW81dkFFc3NzREFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlCbnlGd3B5b2dKUDIwK3JrRnRMZFNaRHBRQzBicSsweTY1RFU5Wm1mS2tPUUloQU04NllwK2hLZVc1N0xwNQpjUmVhUjk4MG9zNGwvNGdzcmJzMzJ6VkE1ZzFhCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUyTnpRek9UQXdNRGN3SGhjTk1qTXdNVEl5TVRJeU1EQTNXaGNOTXpNd01URTVNVEl5TURBMwpXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUyTnpRek9UQXdNRGN3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFURVhxZk4zSHllYVR0VHRqbnd6RnVhaWVnK05RWGJDdnpGM1FJczlqMXQKQU81WmV2cUlHVmxDd1FDT0VkTTd4NjR6SW8wZE1ITCtzcG5rdU5ONDE0ZUlvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVVlqeGVaQ1ZaZy9zWC9lZmJ6NXFPCmJ3QkxMTEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnQzEvTE9RdlBNejZVQWJsOXBZUmNqK2UxRkxZbnJWSHIKVlVHNVoxL1BqVlVDSVFEOUh6YWFSa1lrYjFXbGM3SmxjU2prT3RyU1JoUHBRMlNjbWptQk1yTVVwQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpaUS9Wc3MwMkM2Tm45bUN4S3ZMcGErSTRUTFROekxSTEpQeWRrSG8yd1pvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFcVpoL011dGNyNXhOakJZR2wzWnlkQS8vQVhDTS9UZDJpa2ZkYUtLQVM1cFNBeDBkc1JTbApxNEtFMG5ORWdRbnA2eHBzWGZVTnlXNUlTMEY1WjM2VjdRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=

And karmada-cluster3.yaml:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTnpRME1EQXlORFV3SGhjTk1qTXdNVEl5TVRVeE1EUTFXaGNOTXpNd01URTVNVFV4TURRMQpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTnpRME1EQXlORFV3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFROFl4L0NMZWt2STlJVmhwREUxUFlFcm0yUWF3NWJRTm9KbVdXOTd4TEwKb0tzTlBrT2pENmNwQ3lwZnphSW1oaFVxYlZMRUI3K2R5V1lVSnRVNjA1MFNvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXdjcldwYmhBdjdBVHJmblQ1SVdaCnd2ZmVIenN3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnQlZ6OEJzcllyajdibnhFcWtrb1lpYWFnQ3kvZ1ZSM0UKWTlsK3k4NGIwbDRDSUY4cG95TVN1WGR6dzZWNzBIVDVoUzRLL2JscUFySGs5YUQvb0RoS3lFbFkKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.10.87:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRlZ0F3SUJBZ0lJZnNkckYvME4rTGN3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOamMwTkRBd01qUTFNQjRYRFRJek1ERXlNakUxTVRBME5Wb1hEVEkwTURFeQpNakUxTVRBME5Wb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJQc0FHUXdiWTRFYytzQy8KY1dqNFRpNXc1ZnFFMExDZjJrZ2FoMlorb2UyNU83amlkZG9oNHIwYm1pbzIzazI3RVFKOFplcU5ZemlWRDN0QwpvU3F4TkFLalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUnE3ZjNrU2tLV3VLMkNwTWh0SGRVUEJBbGZwekFLQmdncWhrak9QUVFEQWdOSkFEQkcKQWlFQXQvbXRsaHNqNnR4empRRnh6YmRqNUo5clhtcGZxVk5aMXJlWG85UEtkWXdDSVFERHBSa0FjRk9hTVJDYQpTclRRekZURkN2Ri9mNi9yeWp5Qi9RNGp6VlJQSEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZHpDQ0FSMmdBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwClpXNTBMV05oUURFMk56UTBNREF5TkRVd0hoY05Nak13TVRJeU1UVXhNRFExV2hjTk16TXdNVEU1TVRVeE1EUTEKV2pBak1TRXdId1lEVlFRRERCaHJNM010WTJ4cFpXNTBMV05oUURFMk56UTBNREF5TkRVd1dUQVRCZ2NxaGtqTwpQUUlCQmdncWhrak9QUU1CQndOQ0FBUWVSazY3ZVFDZDl6VGg4V1JOTlF3dUFIUnlEcFRCV3dRV3JlOHpjcUpQClRPVVI2VjkvZnArRGVobTZQdWxxYlR3V3R1OENQTEM3RzN0RFRnRGpTRHVXbzBJd1FEQU9CZ05WSFE4QkFmOEUKQkFNQ0FxUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVhdTM5NUVwQ2xyaXRncVRJYlIzVgpEd1FKWDZjd0NnWUlLb1pJemowRUF3SURTQUF3UlFJaEFLZWIyUDNiR3hYdzU1ZUpTQUI4VFlyVlFVRHI1bVl3Ck1vWXJQWmYvV0x3UkFpQm0xOWl2ZlFsQnNXT0lGREhMUldNdDhKRU0yaDZyQnBlZE9Ja01RR1RWdnc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpxZHIxck9zdTNIaTl6OTc4YXJYendmaWpoaUFxVWE5RTVsQ24rNTV1MzBvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFK3dBWkRCdGpnUno2d0w5eGFQaE9MbkRsK29UUXNKL2FTQnFIWm42aDdiazd1T0oxMmlIaQp2UnVhS2piZVRic1JBbnhsNm8xak9KVVBlMEtoS3JFMEFnPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=

After successful registration, you can check the result:

root@caishu-tmp:~# kubectl get clusters --kubeconfig=/etc/karmada/karmada-apiserver.config
NAME       VERSION        MODE   READY   AGE
default    v1.25.5+k3s2   Push   True    38h
default1   v1.25.5+k3s2   Push   True    37h
default2   v1.25.5+k3s2   Push   True    37h

3. Deploy Erie-Canal in all three clusters

The details of this step can be followed and copied directly from this guide:

After the deployment and cluster registration succeed, the result looks like this:

root@caishu-tmp:~# kubectl get pods -n erie-canal
NAME                                       READY   STATUS    RESTARTS   AGE
erie-canal-repo-78b558c9f8-gx9k5           1/1     Running   0          37h
erie-canal-manager-845c558489-9zhj6        1/1     Running   0          37h
erie-canal-ingress-pipy-798c66c595-b7lm8   1/1     Running   0          37h
root@caishu-tmp:~# kubectl get clusters -A
NAME       REGION    ZONE      GROUP     GATEWAY HOST    GATEWAY PORT   MANAGED   MANAGED AGE   AGE
local      default   default   default                   80                                     36h
cluster1   default   default   default   192.168.10.83   80             True      36h           36h
cluster2   default   default   default   192.168.10.82   80             True      36h           36h
cluster3   default   default   default   192.168.10.87   80             True      36h           36h

A few things to note:

  • I performed the registration from cluster1, so cluster1 is the erie-canal control plane
  • The principle behind this registration is almost the same as Karmada's: there is a CRD called Cluster, and registering a cluster means creating a new instance of it. The difference is that Karmada stores this information in its own etcd, while erie-canal stores it directly in k3s; see the sketch below
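For illustration only, such a registration resource might look roughly like this; the field names are assumptions inferred from the GATEWAY HOST / GATEWAY PORT columns above, so consult the ErieCanal documentation for the authoritative schema:

apiVersion: flomesh.io/v1alpha1
kind: Cluster
metadata:
  name: cluster2
spec:
  gatewayHost: 192.168.10.82   # assumed field: address of the member cluster's ingress
  gatewayPort: 80              # assumed field: port of the member cluster's ingress
  kubeconfig: |
    # kubeconfig of the member cluster, i.e. the same content as karmada-cluster2.yaml above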

4. Deploy osm-edge in cluster 1

For this step, refer to this guide to deploy osm-edge and enable osm-edge for the namespace; a rough sketch of the commands follows.
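A rough sketch of those two actions (assuming the osm CLI shipped with osm-edge is on the PATH and that the consumer lives in the curl namespace created in step 5; the referenced guide may pass additional install flags):

# install the osm-edge control plane into cluster1
osm install
# bring the consumer's namespace under mesh management so its traffic is intercepted
osm namespace add curl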

We only deploy osm-edge on cluster1, because that is the cluster running the curl service. In other words, when we need to manage east-west traffic across clusters, osm-edge must be deployed on the service-consumer side. It works like this: osm-edge intercepts the DNS lookups and REST calls issued by the service, and the sidecar routes each REST request to a concrete service provider according to the traffic-scheduling policy. If the provider and the consumer are not in the same cluster, erie-canal routes the request to the ingress of the provider's cluster, and erie-canal automatically configures the ingress rules so the request reaches the provider's pod.

If the service consumer is not inside one of the managed clusters, osm-edge is not needed; for example, requests coming from the Internet go directly to the ingress.

Here is a brief comparison between erie-canal's multi-cluster traffic scheduling approach and the alternatives. The main alternatives today are:

  1. Solutions based on the Istio service mesh, whose working principle is similar to erie-canal's
  2. Solutions based on Submariner. Submariner places requirements on the container network, whereas erie-canal works with any container network and is therefore more broadly applicable

5. Deploy the curl service in cluster 1

This step is described here: https://github.com/flomesh-io/ErieCanal/wiki/Simple-MCS-Demo-with-3-k3s-clusters-by-ErieCanal#step-5--enable-osm-on-ns-service-consumer

Concretely, run the following on cluster1:

kubectl apply -n curl -f https://raw.githubusercontent.com/cybwan/osm-edge-start-demo/main/demo/multi-cluster/curl.curl.yaml
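A quick check that the consumer came up (the curl namespace comes from the command above):

kubectl get pods -n curl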

6. Deploy the nginx service in clusters 2 and 3 with Karmada

For this step we followed the Karmada quickstart document. Besides creating the nginx deployment through Karmada, we also created the nginx service through Karmada. The commands are as follows (run on cluster1):

kubectl apply -f nginx-deployment.yaml --kubeconfig=/etc/karmada/karmada-apiserver.config
kubectl apply -f nginx-svc.yaml --kubeconfig=/etc/karmada/karmada-apiserver.config
kubectl apply -f nginx-ppp.yaml --kubeconfig=/etc/karmada/karmada-apiserver.config

nginx-deployment.yaml is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx

nginx-svc.yaml is as follows:

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: nginx

nginx-ppp.yaml is as follows:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - default1
        - default2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - default1
            weight: 1
          - targetCluster:
              clusterNames:
                - default2
            weight: 1
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-svc-propagation
spec:
  resourceSelectors:
    - apiVersion: v1
      kind: Service
      name: nginx-svc
  placement:
    clusterAffinity:
      clusterNames:
        - default1
        - default2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - default1
            weight: 1
          - targetCluster:
              clusterNames:
                - default2
            weight: 1
  • This propagates the nginx deployment and the nginx svc to cluster2 and cluster3 (registered in Karmada as default1 and default2)

After this step, you can check on cluster2 and cluster3 whether the deployment and service were created successfully, for example on cluster2:

root@caishu-tmp-2:~# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           25h
root@caishu-tmp-2:~# kubectl get svc
NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes    ClusterIP   10.43.0.1     <none>        443/TCP   40h
nginx-svc     ClusterIP   10.43.94.88   <none>        80/TCP    22h

Deploying the deployment and the service through Karmada is equivalent to deploying the service provider manually.

7. Create a ServiceExport for the nginx service in clusters 2 and 3

For this step we followed this document. The yaml used is below (note that it must be applied with kubectl apply on cluster2 and on cluster3 separately):

apiVersion: flomesh.io/v1alpha1
kind: ServiceExport
metadata:
  namespace: default
  name: nginx-svc
spec:
  serviceAccountName: "*"
  rules:
    - portNumber: 80
      path: "/"
      pathType: Prefix
  • In theory, this kind of custom resource (CRD) configuration could also be propagated through Karmada; see the sketch below
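A sketch of that idea (untested in this demo; it assumes the ServiceExport CRD is registered with the karmada-apiserver, and that erie-canal has already installed it in the member clusters):

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-svc-export-propagation
spec:
  resourceSelectors:
    - apiVersion: flomesh.io/v1alpha1
      kind: ServiceExport
      name: nginx-svc
  placement:
    clusterAffinity:
      clusterNames:
        - default1
        - default2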

After the ServiceExport is created, you can also check the ServiceImport resources that erie-canal creates automatically (refer to this document); a quick check is shown below.
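For example, the imported services can be listed with (assuming the CRD's plural name serviceimports, as installed by erie-canal):

kubectl get serviceimports -A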

8. Create a GlobalTrafficPolicy in the cluster where curl runs (cluster 1)

For this step, refer to this document. The yaml used is as follows (create it on cluster1):

apiVersion: flomesh.io/v1alpha1
kind: GlobalTrafficPolicy
metadata:
  namespace: default
  name: nginx-svc
spec:
  lbType: ActiveActive

Additional notes:

  • This traffic policy applies to east-west outbound traffic managed by osm-edge, that is, requests initiated by the service consumer
  • Besides the active-active policy used in this example, erie-canal also supports a failover policy. The active-active policy can be thought of as basic round-robin load balancing. The failover policy is normally combined with a Karmada disaster-recovery setup; for users familiar with traditional load balancers, it is roughly "F5 BIG-IP GTM applied to east-west traffic in Kubernetes". In fact, erie-canal's failover mode is already used in commercial users' disaster-recovery systems for automatic traffic switching across multiple sites, data centers, and clusters.

9. Access curl and verify the result

This step can follow this guide. However, since cluster2 and cluster3 return identical nginx output that is hard to tell apart, I confirmed that erie-canal's active-active policy load-balanced the traffic from curl across the nginx instances on cluster2 and cluster3 by watching the nginx pod logs; a rough sketch of the commands follows.
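A rough sketch of the verification (the deployment name curl and the service URL are assumptions based on the names used above; the exact hostname visible to the consumer depends on the erie-canal/osm-edge setup, so check the referenced guide):

# on cluster1: send a few requests from inside the curl pod
kubectl exec -n curl deploy/curl -- curl -s http://nginx-svc.default/

# on cluster2 and cluster3: watch the nginx access logs to see which cluster served each request
kubectl logs -l app=nginx -f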