
fix: add kube-vip toleration to match node taint in harvester-cloud-provider #202

Merged
1 commit merged into harvester:master on Jan 15, 2024

Conversation

@Yu-Jack (Contributor) commented on Jan 11, 2024

Description

After setting up a guest cluster with the configuration mentioned in the Test Plan (control plane and etcd in a pool separate from the worker pool), creating a LoadBalancer-type service with the Harvester cloud provider annotation leaves the service's EXTERNAL-IP stuck in pending.

Per the discussion in harvester/harvester#4678, the root cause is that the kube-vip toleration doesn't match any node taint.

Solution

Add one more toleration to the kube-vip DaemonSet, as sketched below. It should work in all cases listed in the Test Plan.
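For illustration, the extra toleration is along these lines; this is only a sketch of the idea (letting kube-vip schedule onto tainted control-plane/etcd-only nodes), and the exact key and effect added by this PR are in the chart diff:

# sketch of the kube-vip DaemonSet tolerations (illustrative, not the literal diff)
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/etcd
    operator: Exists
    effect: NoExecute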

Bump PR is #203

Issue

harvester/harvester#4678

Test Plan

Common setup

  1. We choose the Harvester cloud provider.
  2. Set up cloud-config on each node for later use, because we need to reinstall harvester-cloud-provider manually:
write_files:
  - encoding: b64
    content: {harvester's kubeconfig; the cluster namespace should be the same as the pool you created (base64 encoded)}
    owner: root:root
    path: /etc/kubernetes/cloud-config
    permissions: '0644'
  3. After the pools are created, remove harvester-cloud-provider in Apps > Installed Apps (kube-system namespace).
  4. Add a new chart repository in Apps > Repositories, using https://charts.harvesterhci.io.
  5. Go to Apps > Charts to install harvester-cloud-provider and select version 0.2.3 (a rough CLI equivalent is sketched below).
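For reference, roughly equivalent CLI steps (the chart name under that repository is assumed to be harvester-cloud-provider; the test itself used the Apps UI):

# hypothetical CLI equivalent of the Apps > Repositories / Charts steps above
helm repo add harvester https://charts.harvesterhci.io
helm repo update
helm install harvester-cloud-provider harvester/harvester-cloud-provider \
  --namespace kube-system --version 0.2.3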

Different Guest Cluster Pool Layouts

  1. control-plane, ETCD and worker in same pool
  2. control-plane and ETCD in A pool, worker in B pool
  3. control-plane in A pool, ETCD in B pool and worker in C pool

After installing harvester-cloud-provider, the DaemonSet kube-system/kube-vip should run in all three cases above in the guest cluster. Then create the following LoadBalancer-type service and check whether an EXTERNAL-IP is assigned.


P.S. Check the helloworldserver-lb service in the guest cluster; both the GUI and the command line look okay (see the kubectl example after the manifest).

# example svc
apiVersion: v1
kind: Service
metadata:
  name: helloworldserver-lb
  annotations:
    cloudprovider.harvesterhci.io/ipam: dhcp
    cloudprovider.harvesterhci.io/healthcheck-port: "80"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - name: test
      protocol: TCP
      port: 8088
      targetPort: 80
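A hypothetical command-line check of the service above (run against the guest cluster's kubeconfig):

# watch until EXTERNAL-IP changes from <pending> to an assigned address
kubectl get service helloworldserver-lb --watch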

@Yu-Jack (Contributor, Author) commented on Jan 11, 2024

If I need to bump the version, does that mean I need to open another PR against the release branch?

@bk201 (Member) commented on Jan 12, 2024

Can we skip the tarball? I believe the content is the same; only the timestamp is different.

@Yu-Jack (Contributor, Author) commented on Jan 12, 2024

@bk201 Actually, the tarball is different from the previous version. You can extract the tar to compare; the only difference is the DaemonSet toleration.

@bk201 (Member) commented on Jan 15, 2024

Thanks. If the tarball content is different, should we bump its version?

@Yu-Jack (Contributor, Author) commented on Jan 15, 2024

> Thanks. If the tarball content is different, should we bump its version?

@bk201 Yeah, after checking, I think we need to bump its version. But I'm not sure how you update the gh-pages branch index.html. I ran helm repo index harvester-cloud-provider; however, its result seems different from the gh-pages branch index.html. Did I miss something?

Okay, it uses chart-releaser-action to release.

@Yu-Jack (Contributor, Author) commented on Jan 15, 2024

BTW, it seems I need to open another PR against the release branch to bump the harvester-cloud-provider version to 0.2.3, right?

@bk201 (Member) left a review comment

LGTM, thanks!

@bk201 bk201 self-requested a review January 15, 2024 06:54
@bk201 (Member) commented on Jan 15, 2024

Let's try overriding kube-vip's tolerations in the cloud provider's values file. This way, I guess we don't need to bump the dependency chart.
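A minimal sketch of such an override, assuming the parent chart exposes the kube-vip subchart's values under a kube-vip key (the exact key and entries should follow the subchart's values schema):

# values.yaml of the harvester-cloud-provider parent chart (illustrative only)
kube-vip:
  tolerations:
    # same kind of entries as sketched in the Solution section above
    - key: node-role.kubernetes.io/etcd
      operator: Exists
      effect: NoExecute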

@Yu-Jack (Contributor, Author) commented on Jan 15, 2024

@bk201 After re-running the template with only the parent chart's values.yaml modified, it renders as expected (a way to check this locally is sketched below).

And the test cases pass.
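A hypothetical way to double-check the rendered tolerations locally (the chart path is an assumption):

# render the parent chart and inspect the kube-vip tolerations
helm template ./charts/harvester-cloud-provider | grep -A 6 tolerations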

@Yu-Jack (Contributor, Author) commented on Jan 15, 2024

Override reference: Helm docs, "Subcharts and Global Values".

@w13915984028 (Member) commented:
I had just planned to fix this issue, but #202 and #203 are already ready for merge. Thanks @Yu-Jack!

@bk201 Let's merge those two PRs, then I will rebase rancher/charts#3316 with v0.2.3.

@w13915984028 w13915984028 self-requested a review January 15, 2024 10:48
@w13915984028 (Member) left a review comment

LGTM, thanks.

@w13915984028 w13915984028 merged commit 54d9b76 into harvester:master Jan 15, 2024
2 checks passed
@Yu-Jack Yu-Jack deleted the fix-4768 branch January 17, 2024 07:19