On this page we document problems we encountered when moving to a proper k8s setup. The idea is to collect them and report them upstream to the respective projects so that things might improve in the future.
These findings are not yet reported upstream.
Traefik needs some setup before it can work with Let’s Encrypt. Unfortunately, neither K3s nor the Traefik Helm chart documents how exactly to achieve that.
Traefik "native" solution:
Traefik needs a certificate resolver. The only way to configure one with the Helm chart is to add additionalArguments (global Traefik parameters). So you need to add a HelmChartConfig and apply it to the cluster; K3s will then reconfigure Traefik automatically.
Example:
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      web:
        redirectTo: websecure
    additionalArguments:
      - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
      - --certificatesresolvers.default.acme.httpchallenge=true
      - --certificatesresolvers.default.acme.httpchallenge.entrypoint=web
      - [email protected]
      - --certificatesresolvers.default.acme.storage=/data/acme.json
```
As soon as you save the file, k3s applies it and Traefik is restarted. In the next step, all Ingress resources need to be annotated with:

```yaml
annotations:
  traefik.ingress.kubernetes.io/router.tls.certresolver: default
```
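As a hedged illustration, an Ingress using this resolver could look roughly like the following sketch (the name, host, and service are made-up placeholders, not values from this setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example                # placeholder name
  annotations:
    # Tell Traefik to obtain a certificate via the "default" resolver
    traefik.ingress.kubernetes.io/router.tls.certresolver: default
spec:
  rules:
    - host: example.test.faforever.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example            # placeholder service
                port:
                  number: 80
  tls:
    - hosts:
        - example.test.faforever.com     # placeholder host
```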
This is a tutorial, not an issue that needs reporting.
- Set up an OAuth client on an OIDC provider (FAF uses Ory Hydra).
- Add the following options to the kube-apiserver (use matching values; watch out for trailing slashes / exact matches):

  ```
  --oidc-issuer-url=https://hydra.test.faforever.com/ --oidc-client-id=faf-k8s
  ```

  NixOS servers declare this in:

  ```nix
  services.k3s.extraFlags = toString [
    "--kube-apiserver-arg=oidc-issuer-url=https://hydra.test.faforever.com/"
    "--kube-apiserver-arg=oidc-client-id=faf-k8s"
  ];
  ```
- Create a ClusterRoleBinding such as:

  ```shell
  kubectl create clusterrolebinding oidc-cluster-admin --clusterrole=cluster-admin --user='https://hydra.test.faforever.com/#76365'
  ```
- The user identifier is built out of `<issuer-url>#<OAuth ID token subject>`. In the example above, `76365` is a FAF user id.
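To make the identifier format concrete, here is a minimal sketch (plain Python, using the example values from above) of how the Kubernetes user name is composed:

```python
# The kube-apiserver derives the Kubernetes user name from the OIDC
# issuer URL and the subject claim of the ID token, joined by '#'.
issuer_url = "https://hydra.test.faforever.com/"
subject = "76365"  # OAuth ID token subject; for FAF this is the user id

user = f"{issuer_url}#{subject}"
print(user)  # https://hydra.test.faforever.com/#76365
```

This string must match the `--user` passed to `kubectl create clusterrolebinding` exactly, including the trailing slash of the issuer URL.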
Not all of these findings are reported upstream yet.
- All configuration and secrets are in one file: `config.json`. Therefore we can’t use the regular split of ConfigMaps and Secrets; everything must be a Secret.
- The `config.json` cannot be mounted into the pod from a Secret; the installer fails when doing so:

  ```
  warn: NodeBB Setup Aborted. Error: EROFS: read-only file system, open '/usr/src/app/config.json'
  ```
- Workaround: build the `config.json` via a script in an init container from ConfigMap and Secret values.
- Installing and activating plugins inside the container cannot be done before the container starts and requires running build processes. We made pull request #10036 to improve this.
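The init-container workaround described above could be sketched roughly as follows (all resource names, keys, and paths are hypothetical placeholders; the actual FAF setup may differ):

```yaml
spec:
  volumes:
    - name: config            # shared writable volume for the generated file
      emptyDir: {}
  initContainers:
    - name: build-config      # placeholder name
      image: alpine:3         # any image with a shell
      command: ["/bin/sh", "-c"]
      args:
        # Render config.json from environment variables injected from a
        # ConfigMap and a Secret (both names are hypothetical).
        - |
          cat > /config/config.json <<EOF
          {
            "url": "${NODEBB_URL}",
            "secret": "${NODEBB_SECRET}"
          }
          EOF
      envFrom:
        - configMapRef:
            name: nodebb-config   # hypothetical ConfigMap
        - secretRef:
            name: nodebb-secret   # hypothetical Secret
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: nodebb
      image: nodebb-image       # placeholder image reference
      volumeMounts:
        - name: config
          mountPath: /usr/src/app/config.json
          subPath: config.json  # mount only the generated file
```

Because the file lives on a writable `emptyDir` volume instead of a read-only Secret mount, the NodeBB installer can open it for writing and the EROFS error above does not occur.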