Allow configurable docker network for kind cluster nodes #273
/kind feature |
We definitely want this. I think we're going to put it in the networking config and start having an automated default. cc @aojea @neolit123 Strawman:
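The strawman config itself did not survive the thread export. A minimal sketch of what such a networking field might look like, assuming a hypothetical `dockerNetwork` key (not an actual kind option):

```sh
# Hypothetical sketch only: `dockerNetwork` is an assumed field name,
# not a real kind config option. It illustrates pinning the docker
# network that the node containers attach to.
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  dockerNetwork: my-shared-network  # assumed field
EOF
```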
|
SGTM. Do we need to have the config field? When a cluster is created we can auto-manage a network with the same name, or prefixed similarly, e.g. `kind-network-kind`, `kind-network-mycluster`. |
I think we need a field defaulting to the generated one; that way, e.g., federation can make multiple clusters on the same network by actually setting the field.
|
ok, makes sense. |
It seems that docker has an option to populate the /etc/hosts file; that can be useful to get rid of the loopback address in resolv.conf while keeping node name resolution.
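For illustration, the Docker option being referred to is presumably `--add-host`, which injects static entries into the container's /etc/hosts (the name and address below are made up):

```sh
# Add a static /etc/hosts entry so a node name resolves without
# depending on the loopback DNS entry in resolv.conf.
# kind-control-plane / 172.18.0.2 are arbitrary example values.
docker run --rm --add-host kind-control-plane:172.18.0.2 busybox cat /etc/hosts
```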
|
the only problem with |
This turned out to have a few more issues than we expected, due to non-default docker networks having different behavior. This may slip to 0.5 as we're nearing the 0.4 release, but it's definitely something we want. |
Hi! Can someone disambiguate the use cases here between this and the #278 issue? |
@jayunit100 there are different things regarding networking. One is the CNI plugin used by the kubernetes cluster: kind installs its own CNI by default, but you can disable it and install your preferred CNI plugin once kind finishes creating the cluster.
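As a concrete example, disabling kind's default CNI is done via `disableDefaultCNI` in the cluster config (the cluster name here is illustrative):

```sh
# Create a cluster without kind's built-in CNI so another plugin
# (e.g. Calico) can be installed after creation.
cat <<EOF | kind create cluster --name no-default-cni --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
EOF
```
|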
So docker0 is only being used for node IP addresses in the use case for this issue? Thanks for clarifying! I was confused :). Curious what the use case is for not using docker0 at that level ... after all, kind as an abstraction for testing k8s is sufficient as long as the k8s-specific stuff isn't impacted by Docker's implementation as a hypervisor for virtual nodes, right? |
Mostly get it now... maybe change the title of this issue to "use non-docker0 interface for kubelet IPs" (although imprecise, I think it gets the point across) so that it is clear what we mean by cluster :):)... thanks again! The CNI feature for kind is definitely awesome; I want to make sure people know that it works as-is :). |
There are Calico folks using kind for testing, as you can see in this Slack conversation: https://kubernetes.slack.com/archives/CEKK1KTN2/p1570036710217000 ; maybe you can bring up this conversation in our Slack channel. The main problem with using a custom bridge with docker is that it modifies the DNS behavior, using an embedded DNS server: https://docs.docker.com/v17.09/engine/userguide/networking/configure-dns/
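A quick way to observe the behavior difference being described: on a user-defined bridge, Docker puts its embedded DNS server (127.0.0.11) into the container's resolv.conf, while the default bridge derives the resolver config from the host (the network name below is arbitrary):

```sh
# Default bridge: resolv.conf is derived from the host's.
docker run --rm busybox cat /etc/resolv.conf

# User-defined bridge: resolv.conf points at the embedded DNS at 127.0.0.11.
docker network create demo-bridge
docker run --rm --network demo-bridge busybox cat /etc/resolv.conf
docker network rm demo-bridge
```
|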
Squashed commit of the following:
commit 00521c4
Author: keymone <[email protected]>
Date: Mon Oct 21 11:43:25 2019 +0100
Allow specifying network name to use in docker provisioner
This would be nice; I'm currently facing an issue of not being able to resolve an internal image registry which is behind my org's VPN. What work is left regarding this? Maybe I can help! |
kind uses a specific network now in HEAD. As it currently stands kind will not delete any networks, so you can just pre-create the `kind` network. We need to revisit how that works a bit though WRT IPv6 in a follow-up PR before moving forward. |
#1538 will make it possible to do this. You shouldn't actually need this in nearly all cases though: kind is switching to ensure and use a "kind" network with all of the features of a user-defined network. If you pre-create this network it will use it as you configured; it does not delete networks. |
Most of the problem initially was just that user-defined networks are a breaking change in docker vs the default bridge: they have different DNS in ways that don't trivially work with kind. We've fixed that and always use one now. The remaining issues are that completely arbitrary networks can be ... very strange. This network is a standard bridge, with an IPv6 subnet picked out of ULA.
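A sketch of pre-creating the `kind` network in the shape described (a standard bridge plus an IPv6 ULA subnet); the exact subnet below is an arbitrary ULA pick for illustration, not necessarily what kind would generate:

```sh
# Pre-create the "kind" network before running `kind create cluster`;
# kind will use an existing network with this name rather than creating one.
# fd00:dead:beef::/64 is just an example ULA range.
docker network create kind \
  --driver bridge \
  --ipv6 --subnet fd00:dead:beef::/64
```
|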
Instead of using two completely separate subnets (`172.x.0.0/16` is used by Kind and I had arbitrarily chosen `192.168.2.0/24` for MetalLB), use a custom Docker network for `kind` instead. I needed to create the Docker network myself rather than letting `kind` do it, because otherwise the subnet IP range will not necessarily be fixed (see https://github.com/kubernetes-sigs/kind/blob/add83858a0addea5899a88003a598399a8a36747/pkg/cluster/internal/providers/docker/network.go#L94). To allow `kind` to use the custom Docker network, I am relying on `KIND_EXPERIMENTAL_DOCKER_NETWORK`. See kubernetes-sigs/kind#273 and kubernetes-sigs/kind#1538. For documentation on `docker network create`, see https://docs.docker.com/engine/reference/commandline/network_create/.

| Subnet           | Usage                |
|------------------|----------------------|
| `172.100.0.0/24` | Kubernetes nodes     |
| `172.100.1.0/24` | MetalLB address pool |
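A hedged sketch of the setup that commit message describes: create the `kind` network with a fixed subnet up front, then point kind at it via the experimental variable. The /16 below is an assumption sized to cover both /24 ranges in the table:

```sh
# Fixed parent subnet so the node range (172.100.0.0/24) and the
# MetalLB pool (172.100.1.0/24) stay predictable across recreations.
docker network create kind --subnet 172.100.0.0/16

# Point kind at the pre-created network (experimental, unsupported).
KIND_EXPERIMENTAL_DOCKER_NETWORK=kind kind create cluster
```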
Hi @BenTheElder, I created a bridge which is similar to kind's, just with a different IPAM config, and I also set the env variable KIND_EXPERIMENTAL_DOCKER_NETWORK to that bridge. When issuing "kind create cluster ..." I saw the following stdout on the screen.
After the kind cluster was created, it still uses the "kind" network. I tried to identify why and didn't get any clues; do you know why? Thanks! |
@redbrick9 It works as specified.
A new network got created in docker.
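To verify which network the node containers actually joined (the container name below assumes the default cluster name, `kind`):

```sh
# A "kind" network should be listed alongside the default bridge.
docker network ls

# Print each network the control-plane node is attached to, with its IP.
docker inspect \
  -f '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{$net.IPAddress}}{{"\n"}}{{end}}' \
  kind-control-plane
```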
|
Any chance we get this functionality as a first-class flag? |
This is pretty bug-prone, and you can pre-create the kind network with your (unsupported, potentially broken) settings instead. |
I kindly (pun intended) disagree. I am using the `--network` flag that minikube provides.
This allows me to run my minikube-based kubernetes clusters (plural) in any docker networks (plural) that I pre-configured, or it even allows me to create a bridged docker network through minikube itself. I am not asking kind here to start managing docker bridged networks. The main use case for me is running multiple clusters across networks I pre-configured. |
Subnets come from docker IPAM settings, which are already user-configurable, OR you can create the `kind` network yourself first.
https://kind.sigs.k8s.io/docs/contributing/project-scope/ Anyhow, you can connect to additional networks with `docker network connect`. To change the default network in an experimental, unsupported way you can use `KIND_EXPERIMENTAL_DOCKER_NETWORK`.
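For example, attaching an already-running kind node to an additional pre-created network (network and container names are illustrative):

```sh
# The node keeps its original "kind" network attachment and gains
# a second interface on the extra network.
docker network create extra-net
docker network connect extra-net kind-control-plane
```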
There are demos of this sort of thing in the kubernetes project using KIND with the existing functionality: https://github.com/kubernetes-sigs/mcs-api/blob/master/scripts/up.sh
This is not true. See for example #2917 |
Or people can create their own plugins: https://github.com/aojea/kind-networking-plugins |
Regarding #2917: the only reason people connect to a second network is because they were forced to by the arbitrary choice of hard-coding a bridged docker network. Anyway... the experimental flag works like a charm and covers my use case. It's a bit strange you refuse to make this a first-class command line flag... Hard-coding choices like the name and choice of a docker network is bad software design, but I'll leave it there. FWIW... hereby my attempt to create a single abstraction layer for my multi-cluster needs, with support for minikube/k3s/kind, where kind is the only one going "experimental" |
This is an example of the challenging bugs that crop up due to users with custom networking that we're not supporting. We simply can't prioritize that. That is why the existing feature is clearly named "EXPERIMENTAL" and will stay that way for now.
Frankly, this approach is not helpful and I'm disinclined to spend further energy here. The design and implementation are not "arbitrary" just because you have not looked into the history and context behind them. KIND used the default docker bridge for the first year, before we ran into serious limitations exploring proposed fixes for clusters surviving host reboots, which was NOT an originally intended functionality we even tested, because KIND was created to test Kubernetes, NOT to test applications. But there was high user demand anyhow, and minikube hadn't adopted the kind image yet and k3d didn't exist, so we spent a lot of effort adapting to the demands for long-lived application development clusters. In the process we settled on a substitute for the standard docker bridge network that closely mimics it with a minimum of changes. Because we have to configure it somewhat, it lives under the predictable "kind" name, so test containers can be run alongside it, and otherwise it behaves very closely to how things were before this change.
minikube is a sister project in the same organization; it is explicitly not a goal to attempt 100% overlap between them. KIND is a lightweight tool focused on developing bleeding-edge Kubernetes, with a secondary focus on usage for other functionality, which you can find more about in our contributing guide / docs. It is important to our existing users and use cases that the tool remain small and well maintained and keep up with the latest changes in the container ecosystem, Linux, and Kubernetes, which is where most of our energy goes, e.g. #3223.
Again, you haven't bothered to look at how we settled on the current approach and you're being rude.
Again:
|
This issue is closed. If anyone would like to propose a new feature with a considered design proposal: https://kind.sigs.k8s.io/docs/contributing/getting-started/. To be considered for addition, it will also first need concrete use cases that cannot be handled with existing functionality. AFAIK there aren't really any; e.g. joshuaspence/homelab@72b9038 references this, but that can entirely be accomplished on the standard network instead (will leave a comment). Multi-cluster testing is also referenced, but that works fine on a single bridge network. A few seed questions for anyone that does choose to explore this:
|
It would be nice to be able to specify the network that the cluster uses.