
[QUESTION/HELP] DNS-Resolution inside Pods is not working (related to CoreDNS File-Plugin?) #1464

Closed
mschreiber-npo opened this issue Jul 8, 2024 · 2 comments
Labels
question Further information is requested

Comments

@mschreiber-npo

DNS-Resolution of external domains not working inside pods

Since today (2024-07-08) I've had a problem with DNS resolution of external domains inside pods running in a k3d cluster.

As soon as I comment out the file plugin inside the coredns-custom ConfigMap, the problem goes away. But without the file plugin, DNS resolution of host.k3d.internal cannot work.
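
For context, my best guess at why this breaks things (hedged, since I don't know CoreDNS well): the *.override files are imported into the .:53 server block of the Corefile (shown further down under "Workaround"), and a file directive without an explicit zone argument inherits that block's zone, i.e. the root zone ".". The file plugin runs before forward (plugin order is fixed at compile time) and has no fallthrough option, so CoreDNS answers authoritatively for every name and returns NXDOMAIN for anything not listed in additional-dns.db:

.:53 {
    ...
    forward . /etc/resolv.conf
    ...
    import /etc/coredns/custom/*.override   # expands to: file /etc/coredns/custom/additional-dns.db
}
# "file <dbfile>" with no zone argument behaves like "file <dbfile> ."
# -> authoritative for ".", no fallthrough, so google.com never reaches forward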

K3D Version:

❯ k3d version
k3d version v5.7.0
k3s version v1.29.6-k3s1 (default)

The Problem:

I can't resolve any external domain; here is an example with google.com:

❯ kubectl exec -i -t dnsutils -- nslookup google.com
Server:	10.43.0.10
Address:	10.43.0.10#53

** server can't find google.com: NXDOMAIN

But host.k3d.internal is working just fine (which means CoreDNS is doing its thing):

❯ kubectl exec -i -t dnsutils -- nslookup host.k3d.internal
Server:	10.43.0.10
Address:	10.43.0.10#53

Name:	host.k3d.internal
Address: 172.21.0.1
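
A hypothetical way to confirm who is answering (I didn't run this for the report, so treat it as a sketch): dig from the same pod should show the NXDOMAIN together with the SOA record from the additional-dns.db zone file (shown below) in the AUTHORITY section, i.e. CoreDNS answering authoritatively for "." instead of forwarding:

❯ kubectl exec -i -t dnsutils -- dig google.com
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, ...
;; AUTHORITY SECTION:
.    3600    IN    SOA    a.root-servers.net. nstld.verisign-grs.com. 2024061200 1800 900 604800 86400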

How to "fix" the Problem:

After a bunch of trial and error (because I really don't know CoreDNS that well), it turned out that the problem seems to be rooted in the CoreDNS configuration. When I remove the file plugin from the coredns-custom ConfigMap, it works again:

  1. Proof that it does not work
❯ kubectl exec -i -t dnsutils -- nslookup google.com
Server:	10.43.0.10
Address:	10.43.0.10#53

** server can't find google.com: NXDOMAIN

command terminated with exit code 1
  2. Check the coredns-custom ConfigMap with the file plugin
❯ kubectl -n kube-system describe cm coredns-custom
Name:         coredns-custom
Namespace:    kube-system
Labels:       objectset.rio.cattle.io/hash=a3e4960ef9f39950a366d81f48be07a01f218c1e
Annotations:  objectset.rio.cattle.io/applied:
                H4sIAAAAAAAA/4yPQevTQBBHv8oy52SzaWpsAoJ/PImoB8GTl8nuJF2TzJSdbURKv7sERQSp/o/D8Hu8dwO8xM+UNApDD1sNBQTMCP0NMISYozAuZWC1YYAeXpumdc68/WA+fXwyaJ...
              objectset.rio.cattle.io/id:
              objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
              objectset.rio.cattle.io/owner-name: coredns-custom
              objectset.rio.cattle.io/owner-namespace: kube-system

Data
====
additional-dns.db:
----
@ 3600 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2024061200 1800 900 604800 86400
host.k3d.internal IN A 172.21.0.1
k3d-test-server-0 IN A 172.21.0.2
k3d-test-serverlb IN A 172.21.0.3

hosts.override:
----
file /etc/coredns/custom/additional-dns.db


BinaryData
====

Events:  <none>
  3. Edit the coredns-custom ConfigMap and comment out the file plugin
❯ kubectl -n kube-system edit cm coredns-custom
configmap/coredns-custom edited
  4. Proof that the file plugin is commented out in the coredns-custom ConfigMap
❯ kubectl -n kube-system describe cm coredns-custom
Name:         coredns-custom
Namespace:    kube-system
Labels:       objectset.rio.cattle.io/hash=a3e4960ef9f39950a366d81f48be07a01f218c1e
Annotations:  objectset.rio.cattle.io/applied:
                H4sIAAAAAAAA/4yPQevTQBBHv8oy52SzaWpsAoJ/PImoB8GTl8nuJF2TzJSdbURKv7sERQSp/o/D8Hu8dwO8xM+UNApDD1sNBQTMCP0NMISYozAuZWC1YYAeXpumdc68/WA+fXwyaJ...
              objectset.rio.cattle.io/id:
              objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
              objectset.rio.cattle.io/owner-name: coredns-custom
              objectset.rio.cattle.io/owner-namespace: kube-system

Data
====
additional-dns.db:
----
@ 3600 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2024061200 1800 900 604800 86400
host.k3d.internal IN A 172.21.0.1
k3d-test-server-0 IN A 172.21.0.2
k3d-test-serverlb IN A 172.21.0.3

hosts.override:
----
#file /etc/coredns/custom/additional-dns.db


BinaryData
====

Events:  <none>
  5. Restart CoreDNS
❯ kubectl -n kube-system rollout restart deployment coredns
deployment.apps/coredns restarted
  6. Test again (working)
❯ kubectl exec -i -t dnsutils -- nslookup google.com
Server:	10.43.0.10
Address:	10.43.0.10#53

Non-authoritative answer:
Name:	google.com
Address: 172.217.16.206

How to Reproduce:

  1. Create a new cluster
❯ k3d cluster create test
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-test'
INFO[0000] Created image volume k3d-test-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-test-tools'
INFO[0001] Creating node 'k3d-test-server-0'
INFO[0001] Creating LoadBalancer 'k3d-test-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.21.0.1 address
INFO[0001] Starting cluster 'test'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-test-server-0'
INFO[0005] All agents already running.
INFO[0005] Starting helpers...
INFO[0005] Starting node 'k3d-test-serverlb'
INFO[0011] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0013] Cluster 'test' created successfully!
INFO[0013] You can now use it like this:
kubectl cluster-info

2. Create a pod with dnsutils (for testing)

❯ kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
pod/dnsutils created
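
For reference, the applied manifest is roughly the following (paraphrased from the Kubernetes docs; the exact image tag may differ):

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command: ["sleep", "infinity"]   # keep the pod alive so we can exec nslookup/dig in it
    imagePullPolicy: IfNotPresent
  restartPolicy: Always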

3.1. Look up google.com using nslookup inside the dnsutils pod:

❯ kubectl exec -i -t dnsutils -- nslookup google.com
Server:	10.43.0.10
Address:	10.43.0.10#53

** server can't find google.com: NXDOMAIN

3.2. Look up host.k3d.internal using nslookup inside the dnsutils pod:

❯ kubectl exec -i -t dnsutils -- nslookup host.k3d.internal
Server:	10.43.0.10
Address:	10.43.0.10#53

Name:	host.k3d.internal
Address: 172.21.0.1

Workaround:

As a workaround I've added the hosts from the zone file in the coredns-custom ConfigMap to the hosts plugin in the coredns ConfigMap:

❯ kubectl -n kube-system edit cm coredns
configmap/coredns edited
❯ kubectl -n kube-system describe cm coredns
Name:         coredns
Namespace:    kube-system
Labels:       objectset.rio.cattle.io/hash=bce283298811743a0386ab510f2f67ef74240c57
Annotations:  objectset.rio.cattle.io/applied:
                H4sIAAAAAAAA/4yQwWrzMBCEX0Xs2fEf20nsX9BDybH02lMva2kdq1Z2g6SkBJN3L8IUCiVtbyNGOzvfzoAn90IhOmHQcKmgAIsJQc+wl0CD8wQaSr1t1PzKSilFIUiIix4JfRoXHQ...
              objectset.rio.cattle.io/id:
              objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
              objectset.rio.cattle.io/owner-name: coredns
              objectset.rio.cattle.io/owner-namespace: kube-system

Data
====
Corefile:
----
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    hosts /etc/coredns/NodeHosts {
      ttl 60
      reload 15s
      fallthrough
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
    import /etc/coredns/custom/*.override
}
import /etc/coredns/custom/*.server

NodeHosts:
----
172.21.0.2 k3d-test-server-0
172.21.0.1 host.k3d.internal
172.21.0.2 k3d-test-server-0
172.21.0.3 k3d-test-serverlb


BinaryData
====

Events:  <none>
❯ kubectl -n kube-system rollout restart deployment coredns
deployment.apps/coredns restarted
❯ kubectl exec -i -t dnsutils -- nslookup google.com
Server:	10.43.0.10
Address:	10.43.0.10#53

Non-authoritative answer:
Name:	google.com
Address: 142.250.181.238
❯ kubectl exec -i -t dnsutils -- nslookup host.k3d.internal
Server:	10.43.0.10
Address:	10.43.0.10#53

Name:	host.k3d.internal
Address: 172.21.0.1
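
If my understanding is correct, the reason the workaround behaves differently is the fallthrough directive: the hosts plugin block in the Corefile above passes unmatched names on to the next plugin, while the file plugin answers authoritatively and has no fallthrough option:

hosts /etc/coredns/NodeHosts {
    ttl 60
    reload 15s
    fallthrough   # unmatched names (e.g. google.com) continue on, eventually hitting "forward ."
}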
@mschreiber-npo mschreiber-npo added the question Further information is requested label Jul 8, 2024
@iwilltry42
Member

Hey 👋
This was fixed (feature reverted) in v5.7.1 (see #1462)

If you think this is a different issue or you're still facing this with v5.7.1, feel free to reopen 👍

@mschreiber-npo
Author

> Hey 👋 This was fixed (feature reverted) in v5.7.1 (see #1462)
>
> If you think this is a different issue or you're still facing this with v5.7.1, feel free to reopen 👍

Thank you very much! I'll try again with the newer version!
Sorry for the inconvenience!
