onedsc

Reputation: 1

DNS Addon Kubernetes CentOS 7 Cluster

I have been struggling getting the DNS addon working on a CentOS 7.2 cluster. I installed the cluster using the directions here: http://severalnines.com/blog/installing-kubernetes-cluster-minions-centos7-manage-pods-services

In this configuration the master is running etcd, kube-scheduler, kube-apiserver, and kube-controller-manager. The nodes are running docker, kubelet, kube-proxy, and flanneld. The cluster is working fine in this configuration: pods and services are all working. The next step is trying to enable DNS.

Note: This cluster is not using certificates for authentication.

There are several "guides" for how to do this, but none of them seem to work on this type of cluster.

First, can you please help me clear up some confusion: where do the DNS addon containers run?

Here is what I have tried so far:

Kubernetes Version: Vanilla install from yum.

# kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"a4463d9a1accc9c61ae90ce5d314e248f16b9f05", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"a4463d9a1accc9c61ae90ce5d314e248f16b9f05", GitTreeState:"clean"}

In the skydns-rc.yaml file below I have replaced the template variables: 1 replica, and DNS_DOMAIN set to "cluster.local". I also added one more argument to the kubedns container (the "/kube-dns" command), "--kube-master-url=http://10.2.1.245:8080", per some of the suggestions here on Stack Overflow.

SkyDNS-rc.yaml (pointing to v18 of kube-dns)

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v18
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v18
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v18
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v18
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.6
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=cluster.local
        - --dns-port=10053
        - --kube-master-url=http://10.2.1.245:8080
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.

On each of the nodes (master and 3 minions) I have updated the /etc/kubernetes/conf file, adding the DNS section at the end (full file posted for completeness).

Do I need to add these if I am using the replication controller above?

/etc/kubernetes/conf

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"

# DNS Add-on
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.254.100.1"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1

Here is what I am seeing when deploying KubeDNS.

[root@dcook-kube-c1 dcook]# kubectl create -f kube-fun/skydns-rc.yaml
replicationcontroller "kube-dns-v18" created

[root@dcook-kube-c1 dcook]# kubectl get rc kube-dns-v18 --namespace kube-system
NAME           DESIRED   CURRENT   AGE
kube-dns-v18   1         1         34s

[root@dcook-kube-c1 dcook]# kubectl get pods --namespace kube-system
NAME                 READY     STATUS             RESTARTS   AGE
kube-dns-v18-cx4ir   3/3       Running            0          46s

Logs:

[root@dcook-kube-c1 dcook]# kubectl logs --namespace="kube-system" kube-dns-v18-cx4ir kubedns
I0726 20:17:52.675064       1 server.go:91] Using http://10.2.1.245:8080 for kubernetes master
I0726 20:17:52.676138       1 server.go:92] Using kubernetes API v1
I0726 20:17:52.676498       1 server.go:132] Starting SkyDNS server. Listening on port:10053
I0726 20:17:52.676815       1 server.go:139] skydns: metrics enabled on :/metrics
I0726 20:17:52.676836       1 dns.go:166] Waiting for service: default/kubernetes
I0726 20:17:52.677584       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0726 20:17:52.677604       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0726 20:17:52.867455       1 server.go:101] Setting up Healthz Handler(/readiness, /cache) on port :8081
I0726 20:17:52.867843       1 dns.go:660] DNS Record:&{10.254.0.1 0 10 10  false 30 0  }, hash:63b49cf0
I0726 20:17:52.867898       1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 443 10 10  false 30 0  }, hash:c3f6ae26
I0726 20:17:52.868048       1 dns.go:660] DNS Record:&{kubernetes.default.svc.cluster.local. 0 10 10  false 30 0  }, hash:b9b7d845
I0726 20:17:52.868103       1 dns.go:660] DNS Record:&{10.254.91.7 0 10 10  false 30 0  }, hash:9b59fd9c
I0726 20:17:52.868137       1 dns.go:660] DNS Record:&{my-nginx.default.svc.cluster.local. 0 10 10  false 30 0  }, hash:b0f41a92

[root@dcook-kube-c1 dcook]# kubectl logs --namespace="kube-system" kube-dns-v18-cx4ir healthz
2016/07/26 20:17:11 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-07-26 20:17:10.667247682 +0000 UTC, error exit status 1
2016/07/26 20:17:21 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-07-26 20:17:20.667213321 +0000 UTC, error exit status 1
2016/07/26 20:17:31 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-07-26 20:17:30.667225804 +0000 UTC, error exit status 1
2016/07/26 20:17:41 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-07-26 20:17:40.667218056 +0000 UTC, error exit status 1
2016/07/26 20:17:51 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-07-26 20:17:50.667724036 +0000 UTC, error exit status 1

Upvotes: 0

Views: 1593

Answers (1)

puja

Reputation: 317

You are missing a service that exposes your pod(s): https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/skydns-svc.yaml.in

There you set the ClusterIP, which you then need to use when you start the kubelets.
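For example, a minimal service matching the RC above could look like this (the clusterIP of 10.254.100.1 is just the value from your DNS_SERVER_IP setting; any free IP inside the apiserver's --service-cluster-ip-range works):

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns          # matches the pod labels from your RC
  clusterIP: 10.254.100.1      # must be a free IP inside the service CIDR
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP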

You need to start the kubelets with --cluster_dns=<the IP you used in the service> --cluster_domain=cluster.local
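With the CentOS RPM layout from that guide, that would typically mean editing the kubelet config on every node and restarting the kubelet, roughly like this (a sketch; the exact file and variable name depend on how your kubelet unit passes its arguments):

# /etc/kubernetes/kubelet (on each node)
# tell the kubelet which DNS server and search domain to put into the pods' resolv.conf
KUBELET_ARGS="--cluster-dns=10.254.100.1 --cluster-domain=cluster.local"

# then restart the kubelet; only pods created after this pick up the new settings
systemctl restart kubelet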

Also, I would update your RC YAML to the most recent version (v19), analogous to what you see here: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/skydns-rc.yaml.in
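Once the service exists and the kubelets have been restarted with those flags, you can verify resolution from a throwaway pod (assuming the busybox image can be pulled on your nodes):

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]

Save that as busybox.yaml, then:

kubectl create -f busybox.yaml
kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local

A successful lookup should return the 10.254.0.1 service IP you already see in the kubedns logs.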

Upvotes: 1
