Reputation: 46
I am trying to follow the example in this blog post to supply my pods with upstream DNS servers.
I created a new GKE cluster in us-east1-d (where 1.6.0 is available according to the April 4 entry here).
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
I then defined a ConfigMap in the following YAML file, kube-dns-cm.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["1.2.3.4"]
When I try to create the ConfigMap, I am told it already exists:
$ kubectl create -f kube-dns-cm.yml
Error from server (AlreadyExists): error when creating "kube-dns-cm.yml": configmaps "kube-dns" already exists
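Presumably the right move is to update the existing ConfigMap in place rather than create it; I'd expect either of these standard commands to do that:
$ kubectl apply -f kube-dns-cm.yml
$ kubectl replace -f kube-dns-cm.yml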
I tried deleting the existing ConfigMap and replacing it with my own, but when I subsequently create pods, the change does not seem to have taken effect (names are not resolved as I hoped). Is there more to the story than is explained in the blog post (e.g., restarting the kube-dns service, or the pods)? Thanks!
EDIT: Deleting and recreating the ConfigMap actually does work, just not entirely the way I was hoping. My Docker registry is on a private (corporate) network, and resolving the name of the registry requires the upstream nameservers. So I cannot use the DNS name of the registry in my pod YAML file, but the pods created from that YAML file do have the desired DNS resolution (after the ConfigMap is replaced).
Upvotes: 0
Views: 3851
Reputation: 11
The kube-dns pod picks up recent ConfigMap changes only after it is restarted. But I still ran into a similar issue.
Check the logs using:
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
I then tried changing the --cluster-dns value to the host's nameserver IP, which already resolves both internal and external names for me, and restarted the kubelet service for the changes to take effect:
Environment="KUBELET_DNS_ARGS=**--cluster-dns=<<your-nameserver-resolver>>** --cluster-domain=cluster.local"
in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
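On a systemd-based node (which the kubeadm drop-in path above suggests), the restart step would presumably be:
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet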
That works! Containers are now able to reach the external network.
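A quick way to verify resolution from inside the cluster (busybox ships with nslookup; registry.example.com is just a placeholder hostname):
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup registry.example.com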
Upvotes: 1
Reputation: 412
I believe your ConfigMap config is correct, but you do need to restart kube-dns.
Because a ConfigMap is injected into a pod either as env pairs or as a volume (kube-dns uses a volume), you need to restart the target pods (here I mean all the kube-dns pods) for the change to take effect.
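A minimal way to do that, assuming the standard k8s-app=kube-dns label, is to delete the pods and let their Deployment recreate them with the updated volume:
$ kubectl delete pods --namespace=kube-system -l k8s-app=kube-dns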
Another suggestion: from your description, you only want this upstream DNS to help you reach your Docker registry on the private network, so you may want to try stubDomains from the post you referred to instead.
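A sketch of what that could look like, where corp.example.com and 1.2.3.4 are placeholders for your corporate domain and its nameserver:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"corp.example.com": ["1.2.3.4"]}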
And most important, you need to check that your dnsPolicy is set to ClusterFirst, although from your description, I guess it is.
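For reference, that is set per pod spec (the pod and image names here are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  dnsPolicy: ClusterFirst
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]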
Hope this will help you out.
Upvotes: 1
Reputation: 1755
It has been my experience with ConfigMap that when I update it, the changes don't seem to be reflected in the running pod that is using the ConfigMap.
When I delete the pod and the pod comes up again, it shows the latest ConfigMap.
Hence I suggest restarting the kube-dns pod, which seems to use the ConfigMap you created.
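One way to confirm the recreated pod actually sees your ConfigMap is to inspect its volumes (the k8s-app=kube-dns label is what I'd expect on a stock cluster, so treat it as an assumption):
$ kubectl describe pods --namespace=kube-system -l k8s-app=kube-dns | grep -i -A3 configmap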
Upvotes: 1