cucucool

Reputation: 3887

Kubernetes API server connection refused

I am trying to set up a Kubernetes cluster using the instructions at https://coreos.com/kubernetes/docs/latest/getting-started.html.

I am at step 2 (Deploy master). When I start the master service, it reports active status, but it cannot communicate with the API server. Also, six containers have started, but their logs are empty. The kubelet log is below:

    Jan 26 07:54:18 kubernetes-1.novalocal systemd[1]: Started kubelet.service.
    Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: W0126 07:54:20.214551    1115 server.go:585] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
    Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: W0126 07:54:20.214631    1115 server.go:547] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
    Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.217269    1115 plugins.go:71] No cloud provider specified.
    Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.219217    1115 manager.go:128] cAdvisor running in container: "/system.slice/kubelet.service"
    Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.672952    1115 fs.go:108] Filesystem partitions: map[/dev/vda9:{mountpoint:/ major:254 minor:9 fsType: blockSize:0} /dev/vda3:{mountpoint:/usr major:254 minor:3 fsType: blockSize:0} /dev/vda6:{mountpoi
    Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.856238    1115 manager.go:163] Machine: {NumCores:2 CpuFrequency:1999999 MemoryCapacity:4149022720 MachineID:5a493caa9327449cabd050ac6cd2e065 SystemUUID:5A493CAA-9327-449C-ABD0-50AC6CD2E065 BootID:541d
    Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.858067    1115 manager.go:169] Version: {KernelVersion:4.3.3-coreos-r2 ContainerOsVersion:CoreOS 899.5.0 DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:}
    Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.862564    1115 server.go:798] Adding manifest file: /etc/kubernetes/manifests
    Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.862655    1115 server.go:808] Watching apiserver
    Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:21.165506    1115 plugins.go:56] Registering credential provider: .dockercfg
    Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.171563    1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection r
    Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.172329    1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection r
    Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.173114    1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection refused

The following containers were launched:

    2bf275350996        gcr.io/google_containers/podmaster:1.1      "/podmaster --etcd-se"   26 minutes ago      Up 26 minutes                           k8s_controller-manager-elector.5b0f7cea_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_1f370f8c
    c64042286744        gcr.io/google_containers/podmaster:1.1      "/podmaster --etcd-se"   26 minutes ago      Up 26 minutes                           k8s_scheduler-elector.bc3d71be_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_c9ecb387
    81bd74d0396a        gcr.io/google_containers/hyperkube:v1.1.2   "/hyperkube proxy --m"   26 minutes ago      Up 26 minutes                           k8s_kube-proxy.176f5569_kube-proxy-192.168.111.32_kube-system_8a987aa8c76c4d76bd80ccff5b65ffea_840d8228
    39494ed8e814        gcr.io/google_containers/pause:0.8.0        "/pause"                 27 minutes ago      Up 27 minutes                           k8s_POD.6d00e006_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_36b73b1d
    632dc0a2f612        gcr.io/google_containers/pause:0.8.0        "/pause"                 27 minutes ago      Up 27 minutes                           k8s_POD.6d00e006_kube-apiserver-192.168.111.32_kube-system_86819bf93f678db0ee778b8c8bb658dc_815c6627
    361b297b37f9        gcr.io/google_containers/pause:0.8.0        "/pause"                 27 minutes ago      Up 27 minutes                           k8s_POD.6d00e006_kube-proxy-192.168.111.32_kube-system_8a987aa8c76c4d76bd80ccff5b65ffea_7a6182ed

Upvotes: 1

Views: 5986

Answers (1)

Rob

Reputation: 2456

These are trying to talk to the insecure (plain HTTP) endpoint of the API server on 127.0.0.1:8080, which is only served locally on the master and shouldn't work between machines. Additionally, the master isn't set up to accept work (register_node=false), so it is not expected to report its status back.
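As a quick sanity check (a minimal sketch, assuming the default insecure port 8080 from the guide), you can probe that endpoint directly:

    # On the master itself: this should return a JSON version response once
    # kube-apiserver is up. "connection refused" means the apiserver
    # container has not started correctly.
    curl http://127.0.0.1:8080/version

    # From any other machine this is expected to fail, since the insecure
    # port is bound to localhost on the master only.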

The key piece of info we're missing is which machine that log came from. Did you set the MASTER_HOST parameter correctly? The guide describes it as:

The address of the master node. In most cases this will be the publicly routable IP of the node. Worker nodes must be able to reach the master node(s) via this address on port 443.
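To verify that a worker can actually reach the master on the secure port, a minimal reachability check (substitute the value you used for MASTER_HOST; -k skips certificate verification, which is fine for a connectivity test but not for real traffic):

    # Run from a worker node. Any HTTP response, even 401/403, means the
    # secure endpoint is reachable; "connection refused" or a timeout means
    # MASTER_HOST or the network path is wrong.
    curl -k https://${MASTER_HOST}:443/version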

Also, note this section of the docs:

Note that the kubelet running on a master node may log repeated attempts to post its status to the API server. These warnings are expected behavior and can be ignored. Future Kubernetes releases plan to handle this common deployment consideration more gracefully.

Upvotes: 1
