Reputation: 1259
When I run the kubectl version command, I get the following error message.
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
How do I resolve this?
Upvotes: 79
Views: 198150
Reputation: 35
I will post the simplest solution.
Go to the terminal and type:
minikube status
If it is stopped, then type:
minikube start
Done
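If it still times out after starting, it may help to confirm that kubectl is actually pointed at the minikube context (a quick extra check, assuming a standard minikube setup):
kubectl config use-context minikube
kubectl get nodes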
Upvotes: 1
Reputation: 40603
I had the same issue when I tried to use Kubernetes installed with Docker. It turned out that it was not enabled by default.
First I enabled Kubernetes in the Docker options and then I switched the context to docker-desktop:
kubectl config get-contexts
kubectl config use-context docker-desktop
It solved the issue.
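To confirm the switch worked, a quick check like this should now reach the local API server (assuming Kubernetes has finished starting inside Docker Desktop):
kubectl cluster-info
kubectl get nodes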
Upvotes: 17
Reputation: 191
I faced the same issue; it may be that your IP was not added to the authorized networks list of the Kubernetes cluster. Simply navigate to:
GCP console -> Kubernetes Engine -> Click into the Clusters you wish to interact with
In the target Cluster page look for:
Control plane authorized networks -> click pencil icon -> Add Authorized Network
Add your external IP with a CIDR suffix of /32 (xxx.xxx.xxx.xxx/32).
One way to get your external IP in the terminal / CMD:
curl -4 ifconfig.co
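The same change can also be made from the gcloud CLI instead of the console; a rough sketch, assuming a hypothetical cluster name and zone that you would replace with your own (as far as I know, this sets the whole authorized list rather than appending to it):
gcloud container clusters update my-cluster --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks $(curl -4 -s ifconfig.co)/32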
Upvotes: 0
Reputation: 182
Step-1: Run this command to see the list of all contexts:
kubectl config view
Step-2: Now switch to the context where you want to work:
kubectl config use-context [context-name]
For example:
kubectl config use-context docker-desktop
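Optionally, you can verify afterwards that the switch took effect:
kubectl config current-context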
Upvotes: 0
Reputation: 351
Adding this here so it can help someone with a similar problem.
In our case, we had to configure our VPC network to export its custom routes for the VPC peering "gke-jn7hiuenrg787hudf-77h7-peer" in project "" to the control plane's VPC network.
The control plane's VPC network is already configured to import custom routes. This provides a path for the control plane to send packets back to on-premises resources.
Upvotes: 0
Reputation: 1600
I have two contexts, and I got this error when I was in the wrong one of the two. Switching the context resolved the error.
To see your current context: kubectl config current-context
To see the contexts you have: kubectl config view
To switch context: kubectl config use-context context-cluster-name
Upvotes: 0
Reputation: 18373
You can get relevant information about the client-server status by using the following command:
kubectl config view
Now you can update or set the k8s context accordingly with the following command:
kubectl config use-context CONTEXT-CHOSEN-FROM-PREVIOUS-COMMAND-OUTPUT
You can do further actions on the kubeconfig file; the following command will provide you with all the necessary information:
kubectl config --help
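For a shorter listing that just shows the context names and marks the active one, this also helps:
kubectl config get-contexts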
Upvotes: 62
Reputation: 8718
My problem was that I use two virtual networks on my VMs. The network which Kubernetes uses is always the one of the default gateway. However, the communication network between my VMs was the other one.
You can force Kubernetes to use a different network by using the following flags:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-cert-extra-sans=xxx.xxx.xxx.xxx --apiserver-advertise-address=xxx.xxx.xxx.xxx
Change the xxx.xxx.xxx.xxx to the communication IP address of your K8S master.
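After the init completes, one way to double-check which address the generated kubeconfig actually points at (a quick check, not part of the original steps):
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'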
Upvotes: 0
Reputation: 375
This problem occurs because of minikube. Restarting minikube will solve this problem. Run the commands below and it will work:
minikube stop
minikube delete
minikube start
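Once the fresh start finishes, minikube status should report the host, kubelet, and apiserver as Running before kubectl will be able to connect:
minikube status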
Upvotes: 12
Reputation: 1549
If you are using Azure and have recently changed your password, try this:
az account clear
az login
After logging in successfully:
az aks get-credentials --name project_name --resource-group resource_group_name
Now when you run
kubectl get nodes
you should see something. Also, make sure you are using the correct kubectl context.
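az aks get-credentials normally sets the current context for you, but you can double-check it (the context name usually matches the cluster name):
kubectl config current-context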
Upvotes: 0
Reputation: 13085
I was facing the same problem when accessing the GKE master from Google Cloud Shell.
Then I followed this GCloud doc to solve it.
Open GCloud Shell
Get the external IP of the current GCloud Shell with:
dig +short myip.opendns.com @resolver1.opendns.com
Add this External IP into the "Master authorized networks" section of the GKE cluster - with a CIDR suffix of /32
After that, running kubectl get nodes
from the GCloud Shell worked right away.
Upvotes: 4
Reputation: 480
I was facing the same issue on Ubuntu 18.04.1 LTS.
The solution provided here worked for me.
Just putting the same data here:
Get the current cluster name and zone:
gcloud container clusters list
Configure Kubernetes to use your current cluster:
gcloud container clusters get-credentials [cluster name] --zone [zone]
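For example, with a hypothetical cluster name and zone:
gcloud container clusters get-credentials my-cluster --zone us-central1-a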
Hope it helps.
Upvotes: 14
Reputation: 136
I got a similar problem when I ran
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
Here's what I tried and what finally worked.
First I installed Docker Desktop on Mac (version 2.0.0.3). Then I installed kubectl with the command
$ brew install kubectl
.....
==> Pouring kubernetes-cli-1.16.0.high_sierra.bottle.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/kubectl
Target /usr/local/bin/kubectl
already exists. You may want to remove it:
rm '/usr/local/bin/kubectl'
To force the link and overwrite all conflicting files:
brew link --overwrite kubernetes-cli
To list all files that would be deleted:
brew link --overwrite --dry-run kubernetes-cli
Possible conflicting files are:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
.....
That doesn't matter; we have already got kubectl. Then I installed minikube with the command
$ brew cask install minikube
...
==> Linking Binary 'minikube-darwin-amd64' to '/usr/local/bin/minikube'.
minikube was successfully installed!
Start minikube the first time (VirtualBox not installed):
$ minikube start
minikube v1.4.0 on Darwin 10.13.6
Downloading VM boot image ...
> minikube-v1.4.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
> minikube-v1.4.0.iso: 135.73 MiB / 135.73 MiB [-] 100.00% 7.75 MiB p/s 18s
Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
Retriable failure: create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
...
Unable to start VM
Error: [VBOX_NOT_FOUND] create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
Suggestion: Install VirtualBox, or select an alternative value for --vm-driver
Documentation: https://minikube.sigs.k8s.io/docs/start/
Related issues:
- https://github.com/kubernetes/minikube/issues/3784
Install VirtualBox, then start minikube a second time (VirtualBox installed):
$ minikube start
13:37:01.006849 35511 cache_images.go:79] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 failed: read tcp 10.49.52.206:50350->104.18.125.25:443: read: operation timed out
Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
E1002 13:37:33.632298 35511 start.go:706] Error caching images: Caching images for kubeadm: caching images: caching image /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: read tcp 10.49.52.206:50350->104.18.125.25:443: read: operation timed out
Unable to load cached images: loading cached images: loading image /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: stat /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: no such file or directory
minikube v1.4.0 on Darwin 10.13.6
Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
E1002
Downloading kubeadm v1.16.0
Downloading kubelet v1.16.0
Pulling images ...
Launching Kubernetes ...
Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout
Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
https://github.com/kubernetes/minikube/issues/new/choose
Problems detected in kube-addon-manager [b17d460ddbab]:
error: no objects passeINFO:d == Kuto apberneply
error: no objectNsF Op:a == Kubernetssed tes ado appdon ely
Start minikube a 3rd time:
$ minikube start
minikube v1.4.0 on Darwin 10.13.6
Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
Using the running virtualbox "minikube" VM ...
Waiting for the host to be provisioned ...
Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
Relaunching Kubernetes using kubeadm ...
It still got stuck on Relaunching.
I enabled Kubernetes in the Docker Preferences settings, restarted my Mac, and switched the Kubernetes context to docker-for-desktop.
Oh, kubectl version works this time, but with the context docker-for-desktop:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Start minikube a 4th time (maybe after a system restart):
$ minikube start
minikube v1.4.0 on Darwin 10.13.6
Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
Starting existing virtualbox VM for "minikube" ...
Waiting for the host to be provisioned ...
Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
Relaunching Kubernetes using kubeadm ...
Waiting for: apiserver proxy etcd scheduler controller dns
Done! kubectl is now configured to use "minikube"
Finally, it works with the minikube context...
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Upvotes: 2
Reputation: 11
I checked the firewall port and it was closed; I opened it and it started working.
Upvotes: 0
Reputation: 5876
You first have to run
minikube start
in your terminal. This will do the following things for you:
Restarting existing virtualbox VM for "minikube" ...
Waiting for SSH access ...
"minikube" IP address is 192.168.99.100
Configuring Docker as the container runtime ...
Version of container runtime is 18.06.3-ce
Waiting for image downloads to complete ...
Preparing Kubernetes environment ...
Pulling images required by Kubernetes v1.14.1 ...
Relaunching Kubernetes v1.14.1 using kubeadm ...
Waiting for pods: apiserver proxy etcd scheduler controller dns
Updating kube-proxy configuration ...
Verifying component health ......
kubectl is now configured to use "minikube"
Done! Thank you for using minikube!
Upvotes: 43
Reputation: 704
If you use minikube, then you should run: kubectl config use-context minikube
If you use the latest Docker for Desktop that comes with Kubernetes, then you should run: kubectl config use-context docker-for-desktop
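On newer Docker Desktop releases the context is named docker-desktop rather than docker-for-desktop; you can list the exact names available on your machine with:
kubectl config get-contexts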
Upvotes: 17