kosta

Reputation: 4750

kubectl command timeout on Google Kubernetes Engine

I set up a cluster from the Kubernetes dashboard with 2 nodes in a private network.

I have exposed port 80, which maps to port 8545 on the container instances. So when I access the external IP, I can view my application.

I set up kubectl on my machine and ran the following command:

gcloud container clusters get-credentials <cluster name> --zone <my-zone> --project <project name>

However, when I then run kubectl, I get an error:

$ kubectl get deployments
Unable to connect to the server: dial tcp 35.194.113.118:443: i/o timeout

On the GCP dashboard, I see the following for the cluster

Endpoint    
35.194.113.118

It also has a "view credentials" option next to it, which shows a certificate file, a username, and a password.

So, I tried setting the credentials:

kubectl config set-credentials cluster-admin --username=admin --password=<my password>

I tried the kubectl command again; however, I get the same timeout error. Can someone help me fix this?

Upvotes: 4

Views: 8704

Answers (2)

Nechita Radu

Reputation: 1

Although the OP solved the issue, I will try to clarify it for others.

The issue comes from the fact that the cluster is a private cluster and therefore is not accessible from just any IP. The Google Cloud Shell (or any shell from a cloud provider) is not in the same IP range as the cluster, so we have to allow the cluster to accept requests coming from a trusted source (our IP range).

To be able to connect properly to the Kubernetes cluster, we have to do the following:
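In short, the fix is to add your own IP range to the cluster's master authorized networks. A rough sketch of the steps, reusing the private-cluster-1 / europe-west1 example from the commands below and angle-bracket placeholders for your own values:

1. Find the public IP of the machine you run kubectl from, for example:

curl -s ifconfig.me

2. Look up the authorized networks (auth_nets) already configured on the cluster, so they are not overwritten.

3. Update the cluster so that your IP, as a /32 CIDR block, is added alongside the existing auth_nets:

gcloud container clusters update private-cluster-1 --region europe-west1 --enable-master-authorized-networks --master-authorized-networks <existing_auth_nets>,<your_ip>/32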

This should solve the issue of timeout on various kubectl commands.

PS: For step 3, to determine the existing auth_nets, the following command is suggested:

gcloud container clusters describe private-cluster-1 --region europe-west1 --format "flattened(masterAuthorizedNetworksConfig.cidrBlocks[])"

I found it helpful to use:

gcloud container clusters describe private-cluster-1 --region europe-west1 | grep master

Upvotes: 0

kosta

Reputation: 4750

I added an authorized network by editing the cluster and selecting Add Authorized network. This seems to have solved the problem.

https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
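For reference, a roughly equivalent gcloud command for what the console's "Add authorized network" button does (the cluster name, zone, and IP here are placeholders; include any already-authorized networks in the list so they are not dropped):

gcloud container clusters update <cluster name> --zone <my-zone> --enable-master-authorized-networks --master-authorized-networks <your-ip>/32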

Upvotes: 8
