LightSith

Reputation: 927

Can't connect to GKE cluster with kubectl, getting timeouts

I executed the following command:

gcloud container clusters get-credentials my-noice-cluter --region=asia-south2

and that command runs successfully. I can see the relevant config with kubectl config view.

But when I try to use kubectl, I get a timeout:

❯ kubectl get pods -A -o wide
Unable to connect to the server: dial tcp <some noice ip>:443: i/o timeout

If I create a VM in GCP and use kubectl there, or use GCP's Cloud Shell, it works, but it does not work on our local laptops and PCs.

Some network info about our cluster:

Private cluster     Disabled    
Network     default 
Subnet  default 
VPC-native traffic routing  Enabled     
Pod address range   10.122.128.0/17     
Service address range   10.123.0.0/22   
Intranode visibility    Enabled     
NodeLocal DNSCache  Enabled     
HTTP Load Balancing     Enabled     
Subsetting for L4 Internal Load Balancers   Disabled    
Control plane authorized networks   
office (192.169.1.0/24)
    
Network policy  Disabled    
Dataplane V2    Disabled

I also have firewall rules to allow HTTP/HTTPS:

❯ gcloud compute firewall-rules list
NAME                                       NETWORK  DIRECTION  PRIORITY  ALLOW                         DENY  DISABLED
default-allow-http                         default  INGRESS    1000      tcp:80                              False
default-allow-https                        default  INGRESS    1000      tcp:443                             False
....

Upvotes: 3

Views: 6071

Answers (3)

MLaurenzo

Reputation: 1

In my case, I was on GKE and used a private cluster with a public endpoint. I needed to add my IP to the authorized networks: https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks#add
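A minimal sketch of that step with the gcloud CLI (the cluster name is a placeholder, and `ifconfig.me` is just one of many "what is my IP" services; note that `--master-authorized-networks` replaces the whole list, so repeat any CIDRs you want to keep):

```shell
# Look up the public IP your machine uses to reach the internet
MY_IP=$(curl -s ifconfig.me)

# Add it as a /32 to the cluster's authorized networks
# (CLUSTER_NAME is a placeholder; this flag overwrites the existing
#  list, so include any CIDRs that should stay authorized)
gcloud container clusters update CLUSTER_NAME \
    --enable-master-authorized-networks \
    --master-authorized-networks "EXISTING_CIDR,${MY_IP}/32"
```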

Upvotes: 0

guillaume blaquiere

Reputation: 75990

If it works from your VPC but not from outside, it's because you created a private GKE cluster. The master is only reachable through its private IP or from an authorized network.

Speaking of authorized networks, you have one: office (192.169.1.0/24). Sadly, you registered your office's private IP range, not the public IP your office uses to access the internet.

To solve that, go to a site that shows you your public IP, then update the authorized networks for your cluster with that IP/32, and try again.
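The steps above can be sketched as follows, using the cluster name and region from the question (`ifconfig.me` is one example of a public-IP service; adjust names as needed, and remember that `--master-authorized-networks` replaces the current list, so the existing office range is repeated here):

```shell
# Find the public IP your laptop actually uses on the internet
MY_IP=$(curl -s ifconfig.me)

# Update the cluster's authorized networks, keeping the office range
# and appending your public IP as a /32
gcloud container clusters update my-noice-cluter \
    --region=asia-south2 \
    --enable-master-authorized-networks \
    --master-authorized-networks "192.169.1.0/24,${MY_IP}/32"

# Verify kubectl can now reach the control plane
kubectl get pods -A -o wide
```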

Upvotes: 2

Rafał Leszko

Reputation: 5541

If it works from a GCP VM but not from your local machine, it's either related to the GCP firewall or your GKE cluster does not have a public IP.

First check whether your cluster's endpoint is public; if it is, you need to add a firewall rule that allows traffic over HTTPS (port 443). You can do it with the gcloud tool or via the GCP Console under "Firewall -> Create Firewall Rule".
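A sketch of the gcloud equivalent (the rule name is arbitrary, and the network matches the `default` network shown in the question):

```shell
# Check whether the cluster has a public endpoint
gcloud container clusters describe my-noice-cluter \
    --region=asia-south2 \
    --format="value(endpoint)"

# Allow HTTPS (443) ingress on the default network
# (rule name chosen here for illustration)
gcloud compute firewall-rules create allow-kubectl-https \
    --network=default \
    --direction=INGRESS \
    --allow=tcp:443
```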

Upvotes: 0
