Olivier

Reputation: 2162

kubectl cannot connect GKE, failing with x509: certificate signed by unknown authority

I can't connect from my machine to any GKE cluster. It works from a remote machine, but not from mine, and I can't figure out why. If any of you have an idea...

I have installed kubectl from within gcloud (gcloud components install kubectl)

I'm running gcloud init, then working either on an existing cluster or on a newly created one with gcloud container clusters create my-cluster --preemptible --cluster-version 1.12.7-gke.10 --machine-type n1-standard-1 --disk-size 20 --num-nodes 1

I'm retrieving my credentials with gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project-123456, which creates a new context for my kubectl; I then switch to it (with kubectx).
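A minimal sketch of that sequence, assuming kubectx is installed and reusing the names from the commands above (the context name follows GKE's gke_&lt;project&gt;_&lt;zone&gt;_&lt;cluster&gt; pattern):

# Fetch cluster credentials and write a new context into ~/.kube/config
gcloud container clusters get-credentials my-cluster \
  --zone europe-west1-b --project my-project-123456

# Switch to the context that get-credentials just created
kubectx gke_my-project-123456_europe-west1-b_my-cluster

# Confirm the active context before talking to the cluster
kubectl config current-context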

But when I'm trying to contact my cluster (e.g. kubectl get pods), it fails with the following message:

Unable to connect to the server: x509: certificate signed by unknown authority

I just can't figure out why my local kubectl can't validate Google's CA. I followed all the resources I found and tried: other clusters in other zones/regions, a different version of Python (2.7 and 3.6), re-initializing gcloud, another Google account, other versions of kubectl (1.11, 1.12 and 1.14), and updating my CA store (sudo update-ca-certificates) on Linux Mint 19.1 Tessa.

Has anyone already faced this and found a solution?

Upvotes: 2

Views: 10797

Answers (4)

Olivier

Reputation: 2162

For you all to know, the issue on my side (and the reason I had chaotic results, with the connection working sometimes and not other times) is that on my professional network there is a MITM proxy, which substitutes Google's certificate with my company's certificate.

So... the certificate is rejected by kubectl... Pretty normal.
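One way to confirm this (a rough sketch; it assumes openssl is available and that the API endpoint recorded in the kubeconfig has no explicit port) is to look at who actually issued the certificate served by the cluster endpoint:

# Read the API server URL from the active kubeconfig context
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

# Print issuer and subject of the certificate presented on the wire;
# behind a MITM proxy the issuer is the corporate CA, not a Google-managed one
echo | openssl s_client -connect "${SERVER#https://}:443" -showcerts 2>/dev/null | openssl x509 -noout -issuer -subject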

Upvotes: 2

Dagm Fekadu

Reputation: 610

This issue happened to me because I had the wrong certificate configured. It can also happen for related reasons when Terraform can't connect to your remote cluster. In order to use the default kubeconfig credentials that are already set up, you can just leave your provider block empty:

provider "kubernetes" {
}
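A brief usage sketch under that assumption (cluster name and zone are placeholders; the empty provider block is expected to fall back to the default kubeconfig at ~/.kube/config):

# Populate the default kubeconfig that the empty provider block falls back to
gcloud container clusters get-credentials YOUR_CLUSTER --zone YOUR_ZONE

# Terraform should now authenticate with the same credentials kubectl uses
terraform init && terraform plan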

Upvotes: 0

Robert Newton

Reputation: 206

If anyone runs into this after they are able to log in, this solution worked for me:

gcloud container clusters get-credentials YOURCLUSTERHERE --zone YOURCLUSTERZONEHERE

After you fill in your info and run it, you should be able to move forward.
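Once the credentials are refreshed, a quick sanity check (plain kubectl commands, nothing beyond what the refreshed context provides) confirms the connection:

# Confirm kubectl is pointing at the context get-credentials just wrote
kubectl config current-context

# Any read-only call will now exercise the new credentials
kubectl get nodes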

Upvotes: 10

Crou

Reputation: 11408

An easy way would be running gcloud auth login, which, as the documentation says:

gcloud auth login - authorize gcloud to access the Cloud Platform with Google user credentials

Obtains access credentials for your user account via a web-based authorization flow. When this command completes successfully, it sets the active account in the current configuration to the account specified. If no configuration exists, it creates a configuration named default. Use gcloud auth list to view credentialed accounts.

This will ask you to log in to the Google Cloud SDK with your account and will ask you to allow access to:

  • View and manage your data across Google Cloud Platform services

  • View and manage your Google Compute Engine resources

  • View and manage your applications deployed on Google App Engine
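A minimal sketch of that flow (the cluster name and zone are placeholders, not taken from the documentation quoted above):

# Re-authorize gcloud with your Google user account (opens a browser flow)
gcloud auth login

# Check which account is now active
gcloud auth list

# Re-fetch cluster credentials with the freshly authorized account
gcloud container clusters get-credentials YOUR_CLUSTER --zone YOUR_ZONE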

The SDK can also be installed with apt-get on Debian and Ubuntu, or with yum on Red Hat and CentOS; see the Cloud SDK installation documentation for those steps.

Upvotes: 3
