codependent

Reputation: 24472

Permission error using kubectl after GKE Kubernetes cluster migration from one organization to another

I had a GKE cluster my-cluster created in a project that belonged to organization org1.

When the cluster was created I logged in with [email protected] using gcloud auth login and configured the local kubeconfig using gcloud container clusters get-credentials my-cluster --region europe-west4 --project project.

Recently we had to migrate this project (with the GKE cluster) to another organization, org2. We did it successfully following the documentation.

The IAM owner in org2 is [email protected]. To reconfigure the kubeconfig I repeated the previous steps, this time logging in with [email protected]:

gcloud auth login

gcloud container clusters get-credentials my-cluster --region europe-west4 --project project

When I execute kubectl get pods I get an error referencing the old org1 user:

Error from server (Forbidden): pods is forbidden: User "[email protected]" cannot list resource "pods" in API group "" in the namespace "default": requires one of ["container.pods.list"] permission(s).
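The user in that error comes from whatever user entry the kubeconfig's current context references. A minimal sketch of checking that with standard shell tools (the sample file below is for illustration, with the entry name built from gcloud's gke_<project>_<location>_<cluster> pattern; point CFG at "$HOME/.kube/config" to inspect the real file):

```shell
# Sketch: find which user entry the current context points at.
# CFG is a sample file for illustration only.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
current-context: gke_project_europe-west4_my-cluster
contexts:
- context:
    cluster: gke_project_europe-west4_my-cluster
    user: gke_project_europe-west4_my-cluster
  name: gke_project_europe-west4_my-cluster
EOF
ctx=$(sed -n 's/^current-context: *//p' "$CFG")
user=$(grep 'user: ' "$CFG" | sed 's/.*user: *//')
echo "context: $ctx"
echo "user entry: $user"
rm -f "$CFG"
```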

What's the problem here?

Upvotes: 0

Views: 1044

Answers (2)

codependent

Reputation: 24472

I've accepted DazWilkin's answer since in a way he was right: the config file was "inconsistent".

The problematic bit was in the user section:

  user:
    auth-provider:
      config:
        access-token: xxxxxx      
        expiry: "2021-07-11T18:36:42Z"

For some reason the gcloud container clusters get-credentials command created all the items (cluster, context and user), but with an invalid user section.

To fix it I opened a Cloud Shell directly from the Google Cloud web console and checked the ~/.kube/config file there. My local config was missing the cmd-path, cmd-args, expiry-key and token-key entries:

  user:
    auth-provider:
      config:
        access-token: xxx
        cmd-args: config config-helper --format=json
        cmd-path: /Users/xxx/applications/google-cloud-sdk/bin/gcloud
        expiry: "2021-07-11T18:36:42Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'

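A quick way to tell whether a kubeconfig user entry has those keys is a check like the following (a sketch using only standard shell tools; the sample file reproduces the broken entry from above, and CFG can be pointed at "$HOME/.kube/config" to check a real config):

```shell
# Sketch: report which of the auth-provider keys are absent from a
# kubeconfig. CFG is a sample reproducing the broken user entry.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
users:
- name: gke_project_europe-west4_my-cluster
  user:
    auth-provider:
      config:
        access-token: xxxxxx
        expiry: "2021-07-11T18:36:42Z"
      name: gcp
EOF
missing=""
for key in cmd-path cmd-args expiry-key token-key; do
  grep -q "^ *$key:" "$CFG" || missing="$missing $key"
done
echo "missing:$missing"
rm -f "$CFG"
```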
Upvotes: 1

DazWilkin

Reputation: 40326

This may not be the answer but hopefully it's part of the answer.

gcloud container clusters get-credentials is a convenience function that mutates the local ${KUBECONFIG} (often ~/.kube/config) and populates it with cluster, context and user properties.

I suspect (!?) your KUBECONFIG has become inconsistent.

You should be able to edit it directly to better understand what's happening.

There are 3 primary blocks: clusters, contexts and users. You're looking to find entries (one each of cluster, context and user) for your old GKE cluster and for your new GKE cluster.

Don't delete anything

Either back the file up first, or rename the entries.

Each section will have a name property that reflects the GKE cluster name gke_${PROJECT}_${LOCATION}_${CLUSTER}
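The layout being described looks roughly like this (a trimmed sketch with placeholder values, using the name pattern above):

```yaml
apiVersion: v1
kind: Config
current-context: gke_project_europe-west4_my-cluster
clusters:
- name: gke_project_europe-west4_my-cluster
  cluster:
    server: https://x.x.x.x
contexts:
- name: gke_project_europe-west4_my-cluster
  context:
    cluster: gke_project_europe-west4_my-cluster
    user: gke_project_europe-west4_my-cluster
users:
- name: gke_project_europe-west4_my-cluster
  user:
    auth-provider:
      name: gcp
```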

It may be simply that the current-context is incorrect.

NOTE Even though gcloud creates user entries for each cluster, these are usually identical (per user) and so you can simplify this section.

NOTE If you always use gcloud, it does a decent job of tidying up (removing entries) too.

Upvotes: 0
