lzecca

Reputation: 165

kubectl "error: You must be logged in to the server (Unauthorized)" using kubectx, while no error if used the same config directly

I am encountering weird behavior when I configure several KUBECONFIG entries concatenated with :, as in this example:

export KUBECONFIG=/Users/user/Work/company/project/setup/secrets/dev-qz/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev-wer/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-wer/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev-jg/users/admin.conf:/Users/user/Work/company/project/setup/secrets/preprod/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev-fxc/users/admin.conf:/Users/user/Work/company/project/setup/secrets/cluster-setup/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-fxc/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-jg/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-qz/users/admin.conf

This is what is happening: if I choose a cluster with kubectx (any cluster from the list, not just a particular one) and then try kubectl get po, I receive: error: You must be logged in to the server (Unauthorized). But if I try to reach the same cluster by passing its config directly to the kubectl command with --kubeconfig=<path to the config>, it works. I am really struggling with this and just want to know if anyone else is facing this kind of issue and how they solved it.
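For illustration, this is roughly the sequence (dev-qz here stands for whichever context I pick; the actual context names come from the merged files):

kubectx dev-qz        # pick any context from the merged list
kubectl get po        # error: You must be logged in to the server (Unauthorized)
kubectl get po --kubeconfig=/Users/user/Work/company/project/setup/secrets/dev-qz/users/admin.conf   # works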

Upvotes: 2

Views: 4474

Answers (2)

lzecca

Reputation: 165

Eventually I found the problem. The flatten command that @mario suggested helped me debug the situation.

Basically, the in-memory (or in-file) merge does what it is supposed to do: it creates a kubeconfig with the unique parameters of each kubeconfig file. This works perfectly unless one or more kubeconfigs use the same name to identify the same kind of component. In that case, the last one in order wins. So if you have the following example:

grep -Rn 'name: kubernetes-admin$' infra/secrets/*/users/admin.conf
infra/secrets/cluster1/users/admin.conf:16:- name: kubernetes-admin
infra/secrets/cluster2/users/admin.conf:17:- name: kubernetes-admin
infra/secrets/cluster3/users/admin.conf:16:- name: kubernetes-admin

cluster1 and cluster2 won't work, while cluster3 will work perfectly, simply due to the order. The solution is to avoid non-unique fields by renaming the name that identifies the user (in the example above); a sketch of this is below. Once this change is made, everything works perfectly.
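For example, a rough sketch of the rename with sed (assuming GNU sed; on macOS, sed -i needs an empty suffix argument, sed -i ''; the cluster1/cluster2/cluster3 paths are the ones from the grep output above):

for c in cluster1 cluster2 cluster3; do
  f="infra/secrets/$c/users/admin.conf"
  # Rename the user entry and every reference to it (the context's user:
  # field) so each file contributes a unique user after the merge.
  sed -i "s/kubernetes-admin/kubernetes-admin-$c/g" "$f"
done
# Verify: the names should now be distinct across the files.
grep -Rn 'name: kubernetes-admin' infra/secrets/*/users/admin.conf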

Upvotes: 1

mario

Reputation: 11098

I agree with @Bernard. This doesn't look like anything specific to kubectx, as it is just a bash script which under the hood uses the kubectl binary. You can see its code here. I guess it will also fail in kubectl if you don't provide the --kubeconfig flag. You wrote:

But if I try to reach the same cluster by passing its config directly to the kubectl command with --kubeconfig=<path to the config>, it works.

There is a bit of inconsistency in the way you're testing it, as you don't provide the specific kubeconfig file to both commands. When you use kubectx, it relies on your multiple kubeconfig files merged in memory, and you compare that with a working kubectl example in which you directly specify the kubeconfig file that should be used. To make this comparison consistent, you should also use kubectx with this particular kubeconfig file. And what happens if you run the kubectl command without specifying --kubeconfig=<path to the config>? I guess you get an error similar to the one you get when running kubectx. Please correct me if I'm wrong.
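To make the test apples-to-apples, something like this (the file path and context name are placeholders for one of your admin.conf files and one of its contexts):

# Test both tools against the same single file:
KUBECONFIG=/path/to/single/admin.conf kubectx <context-name>
KUBECONFIG=/path/to/single/admin.conf kubectl get po

# And test kubectl against the merged set, without --kubeconfig:
kubectl get po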

There is a really good article written by Ahmet Alp Balkan, the kubectx author, which nicely explains how you can work with multiple kubeconfig files. As you can read in the article:

Tip 2: Using multiple kubeconfigs at once

Sometimes you have a bunch of small kubeconfig files (e.g. one per cluster) but you want to use them all at once, with tools like kubectl or kubectx that work with multiple contexts at once.

To do that, you need a “merged” kubeconfig file. Tip #3 explains how you can merge the kubeconfigs into a single file, but you can also merge them in-memory.

By specifying multiple files in KUBECONFIG environment variable, you can temporarily stitch kubeconfig files together and use them all in kubectl.

export KUBECONFIG=file1:file2
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2

Tip 3: Merging kubeconfig files

Since kubeconfig files are structured YAML files, you can't just append them to get one big kubeconfig file, but kubectl can help you merge these files:

KUBECONFIG=file1:file2:file3 kubectl config view --merge --flatten > out.txt

Possible solutions:

  1. Try merging your multiple kubeconfig files into a single one, as in the example above, to see if the problem occurs only with the in-memory merge:

    KUBECONFIG=file1:file2:file3 kubectl config view --merge --flatten > out.txt

  2. Review all your kubeconfigs and test them individually, making sure each one works properly when specified in the KUBECONFIG env variable on its own (a quick check is sketched after this list). There might be an error in one of them which is causing the issue.
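A quick way to run that check, assuming the directory layout from your KUBECONFIG above (the path glob is an assumption):

# Test every kubeconfig in isolation; a broken file will print FAILED.
for f in /Users/user/Work/company/project/setup/secrets/*/users/admin.conf; do
  echo "== $f"
  KUBECONFIG="$f" kubectl get po >/dev/null 2>&1 && echo OK || echo FAILED
done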

Upvotes: 0
