NIK

Reputation: 1199

kubectl: how to work with different clusters (contexts) at the same time

I have multiple Kubernetes clusters and want to work on different clusters at the same time. (I'll keep it to 2 clusters to keep things simple.)

As described in the Kubernetes documentation, I have configured two clusters (I'll call them dc1-main and dc2-main).

I log into a node where kubectl is installed, using an application support user (e.g. appuser).

I have two sessions open to this management server at the same time, both logged in as appuser.

I want to use kubectl to manage a different context in each session.

But if I set the active context as below, both sessions to the server pick up the change, as both refer to the same config file (which has both contexts):

kubectl config use-context dc1-main

The other option in the documentation is to pass the context as an argument with each command, which makes every command quite cumbersome:

kubectl --context="dc2-main" get nodes

I'm looking for an easy way to switch the context quickly without affecting the other session. Most likely that would be an environment variable, though I'm not sure it's the easiest option.

I went through the kubectl project on GitHub and found that a change along these lines, involving environment variables, was requested a long time ago.

Any better suggestions?

Upvotes: 6

Views: 6740

Answers (4)

RammusXu

Reputation: 1260

David Maze's answer is the best practice based on kubectl.

I'd also recommend using k9s:

It's a TUI that you can run in separate console sessions, and it caches kube configs:

k9s --context eks-1
k9s --context eks-2

Upvotes: 2

Victor EStalin

Reputation: 191

Not the finest solution, but one of the fastest for me was to create a VM or remote instance (like a free t2.micro on AWS) and work on it via SSH or the VM UI, so each machine has its own configuration.

However, you can take a look at THIS INSTRUCTION, which shows how to export different kubeconfigs in different shell sessions.
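
A minimal sketch of that per-session approach (the file paths are just placeholders for wherever you keep the cluster-specific kubeconfigs):

# session 1: this shell only talks to dc1-main
export KUBECONFIG="$HOME/.kube/dc1-main.config"
kubectl get nodes

# session 2, in a separate shell: this one only talks to dc2-main
export KUBECONFIG="$HOME/.kube/dc2-main.config"
kubectl get nodes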

Upvotes: 1

David Maze

Reputation: 158995

The standard Kubernetes client libraries support a $KUBECONFIG environment variable. This means that pretty much every tool supports it, including Helm and any locally-built tools you have. You can set this to a path to a cluster-specific configuration. Since it's an environment variable, each shell will have its own copy of it.

export KUBECONFIG="$HOME/.kube/dc1-main.config"
kubectl get nodes

In your shell dotfiles, you can write a simple shell function to set this:

kubecfg() {
  export KUBECONFIG="$HOME/.kube/$1.config"
}

In my use I only have one context (user/host/credentials) in each kubeconfig file, so I pretty much never use the kubectl config family of commands. This does mean that, however you set up the kubeconfig file initially, you either need to repeat those steps for each cluster or split out your existing kubeconfig file by hand (it's YAML so it's fairly doable).

# specifically for Amazon Elastic Kubernetes Service
kubecfg dc1-main
aws eks update-kubeconfig --name dc1-main ...
kubecfg dc2-main
aws eks update-kubeconfig --name dc2-main ...
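
If you already have a single merged kubeconfig containing both contexts (as in the question), kubectl itself can do the splitting rather than editing the YAML by hand; a sketch, assuming the merged file is the default ~/.kube/config and the context names from the question:

# run with KUBECONFIG unset (or pointing at the merged ~/.kube/config);
# --minify keeps only the named context, --flatten embeds credentials inline
kubectl config view --minify --flatten --context=dc1-main > "$HOME/.kube/dc1-main.config"
kubectl config view --minify --flatten --context=dc2-main > "$HOME/.kube/dc2-main.config"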

Tools that want to write the configuration also use this variable, which for me mostly comes up if I want to recreate my minikube environment. You may find it useful to chmod 0400 "$KUBECONFIG" to protect these files once you've created them.

Upvotes: 9

NIK

Reputation: 1199

I thought I'd add my quick workaround as an answer.

Creating two aliases as below resolved the problem partially.

alias kdc1='kubectl --context="dc1-main"'
alias kdc2='kubectl --context="dc2-main"'

This gives me two quick commands to access the two contexts.

But the problem remains for commands like helm, which these aliases don't cover.
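
For helm in particular, a parallel pair of aliases could close that gap; a sketch, assuming Helm's --kube-context flag:

alias hdc1='helm --kube-context="dc1-main"'
alias hdc2='helm --kube-context="dc2-main"'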

Upvotes: 1
