rf guy

Reputation: 439

Correctly using multiple clusters in a kubectl config file

I am trying to switch between a couple of different clusters defined in a kubectl config file, but no matter what I do it defaults to the dev cluster. Any help is greatly appreciated.

Here is the config file named combined:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://zlty10129.vci.att.com:6443
  name: moks-dev-cluster
- cluster:
    insecure-skip-tls-verify: true
    server: https://zlp30531.vci.att.com:6443
  name: moks-prod-cluster
contexts:
- context:
    cluster: moks-dev-cluster
    namespace: com-att-moks-dev
    user: default-user
  name: dev
- context:
    cluster: moks-prod-cluster
    namespace: com-att-moks-prod
    user: default-user
  name: prod
current-context: prod
kind: Config
preferences: {}
users:
- name: default-user
  user:
    token: [email protected]:enc:pass

And when I try to switch to prod, I still get the dev context:

kubectl config --kubeconfig=combined use-context prod

kubectl config view

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://zlty10129.vci.att.com:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: com-att-moks-dev
    user: default-user
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-user
  user:
    token: [email protected]:enc:pass

I am not sure what I am doing wrong.

Thanks!

Upvotes: 0

Views: 435

Answers (1)

rf guy

Reputation: 439

I figured it out. I just renamed the combined file to config and it works.
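
To expand on why that works: by default kubectl only reads $HOME/.kube/config, so a file named combined is ignored unless you point at it explicitly on every command or via the KUBECONFIG environment variable. That is also why kubectl config view (run without --kubeconfig) showed the old default-cluster config rather than the combined file. A minimal sketch of both alternatives, assuming the combined file sits in the current directory:

# Option 1: pass the file explicitly on each command
kubectl config --kubeconfig=combined use-context prod
kubectl config --kubeconfig=combined current-context
kubectl --kubeconfig=combined get pods

# Option 2: point KUBECONFIG at the file for the whole shell session
export KUBECONFIG=$PWD/combined
kubectl config use-context prod
kubectl config current-context

Either approach lets you keep the file under its own name instead of renaming it to ~/.kube/config.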

Upvotes: 1
