Erik

Reputation: 898

Attempting to share cluster access results in creating new cluster

I have deleted the config file I used when experimenting with Kubernetes on my own AWS account (using this tutorial) and replaced it with the config file another dev created when they set up Kubernetes on our shared AWS account (using this). When I run kubectl config view, I see the following above the users section:

- cluster:
    certificate-authority-data: REDACTED
    server: <removed>
  name: aws_kubernetes
contexts:
- context:
    cluster: aws_kubernetes
    user: aws_kubernetes
  name: aws_kubernetes
current-context: aws_kubernetes
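
For reference, these are the standard kubectl config subcommands I can use to double-check which context is actually active:

# Show the context kubectl is currently using
kubectl config current-context

# List all contexts defined in this kubeconfig
kubectl config get-contexts

# Explicitly switch to the shared cluster's context if it isn't already active
kubectl config use-context aws_kubernetes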

This leads me to believe that my config is pointing at our shared AWS cluster. However, whenever I run cluster/kube-up.sh it creates a new GCE cluster, so I suspect I'm using the wrong command to bring up the cluster on AWS.

Am I using the wrong command, missing a flag, etc.? I also suspect that kube-up.sh creates a brand-new cluster rather than connecting to a previously created one.
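
Side note: my current guess is that cluster/kube-up.sh picks the cloud provider from the KUBERNETES_PROVIDER environment variable and defaults to GCE when it is unset, which would explain the new GCE cluster. If that's right, the invocation would look roughly like this:

# Select AWS as the provider before running the cluster scripts
# (otherwise the scripts fall back to their GCE default)
export KUBERNETES_PROVIDER=aws
cluster/kube-up.sh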

Upvotes: 0

Views: 55

Answers (1)

Robert Bailey

Reputation: 18230

If you are sharing the cluster, you shouldn't need to run kube-up.sh (that script only needs to be run once to initially create a cluster). Once a cluster exists, you can use the standard kubectl commands to interact with it. Try starting with kubectl get nodes to verify that your configuration file has valid credentials and you see the expected AWS nodes printed in the output.
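
A minimal check, assuming the shared context is named aws_kubernetes as it appears in your kubectl config view output:

# Make sure kubectl is pointed at the shared cluster's context
kubectl config use-context aws_kubernetes

# This should list the shared AWS cluster's nodes if the credentials are valid
kubectl get nodes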

Upvotes: 0
