drora

Reputation: 1

KOPS api ELB launches with master instances out-of-service

I have a weird thing going on with kops. After launching a cluster into a new VPC, I can't get kubectl configured against the created cluster (no such host).

Looking at the Route 53 record (api.dev.mydomain.co), I can see it pointing to an ELB as expected. A quick ELB inspection showed that all registered instances are out of service.
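Something along these lines shows the health of the registered instances (the load balancer name below is a placeholder; kops generates the real one for the api record):

aws elb describe-instance-health \
  --load-balancer-name api-dev-mydomain-co-placeholder \
  --region us-east-1
# every registered instance comes back with "State": "OutOfService"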

I can't SSH to the master instances (they have no public IP), not even through a bastion (SSH with the private key asked for a passphrase I didn't know -> access denied).
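For completeness, reaching a master would normally go through the bastion with agent forwarding, roughly like this (the bastion hostname, the private IP, and the admin login for the Debian image are assumptions):

# forward the local SSH agent so the same key works on the second hop
ssh -A -i ~/.ssh/id_rsa admin@bastion.dev.mydomain.co
# then, from the bastion, hop to a master by its private IP
ssh admin@10.0.0.10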

I tried several different network overlays, as well as an older kops version, but nothing worked.

Thoughts or ideas? Am I missing something, or did I misconfigure my AWS account? This used to work on a different account.

kops create cluster \
  --cloud aws \
  --node-count 2 \
  --master-count 1 \
  --zones us-east-1a,us-east-1b,us-east-1c,us-east-1d,us-east-1e,us-east-1f \
  --master-zones us-east-1a \
  --dns-zone mydomain.co \
  --node-size m4.large \
  --master-size m4.large \
  --topology private \
  --networking canal \
  --image kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2017-12-02 \
  --name dev.mydomain.co

Upvotes: 0

Views: 1538

Answers (1)

Alexey Dmitriev

Reputation: 391

Master nodes can be out of service if they are not able to reach the internet or resolve DNS. I would first check whether the "DHCP Options Set" and "Internet Gateway" are properly configured for the VPC.
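A quick way to check both from the AWS CLI could look roughly like this (the VPC ID below is a placeholder for the one kops created):

# the DHCP options set attached to the VPC should provide AmazonProvidedDNS
aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef0 --query 'Vpcs[0].DhcpOptionsId'
aws ec2 describe-dhcp-options --dhcp-options-ids dopt-0123456789abcdef0

# an internet gateway should be attached to the VPC
aws ec2 describe-internet-gateways \
  --filters Name=attachment.vpc-id,Values=vpc-0123456789abcdef0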

Regarding "passphrase" for private ssh key, it should be the same as you used when were generating the key with "ssh-keygen" command.

https://github.com/kubernetes/kops/blob/master/docs/security.md

The SSH public key can be specified with the --ssh-public-key option; it defaults to ~/.ssh/id_rsa.pub.
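For example, a new cluster could be created with a non-default key, roughly like this (the key path is just an illustration):

kops create cluster \
  --name dev.mydomain.co \
  --zones us-east-1a \
  --ssh-public-key ~/.ssh/kops_dev.pub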

Upvotes: 1
