oBa

Reputation: 411

CoreOS AWS Cloudinit Issue

I'm trying to set up EC2 instances using the CoreOS stable AMI with a custom cloud-init config, but I'm running into issues.

#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/5996f1b49fd642c5d1bc2f62cbff2fba
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
write_files:
  - path: /etc/fleet/fleet.conf
    content: |
      public_ip="$private_ipv4"
      metadata="elastic_ip=true,public_ip=$public_ipv4"

The cloud-config above works fine, but once I use the cloud-config below

#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/5996f1b49fd642c5d1bc2f62cbff2fba
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
users:
  - name: core
    coreos-ssh-import-github: oba11
write_files:
  - path: /etc/fleet/fleet.conf
    content: |
      public_ip="$private_ipv4"
      metadata="elastic_ip=true,public_ip=$public_ipv4"

or

#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/5996f1b49fd642c5d1bc2f62cbff2fba
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
users:
  - name: oba11
    groups:
      - sudo
      - docker
    coreos-ssh-import-github: oba11
write_files:
  - path: /etc/fleet/fleet.conf
    content: |
      public_ip="$private_ipv4"
      metadata="elastic_ip=true,public_ip=$public_ipv4"

I can no longer SSH into the CoreOS instances, either as the 'core' user with my AWS key pair or personal key, or as the created user 'oba11' with my personal key. I also tried the alpha AMI, with the same result. I don't know if I'm doing something wrong.
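One quick local check before launching: coreos-cloudinit ignores user-data whose first line is not exactly `#cloud-config`, so a stray space or BOM can silently disable the whole file. A minimal sketch (assumes the user-data was saved locally as `cloud-config.yml`, a hypothetical filename):

```shell
# Verify the mandatory header line; coreos-cloudinit skips user-data
# whose first line is not exactly "#cloud-config".
head -n1 cloud-config.yml | grep -qx '#cloud-config' \
  && echo "header OK" \
  || echo "missing #cloud-config header"
```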

Thanks a lot for the help.

Upvotes: 4

Views: 920

Answers (2)

Panagiotis Moustafellos

Reputation: 1013

You are using a stale etcd discovery token.

As soon as your cluster nodes have used this ID, the token is marked as used; if for whatever reason no etcd nodes heartbeat to this address, the token is rendered useless.

Should you try to launch a new cluster or a single node with the same etcd discovery URL, the bootstrap process will fail.

In your case the EC2 nodes will come up with the SSH service running, but they will not be properly configured by that cloud-config.

The behavior you are experiencing (connecting but rejecting your public key) is expected, and it can cause headaches if you haven't read the documentation at https://coreos.com/docs/cluster-management/setup/cluster-discovery/ which states:

Another common problem with cluster discovery is attempting to boot a new cluster with a stale discovery URL. As explained above, the initial leader election is recorded into the URL, which indicates that the new etcd instance should be joining an existing cluster.

If you provide a stale discovery URL, the new machines will attempt to connect to each of the old peer addresses, which will fail since they don't exist, and the bootstrapping process will fail.

Upvotes: 2

ART GALLERY

Reputation: 540

I have successfully booted a 3-machine CoreOS cluster using your config and SSH'd in without any problems. Check your security groups; maybe that is the problem. I was using AMI ami-00158768:

#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/b0ac83415ff737c16670ce015a5d4eeb
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
users:
  - name: gxela
    groups:
      - sudo
      - docker
    coreos-ssh-import-github: gxela
write_files:
  - path: /etc/fleet/fleet.conf
    content: |
      public_ip="$private_ipv4"
      metadata="elastic_ip=true,public_ip=$public_ipv4"
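If a security group is the culprit, a blocked port typically shows up as a timeout rather than a refused connection. A quick probe, a sketch using bash's `/dev/tcp` redirection (the hostname is hypothetical; substitute your instance's public DNS name):

```shell
# A security group that doesn't allow TCP 22 silently drops packets,
# so the probe times out; a reachable port connects immediately.
HOST=ec2-203-0-113-10.compute-1.amazonaws.com   # hypothetical host
timeout 5 bash -c "echo > /dev/tcp/$HOST/22" 2>/dev/null \
  && echo "port 22 reachable" \
  || echo "port 22 blocked or filtered"
```

The same probe against ports 4001 and 7001 confirms whether etcd's client and peer traffic can flow between nodes.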

Upvotes: 0
