Zwadderich

Reputation: 251

cloud-config not being read

I am setting up a simple CoreOS cluster with Vagrant, and I have a feeling my cloud-init file is not being read, because the services I want it to start aren't running when I SSH into the machines.

Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.

require 'yaml'
require 'fileutils'

# Look for user-data file to configure/customize CoreOS boxes
# Be sure to edit user-data file to provide etcd discovery URL
USER_DATA = File.join(File.dirname(__FILE__), "user-data")

servers = YAML.load_file('servers.yml')

Vagrant.configure(2) do |config|
  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.ssh.insert_key = false
#  config.ssh.private_key_path = "~/.ssh/id_rsa"
  config.ssh.forward_agent = true
#  config.ssh.password = "vagrant"

  servers.each do |server|
    config.vm.define server["name"] do |srv|
      srv.vm.box_check_update = false
      srv.vm.hostname = server["name"]
      srv.vm.box = server["box"]

      srv.vm.network "private_network", ip: server["priv_ip"]
      srv.vm.network "public_network", bridge: "vlan0", ip: server["pub_ip"]

      if srv.vm.box == "coreos-stable"
        srv.vm.provision :file, :source => "#{USER_DATA}", :destination => "/tmp/vagrantfile-user-data"
        srv.vm.provision :shell, :inline => "mv /tmp/vagrantfile-user-data /var/lib/coreos-vagrant/", :privileged => true
        srv.vm.synced_folder ".", "/vagrant", disabled: true
      end
    end
  end
end

cloud-config.yml

coreos:
  etcd:
    # Every unique cluster needs a new token. Easy to obtain via https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/054daef60b25b0384350be326fb40bf1
    addr: $public_ipv4:4001
    peer-addr: $public_ipv4:7001
  fleet:
    public-ip: $public_ipv4
  units:
    - name: "etcd.service"
      command: "start"
    - name: "fleet.service"
      command: "start"
    - name: docker-tcp.socket
      command: start
      enable: true
      content: |
        [Unit]
        Description=Docker Socket for the API

        [Socket]
        ListenStream=2375
        BindIPv6Only=both
        Service=docker.service

        [Install]
        WantedBy=sockets.target

servers.yml:

- name: arya
  box: yungsang/coreos
  ram: 512
  vcpu: 1
  priv_ip: 192.168.254.101
  pub_ip: 172.16.5.11
- name: sansa
  box: yungsang/coreos
  ram: 512
  vcpu: 1
  priv_ip: 192.168.254.102
  pub_ip: 172.16.5.12
- name: rickon
  box: yungsang/coreos
  ram: 512
  vcpu: 1
  priv_ip: 192.168.254.103
  pub_ip: 172.16.5.13

When I SSH into one of the machines and run systemctl status etcd and/or systemctl status fleet, they are inactive. I am still a beginner; am I doing something wrong? Thanks in advance.

Upvotes: 1

Views: 565

Answers (1)

Rob

Reputation: 2456

Your cloud-config needs to start with the line #cloud-config. Additionally, you can run your files through https://coreos.com/validate/
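
As a minimal sketch based on the cloud-config.yml in the question, the corrected file would begin like this (the #cloud-config header must be the literal first line; the parser uses it to recognize the file as a cloud-config):

#cloud-config

coreos:
  etcd:
    discovery: https://discovery.etcd.io/054daef60b25b0384350be326fb40bf1
    addr: $public_ipv4:4001
    peer-addr: $public_ipv4:7001
  # ... rest of the file unchanged

The validator linked above will catch this and other formatting problems before you boot the VMs.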

Upvotes: 1
