Reputation: 3
I would like to set up a CoreOS cluster on VirtualBox. I have read the CoreOS documentation on the official site, which says I have to boot each virtual machine with the same configuration and they should be clustered automatically. I am using the ct command to translate the Container Linux Config into the CoreOS Ignition file:
ct --platform=vagrant-virtualbox < containerLinuxConfig > ignition.json
This is my Container Linux Config file:
etcd:
  name: "{HOSTNAME}"
  listen_peer_urls: "http://{PRIVATE_IPV4}:2380"
  listen_client_urls: "http://0.0.0.0:2379"
  initial_advertise_peer_urls: "http://{PRIVATE_IPV4}:2380"
  advertise_client_urls: "http://{PRIVATE_IPV4}:2379"
  # replace "<token>" with a valid etcd discovery token
  discovery: "https://discovery.etcd.io/b89df44ae2643afed5d3f05ea774ba6b"
systemd:
  units:
    - name: docker-tcp.socket
      enable: true
      contents: |
        [Unit]
        Description=Docker Socket for the API

        [Socket]
        ListenStream=2375
        Service=docker.service
        BindIPv6Only=both

        [Install]
        WantedBy=sockets.target
    - name: flanneld.service
      dropins:
        - name: 50-network-config.conf
          contents: |
            [Service]
            ExecStartPre=/usr/bin/etcdctl set /flannel/network/config '{ "Network": "10.2.0.0/16", "Backend":{"Type":"vxlan"} }'
flannel:
  etcd_prefix: "/flannel/network"
passwd:
  users:
    - name: core-01
      password_hash: $1$B61gfKDk$ALsU28o4XGSro4Uqd00FW/
      groups:
        - sudo
        - docker
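One thing worth double-checking with the discovery approach: discovery URLs are single-use, so a token that has already bootstrapped (or half-bootstrapped) a cluster cannot be reused. A fresh one can be requested like this (size= must match the number of initial members; the empty fallback is only there so the sketch does not abort when offline):

```shell
# Request a new discovery token for a 3-node cluster (adjust size=).
token=$(curl -fs "https://discovery.etcd.io/new?size=3") || token=""
echo "${token:-could not reach discovery.etcd.io}"
```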
But when I boot the first virtual machine and run the
etcdctl member list
command to check whether the first member of the cluster is up, I get this output:
Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:2379: connect: connection refused
; error #1: dial tcp 127.0.0.1:4001: connect: connection refused
error #0: dial tcp 127.0.0.1:2379: connect: connection refused
error #1: dial tcp 127.0.0.1:4001: connect: connection refused
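Before changing the config it can help to see what etcd is actually doing on the VM. A minimal triage sketch, assuming the Container Linux default unit name etcd-member.service (adjust if yours differs):

```shell
# Each check reports a failure instead of aborting, so all of them run.
report=""
check() {
  "$@" >/dev/null 2>&1 || report="$report FAILED: $*"
}
check systemctl is-active etcd-member.service  # is the etcd unit running at all?
check curl -fs http://127.0.0.1:2379/health    # does the client port answer locally?
check etcdctl cluster-health                   # etcd's own view of the cluster
echo "${report:-all checks passed}"
```

If the unit itself is down, journalctl -u etcd-member.service usually shows why, e.g. a failed Ignition substitution or an exhausted discovery token.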
The output should instead be similar to:
e601a65b304e868f: name=core-01 peerURLs=http://192.168.1.30:2380 clientURLs=http://192.168.1.30:2379 isLeader=true
Why is this happening, and what should I change in the Container Linux Config to get the machines clustered?
Upvotes: 0
Views: 187
Reputation: 31
It looks to me like etcd is falling back to its default parameters (127.0.0.1:2379). Did you try to specify ${HOSTNAME} and ${PRIVATE_IPV4}?
And consider this as well:
--initial-cluster-state
Initial cluster state ("new" or "existing"). Set to "new" for all members present during initial static or DNS bootstrapping. If this option is set to "existing", etcd will attempt to join the existing cluster. If the wrong value is set, etcd will attempt to start but fail safely. Default: "new". Environment variable: ETCD_INITIAL_CLUSTER_STATE
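The environment-variable form mentioned above can also be set through a systemd drop-in if you ever need to override it outside the Container Linux Config. A hypothetical drop-in (the path and file name are an example, not a fixed convention):

```ini
# /etc/systemd/system/etcd-member.service.d/20-cluster-state.conf
# Hypothetical drop-in; equivalent to passing --initial-cluster-state.
[Service]
Environment=ETCD_INITIAL_CLUSTER_STATE=new
```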
Refer to the configuration below. This will help you spin up a single etcd instance. You can skip the SSL certificates if you want:
etcd:
  version: 3.2.17
  name: core-01
  data_dir: /var/lib/etcd
  listen_client_urls: https://10.0.2.11:2379,https://127.0.0.1:2379,https://127.0.0.1:4001
  advertise_client_urls: https://10.0.2.11:2379
  listen_peer_urls: https://10.0.2.11:2380
  initial_advertise_peer_urls: https://10.0.2.11:2380
  initial_cluster: core-01=https://10.0.2.11:2380
  initial_cluster_token: etcd-token
  initial_cluster_state: new
  cert_file: /var/lib/etcd/ssl/apiserver-etcd-client.pem
  key_file: /var/lib/etcd/ssl/apiserver-etcd-client-key.pem
  peer_cert_file: /var/lib/etcd/ssl/apiserver-etcd-client.pem
  peer_key_file: /var/lib/etcd/ssl/apiserver-etcd-client-key.pem
  client_cert_auth: true
  peer_client_cert_auth: true
  trusted_ca_file: /etc/ssl/certs/ca.pem
  peer_trusted_ca_file: /etc/ssl/certs/ca.pem
  auto_compaction_retention: 1
If you want to add more nodes, just add the other nodes' IP addresses:
...
initial_cluster: coreos1=https://10.0.0.4:2380,coreos2=https://10.0.0.5:2380,coreos3=https://10.0.0.6:2380
...
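With client_cert_auth enabled as above, etcdctl also needs the certificates. A hypothetical query matching the paths in that config (the v2 etcdctl shipped with Container Linux takes the cert flags directly; adjust the endpoint and paths for your VM, and the fallback message is only there so the sketch does not abort off the VM):

```shell
# Query the member list over TLS using the client cert from the config.
out=$(etcdctl \
  --endpoints https://10.0.2.11:2379 \
  --cert-file /var/lib/etcd/ssl/apiserver-etcd-client.pem \
  --key-file /var/lib/etcd/ssl/apiserver-etcd-client-key.pem \
  --ca-file /etc/ssl/certs/ca.pem \
  member list 2>/dev/null) || out="etcdctl unreachable (run this on the VM itself)"
echo "$out"
```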
Upvotes: 0