Chris Stryczynski

Reputation: 34041

Nomad configuration for single node to act as production server and client

How can I set up Nomad to act the same as its development mode (nomad agent -dev), but as a production setup that persists data?

Do I run separate client and server processes? Or can I configure a single agent to run both?

So, essentially, a one-node Nomad cluster.

Upvotes: 4

Views: 8476

Answers (2)

ninjaintrouble

Reputation: 444

The other answer is valid in its criticism but doesn't actually answer the question, so here is what you can do on Linux:

This assumes you have Nomad installed at /usr/local/bin/nomad.

Nomad config

Create the following config.hcl inside /etc/nomad.d. Make sure to replace the value of name in the example config:

client {
  enabled = true               # run the client role (executes jobs)
}

server {
  enabled          = true      # run the server role (schedules jobs)
  bootstrap_expect = 1         # single-node cluster: one server elects itself leader
}

datacenter = "dc1"
data_dir   = "/opt/nomad"      # state persists here across restarts
name       = "YOUR_NOMAD_NAME_HERE"

The data will be persisted in data_dir (/opt/nomad in this example config).
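
Before starting the agent, make sure the directories referenced above exist. A minimal sketch, assuming the example paths (adjust ownership and permissions to your own policy):

sudo mkdir -p /etc/nomad.d /opt/nomad
sudo chmod 700 /opt/nomad    # cluster state lives here; keep it restricted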

Linux service

Then create a service nomad.service inside /etc/systemd/system/:

[Unit]
Description=Nomad
Documentation=https://nomadproject.io/docs/
Wants=network-online.target
After=network-online.target

[Service]
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/nomad agent -config /etc/nomad.d
KillMode=process
KillSignal=SIGINT
LimitNOFILE=infinity
LimitNPROC=infinity
Restart=on-failure
RestartSec=2
StartLimitBurst=3
TasksMax=infinity

[Install]
WantedBy=multi-user.target

And finally enable and start it with systemctl enable nomad && systemctl start nomad.
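
To verify the single node came up as both server and client, you can query it with the standard Nomad CLI (a quick sanity check; exact output will vary):

nomad server members     # the node should appear as a server and be marked leader
nomad node status        # the same node should appear as a client in "ready" state
systemctl status nomad   # the service itself should be active (running)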

Upvotes: 25

Chris Zacharias

Reputation: 608

Production Nomad does not really "persist" data in the expected sense. It shares data within a cluster through a consensus protocol. Each server keeps its own copy of the "state of the world" and "gossips" with its peers to notice any changes it needs to make. If there is confusion or a tie-break is required, a "leader" provides the answer. This pattern creates redundancy and resiliency in the event that a server in the cluster goes down. Consul is designed to work in an almost identical fashion.

The "dev" mode is essentially a one-server cluster that is also a client. You really do not want to do this in production for a number of reasons. Mainly, the server cluster is designed to oversee and manage the resources and allocations on its associated clients. Colocating them in production on the same machine could create all kinds of problems as you increase the number and resource requirements of your jobs. The last thing you want is your job competing for resources with the process overseeing it.

The recommended baseline production setup would be 3 Nomad servers and 2 Nomad clients, for a total of 5 instances. This gets you the bare-minimum amount of isolation and redundancy expected in a Nomad production deployment.
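
For illustration, the split looks roughly like this in the agent configs. This is only a sketch, and the server address is a hypothetical placeholder:

# config.hcl on each of the 3 servers
server {
  enabled          = true
  bootstrap_expect = 3   # wait for all three servers before electing a leader
}

# config.hcl on each of the 2 clients
client {
  enabled = true
  servers = ["nomad-server-1.internal:4647"]   # hypothetical server address
}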

I would recommend picking the number of Nomad servers early (3 or 5 is recommended; odd numbers avoid evenly split leader elections) and hardening the configuration so that servers can never enter and exit existence unexpectedly. Do not use auto-scaling or dynamic addressing schemes. Instead, lock down the assigned IP, host name, etc. for servers so that if they need a reboot or go offline for some reason, they come up exactly as they were before. Otherwise, you could risk corrupting the server consensus should one of the servers move around.

For the Nomad clients, I typically use a manual-scaling group that lets me scale the number of clients up or down. You could probably use auto-scaling if you can monitor the resources well enough to feed signals to the scaler. Some work is required to scale down properly (i.e. marking the node ineligible and waiting for it to drain, as sketched below), but scaling up is essentially just configuring the box and running the Nomad client.
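
The scale-down steps mentioned above, using the standard Nomad CLI (the node ID is a placeholder):

# Stop the scheduler from placing new allocations on the node
nomad node eligibility -disable <node-id>

# Migrate the existing allocations away, then wait for the drain to finish
nomad node drain -enable <node-id>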

Upvotes: 3
