ivica

Reputation: 1426

install Nagios with Ansible, host variables

I'm learning Ansible and I would like to install a Nagios server with several monitored nodes. The Nagios install steps I'm following are from this tutorial on DigitalOcean.

Step 5 of this tutorial confuses me, as this is my first time using Ansible. This step involves creating a configuration file for each monitored node on the master server, which I attempted with a template like this:

- name: Configure Nagios server
  hosts: master
  sudo: true
  vars:
      nagios_slaves_config_dir: /etc/nagios/servers
      nagios_config_file: /etc/nagios/nagios.cfg
  tasks:
      # shortened for brevity
    - name: copy slaves config
      template: src=../templates/guest.cfg.j2 dest=/etc/nagios/servers/{{ item }}.cfg owner=root mode=0644
      with_items: groups['slaves']

The template looks like this:

define host {
        use                     linux-server
        host_name               {{ inventory_hostname }}
        alias                   {{ inventory_hostname }}
        address                 {{ hostvars['slave'].ansible_eth1.ipv4.address }}
        }

define service {
        use                             generic-service
        host_name                       {{ inventory_hostname }}
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
        }

The configuration file gets created, but the {{ inventory_hostname }} variable is wrong: instead of node_1 it contains master.

How can I template the configuration file for every monitored node so that it is created with the proper values?

EDIT:

One idea is to generate the config files on the monitored nodes and copy them to the master node. I will try this tomorrow.

Upvotes: 3

Views: 2797

Answers (2)

ydaetskcoR

Reputation: 56997

Your play is specifically only targeting your master server:

- name: Configure Nagios server
  hosts: master
  ...

so the task will only run against this node (or multiple nodes in an inventory group called master).

You then seem to have got in a bit of a muddle with how you get the variables from the other servers that you wish to monitor (everything in the slaves inventory group in your case).

inventory_hostname is going to do pretty much what it says on the tin - it's going to give you the hostname of the server that the task is running against, which in this case is only ever going to be master.

You are, however, on the right track with this line:

        address                 {{ hostvars['slave'].ansible_eth1.ipv4.address }}

but you should instead use the item that is passed to the template by the task's loop (you use with_items: groups['slaves'] to loop through all the hosts in slaves).

So your template wants to look something like:

define host {
        use                     linux-server
        host_name               {{ hostvars[item].ansible_hostname }}
        alias                   {{ hostvars[item].ansible_hostname }}
        address                 {{ hostvars[item].ansible_eth0.ipv4.address }}
        }

define service {
        use                             generic-service
        host_name                       {{ hostvars[item].ansible_hostname }}
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
        }

This will generate, for each server in the slaves group, a Nagios config file on the master named after that server's entry in the inventory file (which could be anything, but by default would be an IP address or a short or fully qualified domain name), with the expected values templated in.
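For reference, this assumes an inventory shaped roughly like the one below (the host names are purely illustrative). It also assumes facts have been gathered for the hosts in slaves during the run - for example by running a play (even one with no tasks) against slaves before the master play - otherwise hostvars[item].ansible_hostname and the ansible_eth0 fact will not be defined.

[master]
nagios-server

[slaves]
node_1
node_2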

Alternatively you might want to rethink your whole strategy so that running a task against a monitored node creates the config file on the Nagios server allowing you to register servers to be monitored with a central Nagios server.
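A rough sketch of that approach, reusing the paths and group names from the question (so the details are assumptions about your setup), would be a play that targets the monitored nodes and delegates the templating to the Nagios server:

- name: Register monitored nodes with the Nagios server
  hosts: slaves
  sudo: true
  tasks:
    # runs once per monitored node, but writes the file on the Nagios server
    - name: template this node's config onto the Nagios server
      template: src=../templates/guest.cfg.j2 dest=/etc/nagios/servers/{{ inventory_hostname }}.cfg owner=root mode=0644
      delegate_to: "{{ groups['master'][0] }}"

Because each task run now belongs to a monitored node, the template can use {{ inventory_hostname }} and that node's own facts (e.g. {{ ansible_eth0.ipv4.address }}) directly, while the rendered file still ends up on the Nagios server.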

Upvotes: 3

Dan

Reputation: 1986

It's unclear from your explanation where you expect Ansible to get the node_1 value from. If this is not the hostname, where else is the information stored? If it's stored in a variable, you could access it that way, but from the looks of it, you are using your inventory in a backwards fashion. You should not be using internal implementation details of the system as an inventory name. How are you even able to connect to master, via an entry in /etc/hosts?

Instead of defining your host's name as master, I would create a variable that specifies whether the host is a master or a slave, for instance something like cluster_type: master or cluster_type: slave. These variables could be applied as host variables or as group variables (which is probably what you want if you have multiple slaves). The host name in your inventory should ideally be something that you can actually connect to and reference.
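A minimal sketch of that layout (the file names and host names below are just examples) keeps real, connectable names in the inventory and attaches the role through group variables:

# inventory
[master]
nagios01.example.com

[slaves]
node1.example.com
node2.example.com

# group_vars/master.yml
cluster_type: master

# group_vars/slaves.yml
cluster_type: slave

Plays and templates can then branch on the variable (e.g. when: cluster_type == 'master') instead of relying on the inventory name itself.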

Upvotes: 1
