Daniel Dror

Reputation: 2507

running logstash as a daemon inside a docker container

To be fair, all I wanted to do was have metricbeat send system stats to Elasticsearch and view them in Kibana.

I read through the Elasticsearch docs, trying to find clues. I am basing my image on Python since my actual app is written in Python, and my eventual goal is to send all logs (system stats via metricbeat, and app logs via filebeat) to Elastic.

I can't seem to find a way to run logstash as a service inside a container.

My Dockerfile:

FROM python:2.7

WORKDIR /var/local/myapp
COPY . /var/local/myapp

# logstash
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN apt-get update && apt-get install apt-transport-https dnsutils default-jre apt-utils -y
RUN echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-5.x.list
RUN apt-get update && apt-get install -y logstash

# metricbeat
#RUN wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.6.0-amd64.deb
RUN dpkg -i metricbeat-5.6.0-amd64.deb

RUN pip install --no-cache-dir -r requirements.txt

RUN apt-get autoremove -y

CMD bash strap_and_run.sh

and the extra script strap_and_run.sh:

python finalize_config.py

# start
echo "starting logstash..."
systemctl start logstash.service

#todo :get my_ip
echo "starting metric beat..."
/etc/init.d/metricbeat start

finalize_config.py

import os

import requests

LOGSTASH_PIPELINE_FILE = 'logstash_pipeline.conf'
LOGSTASH_TARGET_PATH = '/etc/logstash/conf.d'

METRICBEAT_FILE = 'metricbeat.yml'
METRICBEAT_TARGET_PATH = os.path.join(os.getcwd(), 'metricbeat-5.6.0-amd64.deb')

my_ip = requests.get("https://api.ipify.org/").content

ELASTIC_HOST = os.environ.get('ELASTIC_HOST')
ELASTIC_USER = os.environ.get('ELASTIC_USER')
ELASTIC_PASSWORD = os.environ.get('ELASTIC_PASSWORD')

if not os.path.exists(LOGSTASH_TARGET_PATH):
    os.makedirs(LOGSTASH_TARGET_PATH)

# read logstash template file
with open(LOGSTASH_PIPELINE_FILE, 'r') as logstash_f:
    lines = logstash_f.readlines()
    new_lines = []
    for line in lines:
        new_lines.append(line
                         .replace("<elastic_host>", ELASTIC_HOST)
                         .replace("<elastic_user>", ELASTIC_USER)
                         .replace("<elastic_password>", ELASTIC_PASSWORD))

# write current file
with open(os.path.join(LOGSTASH_TARGET_PATH, LOGSTASH_PIPELINE_FILE), 'w+') as new_logstash_f:
    new_logstash_f.writelines(new_lines)

if not os.path.exists(METRICBEAT_TARGET_PATH):
    os.makedirs(METRICBEAT_TARGET_PATH)


# read metricbeat template file
with open(METRICBEAT_FILE, 'r') as metric_f:
    lines = metric_f.readlines()

    new_lines = []
    for line in lines:
        new_lines.append(line
                         .replace("<ip-field>", my_ip)
                         .replace("<type-field>", "test"))

# write current file
with open(os.path.join(METRICBEAT_TARGET_PATH, METRICBEAT_FILE), 'w+') as new_metric_f:
    new_metric_f.writelines(new_lines)
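For context, the logstash_pipeline.conf template this script fills in is not shown in the question; a minimal sketch of what such a beats-to-Elasticsearch pipeline could look like (the port and output settings here are assumptions, only the `<elastic_host>`/`<elastic_user>`/`<elastic_password>` placeholders come from the script above):

```conf
# logstash_pipeline.conf (sketch): receive events from beats, forward to Elasticsearch
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["<elastic_host>:9200"]
    user => "<elastic_user>"
    password => "<elastic_password>"
  }
}
```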

Upvotes: 0

Views: 1236

Answers (1)

Tarun Lalwani

Reputation: 146510

The reason is that there is no init system inside the container, so you should not use service or systemctl. Instead, you need to start the processes in the background yourself. Your updated script would look like this:

python finalize_config.py

# start
echo "starting logstash..."
/usr/share/logstash/bin/logstash &

#todo :get my_ip
echo "starting metric beat..."
/usr/bin/metricbeat &

wait 

You will also need to add handling for TERM and other signals and kill the child processes; if you don't, docker stop will run into issues.
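A runnable sketch of that signal handling, using sleep as a stand-in for the real logstash and metricbeat processes; the script is written to a temp file so the SIGTERM that docker stop would deliver can be simulated from outside:

```shell
# Write the entrypoint sketch to a file so we can exercise it like `docker stop` would.
cat > /tmp/strap_and_run_demo.sh <<'EOF'
#!/bin/bash
# stand-ins for the real logstash / metricbeat background processes
sleep 30 &
PIDS="$!"
sleep 30 &
PIDS="$PIDS $!"

# `docker stop` sends SIGTERM to PID 1: forward it to the children
term_handler() {
  kill -TERM $PIDS 2>/dev/null
  wait $PIDS
  exit 143   # conventional 128 + 15 (SIGTERM)
}
trap term_handler TERM INT

# block until the children exit or a trapped signal arrives
wait
EOF

bash /tmp/strap_and_run_demo.sh &
ENTRY=$!
sleep 1
kill -TERM "$ENTRY"   # simulate `docker stop`
wait "$ENTRY"
rc=$?
echo "entrypoint exited with status $rc"   # prints: entrypoint exited with status 143
```

In a real entrypoint only the contents of the heredoc are needed; the trap forwards the signal to the children and the final wait keeps the script (and the container) alive until they exit.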

In situations like this I prefer to use a process manager like supervisord and run supervisord as the main PID 1.
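A minimal supervisord sketch of that approach (the program paths and config locations here are assumptions based on where the 5.x deb packages install things):

```ini
; /etc/supervisor/conf.d/services.conf
[supervisord]
nodaemon=true   ; stay in the foreground so supervisord can run as PID 1

[program:logstash]
command=/usr/share/logstash/bin/logstash
autorestart=true

[program:metricbeat]
command=/usr/share/metricbeat/bin/metricbeat -c /etc/metricbeat/metricbeat.yml
autorestart=true
```

In the Dockerfile you would then apt-get install -y supervisor and set CMD ["/usr/bin/supervisord", "-n"]; supervisord takes care of restarting the processes and handling signals for you.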

Upvotes: 1
