Reputation: 9429
I run multiple CoreOS instances on Google Compute Engine (GCE). CoreOS uses systemd's journal logging feature. How can I push all logs to a remote destination? As I understand it, the systemd journal doesn't come with remote logging abilities out of the box. My current workaround looks like this:
journalctl -o short -f | ncat <addr> <port>
With https://logentries.com, using their token-based input via TCP:
journalctl -o short -f | awk '{ print "<token>", $0; fflush(); }' | ncat data.logentries.com 10000
Are there better ways?
EDIT: https://medium.com/coreos-linux-for-massive-server-deployments/defb984185c5
Upvotes: 10
Views: 14151
Reputation: 71
CoreOS uses systemd's journal logging feature. How can I push all logs to a remote destination?
This functionality is provided by systemd-journal-remote together with systemd-journal-upload: systemd-journal-upload streams the journal from each machine to a central logging server running systemd-journal-remote. If the server or the network in between is down, it resumes streaming as soon as the connection is available again.
It is actually quite easy to set up. On each client, point /etc/systemd/journal-upload.conf at the central log server:

[Upload]
URL=http://your_central_logserver_ip:19532
And configure /etc/systemd/system/systemd-journal-upload.service:
[Unit]
Description=Journal Remote Upload Service
Documentation=man:systemd-journal-upload(8)
Wants=network-online.target
After=network-online.target
[Service]
ExecStartPre=/bin/sleep 10
Restart=on-failure
DynamicUser=yes
ExecStart=/lib/systemd/systemd-journal-upload --save-state
LockPersonality=yes
MemoryDenyWriteExecute=yes
PrivateDevices=yes
ProtectProc=invisible
ProtectControlGroups=yes
ProtectHome=yes
ProtectHostname=yes
ProtectKernelLogs=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
RestrictNamespaces=yes
RestrictRealtime=yes
StateDirectory=systemd/journal-upload
SupplementaryGroups=systemd-journal
SystemCallArchitectures=native
User=systemd-journal-upload
WatchdogSec=3min
[Install]
WantedBy=multi-user.target
On the central logging server, create a directory for the clients' logs:
mkdir -p /var/log/journal/remote/
chown systemd-journal-remote:systemd-journal-remote /var/log/journal/remote/
chmod 0750 /var/log/journal/remote/
And set up the config file /etc/systemd/system/systemd-journal-remote.service:
[Unit]
Description=Journal Remote Sink Service
Documentation=man:systemd-journal-remote(8) man:journal-remote.conf(5)
Requires=systemd-journal-remote.socket
[Service]
ExecStart=/lib/systemd/systemd-journal-remote --listen-http=-3 --output=/var/log/journal/remote/all.journal
LockPersonality=yes
LogsDirectory=journal/remote
MemoryDenyWriteExecute=yes
NoNewPrivileges=yes
PrivateDevices=yes
PrivateNetwork=yes
PrivateTmp=yes
ProtectProc=invisible
ProtectClock=yes
ProtectControlGroups=yes
ProtectHome=yes
ProtectHostname=yes
ProtectKernelLogs=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectSystem=strict
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes
SystemCallArchitectures=native
User=systemd-journal-remote
WatchdogSec=3min
# If there are many split up journal files we need a lot of fds to access them
# all in parallel.
LimitNOFILE=524288
[Install]
Also=systemd-journal-remote.socket
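With both unit files in place, reload systemd and bring the services up. A minimal sketch (unit names as defined above):

# On each client (sender):
systemctl daemon-reload
systemctl enable --now systemd-journal-upload.service

# On the central logging server (receiver):
systemctl daemon-reload
systemctl enable --now systemd-journal-remote.socket   # socket activation starts the service on the first upload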
Then test whether it works by logging something with systemd-cat on a client:
systemd-cat ls /
Running journalctl --file /var/log/journal/remote/all.journal -f
should show you the output of the command on the central logging server.
I use this for our servers and it works quite nicely. For more detailed and comprehensive instructions, I followed a blog post on setting up a central logging server with systemd-journal-remote.
Upvotes: 1
Reputation: 11
You can also use the rsyslog-kafka module inside rsyslog, with these modules:
- imfile - file input
- omkafka - output to Apache Kafka
Define a JSON template and push the logs to Apache Kafka; once the logs are in Kafka, you can consume them with whatever downstream system you like.
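For example, a minimal rsyslog configuration along those lines might look like this (the module names are real; the file path, broker address, and topic are placeholders):

module(load="imfile")          # read log lines from plain files
module(load="omkafka")         # forward messages to Apache Kafka

# Tail an application log file (path and tag are placeholders)
input(type="imfile"
      File="/var/log/myapp/app.log"
      Tag="myapp:")

# Render each message as a JSON line
template(name="json_lines" type="string"
         string="{\"timestamp\":\"%timereported:::date-rfc3339%\",\"host\":\"%hostname%\",\"tag\":\"%syslogtag%\",\"message\":\"%msg:::json%\"}")

# Ship everything to a Kafka topic (broker and topic are placeholders)
action(type="omkafka"
       broker=["kafka1.example.com:9092"]
       topic="logs"
       template="json_lines")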
Upvotes: 1
Reputation: 255
A recent Python package may be useful: journalpump, with support for Elasticsearch, Kafka, and logplex outputs.
Upvotes: 0
Reputation: 131
Kelsey Hightower's journal-2-logentries has worked pretty well for us: https://logentries.com/doc/coreos/
If you want to drop in and enable the units without Fleet:
#!/bin/bash
#
# Requires the Logentries Token as Parameter
if [ -z "$1" ]; then echo "You need to provide the Logentries Token!"; exit 1; fi
cat << "EOU1" > /etc/systemd/system/systemd-journal-gatewayd.socket
[Unit]
Description=Journal Gateway Service Socket
[Socket]
ListenStream=/run/journald.sock
Service=systemd-journal-gatewayd.service
[Install]
WantedBy=sockets.target
EOU1
cat << EOU2 > /etc/systemd/system/journal-2-logentries.service
[Unit]
Description=Forward Systemd Journal to logentries.com
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
Restart=on-failure
RestartSec=5
ExecStartPre=-/usr/bin/docker kill journal-2-logentries
ExecStartPre=-/usr/bin/docker rm journal-2-logentries
ExecStartPre=/usr/bin/docker pull quay.io/kelseyhightower/journal-2-logentries
ExecStart=/usr/bin/bash -c \
"/usr/bin/docker run --name journal-2-logentries \
-v /run/journald.sock:/run/journald.sock \
-e LOGENTRIES_TOKEN=$1 \
quay.io/kelseyhightower/journal-2-logentries"
[Install]
WantedBy=multi-user.target
EOU2
systemctl enable systemd-journal-gatewayd.socket
systemctl start systemd-journal-gatewayd.socket
systemctl start journal-2-logentries.service
rm -f $0
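Save the script under any name (the filename below is just an example) and run it once per host with your Logentries token; note that it deletes itself afterwards (rm -f $0):

bash setup-journal-2-logentries.sh <your-logentries-token>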
Upvotes: 0
Reputation: 487
A downside to using -o short is that the format is hard to parse; short-iso is better. If you're using an ELK stack, exporting as JSON is even better. A systemd service like the following will ship JSON-formatted logs to a remote host quite well.
[Unit]
Description=Send Journalctl to Syslog
[Service]
TimeoutStartSec=0
ExecStart=/bin/sh -c '/usr/bin/journalctl -o json -f | /usr/bin/ncat syslog 515'
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
On the far side, logstash.conf for me includes:
input {
  tcp {
    port => 1515
    codec => json_lines
    type => "systemd"
  }
}
filter {
  if [type] == "systemd" {
    mutate { rename => [ "MESSAGE", "message" ] }
    mutate { rename => [ "_SYSTEMD_UNIT", "program" ] }
  }
}
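For reference, each line that journalctl -o json emits is a flat JSON object whose keys are the standard journal fields, roughly like this (the values here are made up):

{ "__REALTIME_TIMESTAMP": "1430000000000000", "_HOSTNAME": "core-1", "PRIORITY": "6", "_SYSTEMD_UNIT": "nginx.service", "MESSAGE": "GET /healthz 200" }

which is why the filter above renames MESSAGE and _SYSTEMD_UNIT to the friendlier message and program fields.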
This results in the whole journalctl data structure being available to Kibana/Elasticsearch.
Upvotes: 7
Reputation: 2790
systemd, as of version 216, includes remote logging capabilities via a client/server process pair.
http://www.freedesktop.org/software/systemd/man/systemd-journal-remote.html
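A minimal sketch of wiring the pair together (the server address is a placeholder; see the more detailed answer above for hardened unit files):

# On the central server: listen for uploads (default port 19532)
systemctl enable --now systemd-journal-remote.socket

# On each client: point systemd-journal-upload at the server
# /etc/systemd/journal-upload.conf
[Upload]
URL=http://logserver.example.com:19532

systemctl enable --now systemd-journal-upload.service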
Upvotes: 12