Reputation: 179
I have some services managed by Kubernetes, and each service has some number of pods. I want to develop a new service that analyzes the logs of all the other services. To do this, I first need to ship the logs to my new service, but I don't know how.
I can divide my question into two parts.
1- How should I access/read the logs? Should I read from /var/log, or run each app through a pipe like this:
./app | myprogram
so that myprogram receives the logs of app on standard input.
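For clarity, here is a minimal sketch of what I have in mind for myprogram (the function name and sample lines are just placeholders):

```python
import io
import sys

def consume(stream):
    """Collect non-empty log lines from a text stream such as sys.stdin."""
    return [line.rstrip("\n") for line in stream if line.strip()]

# In myprogram this would be consume(sys.stdin); simulated here with a StringIO
# so the sketch is self-contained.
lines = consume(io.StringIO("GET /health 200\nGET /api 500\n"))
print(lines)
```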
2- How can I send the logs to another service? My options are gRPC and Kafka (or RabbitMQ).
Using a CephFS volume could also be a solution, but that seems to be an anti-pattern (see How to share storage between Kubernetes pods?).
Upvotes: 0
Views: 209
Reputation: 750
Below is the basic workflow for collecting logs from your pods and sending them to a logging tool. I use Fluent Bit (open source) as the example, but you can use tools like Fluentd, Logstash, or Filebeat instead.
Pod logs are stored at a specific path on each node -> Fluent Bit runs as a DaemonSet and collects the logs from the nodes using its input plugins -> Fluent Bit's output plugins then send the logs to a logging tool (Elastic, Datadog, Logiq, etc.)
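The "runs as a DaemonSet" step can be sketched roughly as below. This is illustrative only, not production-ready; the namespace, labels, and image tag are assumptions, and the Helm chart or manifests from the links that follow are the proper way to install:

```yaml
# Minimal sketch of a Fluent Bit DaemonSet (one pod per node).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging          # assumed namespace
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2   # pin a concrete tag in practice
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true     # mounts the node's logs so the tail input can read them
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```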
Fluent Bit is an open-source log shipper and processor that collects data from multiple sources and forwards it to different destinations. Fluent Bit has various input plugins that can collect log data from specific paths or ports, and output plugins that forward the logs to Elastic or any other log collector. Please follow one of the guides below to install Fluent Bit:
https://medium.com/kubernetes-tutorials/exporting-kubernetes-logs-to-elasticsearch-using-fluent-bit-758e8de606af or https://docs.fluentbit.io/manual/installation/kubernetes
To get you started, below is an example of how your input plugin configuration should look. It uses the tail plugin; notice the Path, which is where container logs are stored on the nodes of the Kubernetes cluster: https://docs.fluentbit.io/manual/pipeline/inputs
The Parser can be changed according to your requirements or the format of your logs.
[INPUT]
    Name              tail
    Tag               kube.*
    Path              /var/log/containers/*.log
    Parser            docker
    DB                /var/log/flb_kube.db
    Skip_Long_Lines   On
    Refresh_Interval  60
    Mem_Buf_Limit     1MB
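As an aside on the Parser line: with the Docker runtime, each line under /var/log/containers is a JSON record, which is what the docker parser decodes (containerd/CRI-O use a different line format and need the cri parser instead). A rough illustration of that record shape, with made-up sample values:

```python
import json

# One line of a Docker container log file: JSON with "log", "stream", "time".
sample_line = '{"log":"hello from app\\n","stream":"stdout","time":"2021-01-01T00:00:00.000000000Z"}'

def parse_docker_line(line):
    """Decode one Docker JSON log line into a dict (simplified sketch)."""
    record = json.loads(line)
    return {
        "message": record["log"].rstrip("\n"),
        "stream": record["stream"],
        "time": record["time"],
    }

print(parse_docker_line(sample_line))
```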
Below is an example of an output plugin configuration. This one is the http plugin, pointed at wherever your log collector is listening; there are various output plugins to choose from depending on the logging tool you pick: https://docs.fluentbit.io/manual/pipeline/outputs
The example below uses the http plugin to send data to an HTTP (80/443) endpoint.
[OUTPUT]
    Name           http
    Match          *
    Host           <Hostname>
    Port           80
    URI            /v1/json_batch
    Format         json
    tls            off
    tls.verify     off
    net.keepalive  off
    compress       gzip
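Since Kafka was one of the options in the question: Fluent Bit also ships a kafka output plugin, so the logs can go straight to a broker that your analysis service consumes from. The broker address and topic below are placeholders:

```
[OUTPUT]
    Name     kafka
    Match    *
    Brokers  my-broker:9092
    Topics   app-logs
```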
Below is an output configuration for Elasticsearch.
[OUTPUT]
    Name             es
    Match            *
    Host             <hostname>
    Port             <port>
    HTTP_User        <user-name>
    HTTP_Passwd      <password>
    Logstash_Format  On
    Retry_Limit      False
Upvotes: 1