Reputation: 11
I have configured NLog with Elasticsearch in my ASP.NET Core application. My nlog.config looks like this:
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Trace"
      internalLogFile="\\{ipmachine}\c\Log\internal-nlog.txt">

  <extensions>
    <add assembly="NLog.Targets.ElasticSearch"/>
  </extensions>

  <targets>
    <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000">
      <target xsi:type="ElasticSearch"/>
    </target>
  </targets>

  <rules>
    <logger name="*" minlevel="Debug" writeTo="ElasticSearch" />
  </rules>
</nlog>
And in appsettings.json I have: "ElasticsearchUrl": "http://localhost:9200".
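As far as I know the address can also be set directly on the target; a sketch with the same endpoint, assuming the uri attribute from the NLog.Targets.ElasticSearch readme:

<target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000">
  <!-- uri set explicitly instead of being read from appsettings -->
  <target xsi:type="ElasticSearch" uri="http://localhost:9200"/>
</target>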
When I run the application with dotnet run against an ELK stack raised in Docker (https://github.com/deviantony/docker-elk), everything works and all logs are saved. But when I put my application into a Docker image, it no longer works. I tried configuring both services on the same network in docker-compose, and I tried the same with links:
...
elasticsearch:
  ...
  networks:
    - elk
...
myapp:
  networks:
    - elk
  depends_on:
    - elasticsearch

networks:
  elk:
    driver: bridge
I even checked the IP of elasticsearch in Docker and changed appsettings to "ElasticsearchUrl": "http://172.21.0.2:9200".
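A sanity check I can run from inside the app container looks like this (a sketch: myapp is my service name and curl has to exist in the image):

# resolve and reach Elasticsearch through the shared elk network;
# "elasticsearch" is the service name from docker-compose.yml
docker exec -it myapp curl -s http://elasticsearch:9200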
Then I added Filebeat to docker-compose:
filebeat:
  image: docker.elastic.co/beats/filebeat:6.3.2
  #command: "filebeat -e -c /etc/filebeat/filebeat.yml"
  environment:
    HOSTNAME: "my-server"
    LOGSTASH_HOST: "localhost"
    LOGSTASH_PORT: "5044"
  volumes:
    - "./filebeat/config/filebeat.yml:/etc/filebeat/filebeat.yml:rw"
with this filebeat.yml:
output:
  logstash:
    enabled: true
    hosts:
      - elk:5000
    ssl:
      certificate_authorities:
        - /etc/pki/tls/certs/logstash-beats.crt
    timeout: 15

filebeat:
  prospectors:
    -
      paths:
        - /var/log/syslog
        - /var/log/auth.log
      document_type: syslog
    -
      paths:
        - "/var/log/nginx/*.log"
      document_type: nginx-access
The logs are still not saved in Elasticsearch; it only works when I run the application outside Docker. I am asking for advice.
My ports: (screenshot of the container port mappings not included)
Upvotes: 1
Views: 2553
Reputation: 263469
And in appsettings.json I have: "ElasticsearchUrl": "http://localhost:9200"

That is incorrect for container networking: localhost there refers only to your own container, not to the host and not to other containers. With a user-created network and a service named elasticsearch in your compose file, you need to connect with "ElasticsearchUrl": "http://elasticsearch:9200".
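A minimal sketch of the compose side, assuming your app reads ElasticsearchUrl from configuration (with the default ASP.NET Core host builder, an environment variable of the same name overrides the value from appsettings.json):

myapp:
  networks:
    - elk
  depends_on:
    - elasticsearch
  environment:
    # illustrative: the default configuration providers map this variable
    # onto the ElasticsearchUrl key, replacing the localhost value
    - ElasticsearchUrl=http://elasticsearch:9200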
Then I added Filebeat to docker-compose
I'd recommend going in this direction. Remove the log handling from each application and centralize the collection of all container logs, shipping them from the Docker engine to Elasticsearch. To do this, just have your applications log to stdout, not directly to Elasticsearch.
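On the application side that can be as simple as a console target. A minimal nlog.config sketch (not your original config, just the idea; the Console target is standard NLog):

<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd">
  <targets>
    <!-- write to stdout; the Docker logging driver captures it from there -->
    <target name="console" xsi:type="Console" />
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="console" />
  </rules>
</nlog>

In swarm mode, I deploy filebeat with this section of my compose file: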
filebeat:
  image: docker.elastic.co/beats/filebeat:${ELASTIC_VER:-6.2.4}
  deploy:
    mode: global
  configs:
    - source: filebeat_yml
      target: /usr/share/filebeat/filebeat.yml
      mode: 0444
    - source: filebeat_prospector_yml
      target: /usr/share/filebeat/prospectors.d/default.yml
      mode: 0444
  volumes:
    - '/var/run:/host/var/run:ro'
    - '/var/lib/docker:/host/var/lib/docker:ro'
My filebeat.yml file contains:
filebeat.config:
  prospectors:
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

#processors:
#- add_cloud_metadata:

output.elasticsearch:
  hosts: ['elasticsearch:9200']
  username: elastic
  password: changeme
And then the prospector default.yml is configured to pull the logs from every container:
- type: log
  paths:
    - '/host/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
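Once this is running you can check that documents are arriving, using the credentials from the filebeat.yml above (assumes port 9200 is published to the host):

# a filebeat-* index should show up once container logs start flowing
curl -u elastic:changeme 'http://localhost:9200/_cat/indices?v'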
Note that I'm using configs, which are a swarm mode feature, but you can easily switch to mounting these files into the container as volumes.
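Outside of swarm mode, the volume-based equivalent would look something like this (the host-side paths are illustrative, use wherever you keep the two files):

filebeat:
  image: docker.elastic.co/beats/filebeat:6.2.4
  volumes:
    # the same two files as the configs above, mounted from the host
    - './filebeat.yml:/usr/share/filebeat/filebeat.yml:ro'
    - './prospectors.d/default.yml:/usr/share/filebeat/prospectors.d/default.yml:ro'
    - '/var/run:/host/var/run:ro'
    - '/var/lib/docker:/host/var/lib/docker:ro'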
Upvotes: 1