Reputation: 2566
I'm using the ELK Docker images with the docker-compose file below to start the ELK containers and store the data in a volume.
version: '3.5'

services:

  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk

  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge
Kibana version 6.6.0
I used the command below to start them:
docker-compose up -d
All three containers are up and running, and I am able to publish my data into Kibana and visualize it.
I had a situation where I needed to bring this compose setup down and start it up again, but when I do that, all the earlier Kibana data is lost.
docker-compose down
Is there any way to permanently store those records on the machine (somewhere on the Linux box) as a backup, or in some database?
Please help me with this.
Upvotes: 0
Views: 1566
Reputation: 4913
You have to mount the Elasticsearch data directory (/usr/share/elasticsearch/data) to a persistent Docker volume. Without this, the indices live only in the container's writable layer, which is removed together with the container when you run docker-compose down. For example:
elasticsearch:
  build:
    context: elasticsearch/
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    - ./es_data:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
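If you'd rather let Docker manage the storage than bind-mount a host directory, a named volume also survives docker-compose down. A minimal sketch, assuming the volume name es_data is otherwise unused in your project:

services:
  elasticsearch:
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      # Named volume managed by Docker; it persists across docker-compose down
      - es_data:/usr/share/elasticsearch/data

# Top-level declaration of the named volume
volumes:
  es_data:

Either way, the data is only lost if you delete the host directory or run docker-compose down -v, which also removes named volumes. With the bind-mount variant, you may need to make ./es_data writable by the container user first (for the official Elasticsearch image, e.g. chown -R 1000:1000 ./es_data).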
Upvotes: 1