Reputation: 1172
So I'm building a swarm of Elasticsearch nodes, and ideally I would like to see two things happen.
This is what I'm doing:
docker volume create --opt type=none --opt device=/mnt/data --opt o=bind --name=elastic-data
docker-compose.yml
version: '3'
services:
  elastic-node1:
    image: amazon/opendistro-for-elasticsearch:0.8.0
    environment:
      - cluster.name=elastic-cluster
      - bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms32g -Xmx32g"
      - opendistro_security.ssl.http.enabled=false
      - discovery.zen.minimum_master_nodes=1
    volumes:
      - elastic-data:/mnt/data
    ports:
      - 9200:9200
      - 9600:9600
      - 2212:2212
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elastic-net
    deploy:
      mode: replicated
      replicas: 1
networks:
  elastic-net:
volumes:
  elastic-data:
    external: true
Then I start the stack, post some data, remove the stack, and start it again, but the data is not being retained.
docker stack deploy --compose-file docker-compose.yml opendistrostack
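To check whether anything actually lands in the bind-mounted directory between deployments, the volume and the host path can be inspected like this (paths as in the commands above):

docker volume inspect elastic-data
ls -la /mnt/data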
I'm a little confused about volumes, and I haven't been able to find good documentation with a detailed explanation of each use case. Could you point me in the right direction?
Thanks.
Upvotes: 1
Views: 2111
Reputation: 397
As it stands, Docker Swarm only supports the local volume driver, so you will always get fresh data whenever the container is created on a new host.
A common technique is to use a shared volume/filesystem. I'd suggest GlusterFS: it is a distributed and highly scalable filesystem, easy to get started with, and well documented for Swarm use cases.
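As a rough sketch (assuming every Swarm node mounts the same GlusterFS volume at /mnt/gluster/elastic; that path and the names are placeholders), the named volume can then bind to the shared path, so a task sees the same data no matter which node it is scheduled on:

volumes:
  elastic-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/gluster/elastic

The directory has to exist on every node before the stack is deployed, otherwise the bind mount will fail on that node.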
Furthermore, you can check out some third-party volume drivers in the Docker Store.
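For example, something along these lines with the vieux/sshfs plugin (the remote host and path are placeholders; other plugins follow the same pattern):

docker plugin install --grant-all-permissions vieux/sshfs
docker volume create -d vieux/sshfs -o sshcmd=user@remotehost:/remote/path elastic-data

The service then uses elastic-data as a normal named volume, and the plugin takes care of reaching the shared storage from whichever node the task runs on.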
Upvotes: 1