Reputation: 11
I created a new docker network on my AWS EC2 instance by running
docker network create testnet
I have the following docker-compose file:
version: '2'
services:
  mongodb:
    image: mongo:3
    container_name: mongodb
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - mongodb_data_db:/data/db
    ports:
      - 27017:27017
    command: mongod --smallfiles --logpath=/dev/null --replSet rs0 # --quiet
volumes:
  mongodb_data_db:
networks:
  default:
    external:
      name: testnet
A second container running on the same network is trying to connect to 'mongodb', using this docker-compose file:
version: "2"
services:
  monstache:
    image: rwynn/monstache
    container_name: monstache
    command: -mongo-url=mongodb -elasticsearch-url=http://elasticsearch:9200 -direct-read-namespace=db.heartbeat -direct-read-split-max=2
networks:
  default:
    external:
      name: testnet
This setup worked until the last time AWS decided to reboot my instance. After that I had to restart all containers, but since then I get an error message from the monstache container saying:
Unable to connect to MongoDB using URL MongoDB: timed out after 15 seconds
Which means that I somehow cannot reach the MongoDB container anymore. Another container on the network cannot connect to 'mongodb' either, so I don't think it's only a problem with the 'monstache' container; it seems like something has changed in my Docker setup in general. At least, when I run
docker network inspect testnet
I can see that all containers are listed.
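For what it's worth, this is roughly how name resolution and reachability on the network can be sanity-checked (just a sketch; it assumes the busybox and mongo:3 images can be pulled):
# quick checks from a throwaway container attached to testnet
docker run --rm --network testnet busybox ping -c 2 mongodb
docker run --rm --network testnet mongo:3 mongo --host mongodb --eval "db.runCommand({ ping: 1 })"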
What I have done so far:
I really need help because I have been stuck on this for 2 days now :(
UPDATE: docker-compose.yml for Elasticsearch (and Kibana)
version: "2"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.1
    container_name: elasticsearch
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.1
    container_name: kibana
    depends_on:
      - elasticsearch
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - 5601:5601
volumes:
  elasticsearch:
    driver: local
networks:
  default:
    external:
      name: testnet
UPDATE: docker network inspect testnet
:~$ docker network inspect testnet
[
    {
        "Name": "testnet",
        "Id": "448018003d92c8802dd701931e21da018618abce360a147808a5c6b4b51f4b6d",
        "Created": "2018-10-08T12:40:10.163231318Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "34df14ddf4ef004115b6e66b35177356a7c0c5e5d0d94d2c05406aa61cd1d744": {
                "Name": "kibana",
                "EndpointID": "bb38deafbd1929d268ba55c8fb28064d9b0afe7bbfb95289a6893ca62f91ff8b",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "95034d04c4f6c07527f725436a84b20a1514d8aaf70d4e19c54344eb07c7632f": {
                "Name": "elasticsearch",
                "EndpointID": "269e42333b20dd01152f58329c87060059471a8ea68e3cd97cb45c502b102879",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "c3153881f2a8925bb74718afa9b33c5e9cfcc10f58b2fa7a5157e45b83bea343": {
                "Name": "mongodb",
                "EndpointID": "44c6ac5755897c056d7285eba83a0934e1871b6c2ca671cbbe846fc55e23ff3e",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Upvotes: 0
Views: 2079
Reputation: 1158
Try this, order matters, basically starting fresh (copy and paste all at once):
# stop and remove the compose stack
docker-compose kill
docker-compose down
# remove all remaining containers, networks and volumes
# (the predefined bridge/host/none networks cannot be removed and will just report errors)
docker rm $(docker ps -aq)
docker network rm $(docker network ls -q)
docker volume rm $(docker volume ls --format {{.Name}})
# recreate everything from the compose file
docker-compose up --force-recreate
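If wiping the data volumes is not acceptable, a less destructive variant (my own sketch, it will not clear volume state) skips the container and volume removal:
docker-compose down
docker network prune -f   # removes only unused user-defined networks
docker-compose up -d --force-recreate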
If possible, you should avoid using IP addresses for replica set members.
Hostnames
Use a logical DNS hostname instead of an ip address, particularly when configuring replica set members or sharded cluster members. The use of logical DNS hostnames avoids configuration changes due to ip address changes.
More info --> replica-set-architectures
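For instance, a single-member set matching the setup in the question could be initiated with an explicit hostname-based config (a sketch; the member list is an assumption based on the compose file above):
# run inside the mongodb container; uses the service name, not an IP
mongo --eval 'rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "mongodb:27017" }] })'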
You shouldn't need to initiate the replica set again after a reboot, unless all nodes were shut down at the same time. I'm assuming that is what happened, since all containers are running on one instance.
If you leave one node up, and replSet is set in the config file, the others will rejoin automagically.
For high availability, consider running the replica set in Swarm. Here is a nice walkthrough of setting up a replica set using Swarm.
Upvotes: 0
Reputation: 11
I have fixed my problem.
Since I am starting mongod with
--replSet rs0
I somehow had to re-initiate the MongoDB replica set after the reboot. After running
mongo --eval "rs.initiate()"
inside the mongodb container, I was able to connect from the other containers via the 'mongodb' service name again.
That's also the reason why it only happened after the reboot. It seems that I have to re-initiate the replica set every time the instance reboots. I actually thought this would be preserved in the MongoDB data volume, so a reboot would not affect it... but it seems I was wrong :)
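As a side note (a sketch, not part of my actual fix): the manual step can be made idempotent so it can simply be run after every reboot.
# initiates the replica set only if no config exists yet; prints 1 either way
docker exec mongodb mongo --quiet --eval "rs.status().ok || rs.initiate().ok"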
Thank you all for your time.
Upvotes: 1