Reputation: 11661
I'm developing a big architecture, split up into separate parts.
Each part uses services (e.g. Redis) and other projects.
I set up an environment where I can run all the services I need in Docker containers with appropriate port mappings, so that duplicate services don't clash.
Now this all works fine if I run my own architecture directly on my PC. But now I'm also running my architecture in/as Docker containers (preparing for production), and trying to run these in my system. At first they were unable to reach the already set up containers (the services). This I solved by running my own architecture's Docker containers with --network host.
Now all my containers are running great, but I can't seem to reach them when I go to http://localhost:80 (one of the containers is running on port 80). The other containers on other ports are also not reachable in this way. Did I do something wrong? Is there a way to reach them?
I'm running Docker on Windows 10 Pro. (Note: Docker 1.12.5; updating to 1.12.6 crashes somehow.)
Upvotes: 1
Views: 935
Reputation: 56538
Using --network host just attaches your host's network interfaces into the containers. It doesn't necessarily allow port traffic through the system firewall.
If you bind the container ports, e.g. run the containers with -p <host_port>:<container_port>, Docker should adjust firewall rules accordingly and make it all work. (When the container stops, it should clean up after itself as well.)
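For example, something like the following should make the containers reachable from the host (the image names myapp-web and redis are placeholders standing in for your own images and services):

```shell
# Publish the web container's port 80 on the host's port 80,
# so http://localhost:80 reaches it from the host.
docker run -d -p 80:80 myapp-web

# Publish a second service on a different host port to avoid a clash,
# e.g. a Redis container mapped to host port 6380 instead of 6379.
docker run -d -p 6380:6379 redis
```

With -p, Docker sets up the port forwarding for you, so you don't need --network host at all.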
Alternatively, you can use Docker overlay networks, which is usually the suggested solution in this case.
First, create a network for your application.
docker network create myapp
Then, tell each container to use that network.
docker run --network myapp <other options...>
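Put together, a minimal sketch might look like this (again, myapp-web is a placeholder image name; redis is the official Redis image):

```shell
# Create a user-defined network for the application.
docker network create myapp

# Start Redis on that network. Other containers on "myapp" can reach it
# by its container name ("redis") through Docker's built-in DNS.
docker run -d --network myapp --name redis redis

# Start the application container on the same network, and still publish
# port 80 so the host itself can reach it at http://localhost:80.
docker run -d --network myapp --name web -p 80:80 myapp-web
```

Containers on the same network talk to each other by name (e.g. the app connects to redis:6379), while -p remains the way to expose a port to the host.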
Upvotes: 2