Reputation: 262504
Is it possible to expose a port from one Docker container to another one (or several other ones), without exposing it to the host?
Upvotes: 3
Views: 4451
Reputation: 262504
I found an alternative to container linking: you can define custom "networks" and tell each container to use them with the --net option.
For example, if your containers are intended to be deployed together as a unit anyway, you can have them all share the same network stack (using --net container:oneOfThem). That way you don't even need to configure host names for them to find each other; they can simply share the same 127.0.0.1, and nothing is exposed to the outside.
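A minimal sketch of the shared-stack setup (the container names and the worker image are illustrative, not from the thread): one container owns the network stack, and a second joins it with --net container:, after which both see the same loopback interface.

```shell
# Start the container that will own the network stack
# (using Redis here purely as an example service).
docker run -d --name app-net redis

# Join its network namespace: this container shares app-net's
# interfaces and ports, so it can reach Redis on 127.0.0.1:6379.
# "my-worker-image" is a hypothetical image name.
docker run -d --name worker --net container:app-net my-worker-image
```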
Of course, that way they expose all their ports to each other, and you must be careful to avoid conflicts (they cannot both listen on port 8080, for example). If that is a concern, you can still use --net, not to share the same network stack, but to set up a more complex overlay network.
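For that second approach, a sketch using a user-defined network (available since Docker 1.9; the network name is illustrative): containers attached to the same network can reach each other by container name, each with its own network stack, and nothing is reachable from the host unless a port is explicitly published.

```shell
# Create an isolated bridge network (the name "backend" is illustrative).
docker network create backend

# Containers attached to it can resolve each other by name
# (e.g. "db:5432" from inside "web"); no ports are published to the host.
docker run -d --name db --net backend training/postgres
docker run -d --name web --net backend training/webapp python app.py
```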
Finally, the --net option can also be used to have a container run directly on the host's network.
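A sketch of that host-networking mode (the image choice is illustrative): the container uses the host's interfaces directly, so anything it listens on is reachable exactly as if the process ran on the host, with no port mapping involved.

```shell
# The container shares the host's network interfaces, so a server
# listening on port 80 inside it is reachable at the host's port 80.
# Note that -p/--publish has no effect in this mode.
docker run -d --name proxy --net host nginx
```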
Very flexible tool.
Upvotes: 2
Reputation: 4590
Yes, you can link containers together, and the ports are then exposed only to those linked containers, without publishing them to the host.
For example, if you have a Docker container running a PostgreSQL db:
$ docker run -d --name db training/postgres
You can link to another container running your web application:
$ docker run -d --name web --link db training/webapp python app.py
The container running your web application will have a set of environment variables describing the ports exposed by the db container, for example:
DB_PORT_5432_TCP_PORT=5432
The environment variables are named after the linked container; in this case the container is named db, so the variables start with DB_.
You can find more details in the Docker documentation here:
https://docs.docker.com/v1.8/userguide/dockerlinks/
Upvotes: 6