Reputation: 8865
I'm currently reading through the Docker documentation on the Dockerfile. There is a part where you define how ports in the container are exposed. While reading that description I realized I have trouble understanding a particular difference here.
from: https://docs.docker.com/engine/reference/builder/#expose
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the ports of the container accessible to the host.
What is the difference between a container (or server application) listening on a port and that port being made accessible?
If the application is listening on a port, I can, for example, send an HTTP request to it and it will answer me, right? Isn't that some kind of access I have to it as the host (which defines the outside context here)?
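To make it concrete, a Dockerfile along these lines is what I have in mind (a made-up minimal example; the base image and port are arbitrary):
FROM node:18
COPY server.js .
EXPOSE 8080
CMD ["node", "server.js"]
The server inside the container listens on 8080, so what more does "making it accessible" involve?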
Upvotes: 3
Views: 2353
Reputation: 1199
The documentation is a bit confusing.
With the default bridge network, all containers running on a host and the host itself have access to each other (i.e. all ports) through their internal (bridge) IP addresses (unless, of course, a firewall on the host or in the containers is configured to prevent access). This behavior is independent of the EXPOSE, -p and -P options.
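A quick way to see this on a Linux host (a sketch; nginx is just a stand-in for any image that listens on port 80, and the bridge IP will differ on your machine):
docker run -d --name web nginx
# Look up the container's internal bridge IP, then curl it directly from the host.
# Neither EXPOSE nor -p/-P is involved in making this work.
docker inspect -f '{{.NetworkSettings.IPAddress}}' web
curl http://$(docker inspect -f '{{.NetworkSettings.IPAddress}}' web)/
(On Docker Desktop for Mac/Windows the bridge network is not reachable from the host, so this particular check only applies where the docker0 bridge lives on the host itself.)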
Since the default bridge network is internal to a host, other hosts on your external network will not be able to reach your containers via that internal network. This is where the -p and -P options come in: they expose a port on your host's external interface(s) that is forwarded to a container.
The -p option requires that you specify the container port to forward to, but specifying the host port is optional (host ports are randomly selected if not specified).
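For example (a sketch, again using nginx as a placeholder for any image listening on port 80):
docker run -d --name fixed -p 8080:80 nginx   # host port 8080 -> container port 80
docker run -d --name random -p 80 nginx       # container port 80, host port chosen at random
docker port random 80                         # prints the chosen host port, e.g. 0.0.0.0:32768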
The -P option relies on EXPOSEd ports to automatically set up port forwarding to a container's listening ports. Host ports are selected automatically.
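So, assuming an image whose only EXPOSEd port is 80 (the official nginx image is one such case), these two invocations end up with the same effect:
docker run -d --name by_expose -P nginx   # publishes every EXPOSEd port to a random host port
docker run -d --name by_flag -p 80 nginx  # publishes port 80 explicitly, host port still random
docker port by_expose                     # lists the mapping(s) that were created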
Upvotes: 0
Reputation: 5232
EXPOSE just provides a hint (information) about which ports are being exposed by the image. Assuming you are asking about a bridged (default) container: containers are isolated and not accessible from the host network, being protected by the host's firewall. So if you are interested in inbound traffic, you need to create a mapping between the host network and the container interface. Think of it like opening a window to the outside world on a particular port.
Let's say that the image we are interested in exposes ports 5000 and 6000, and you would like to map your container ports to the outside world.
Using -P (--publish-all) you can ask the Docker daemon to create mappings for all ports that the image exposes, or using -p you can assign mappings selectively.
For example :
docker run -d --name my_app -p 5000 -p 6000 my_image // this will map both exposed ports
which is the same as
docker run -d --name my_app -P my_image // this will map all exposed ports (5000 and 6000)
or you can even add an additional port to be exposed
docker run -d --name my_app --expose 8000 -P my_image // now 5000, 6000, 8000 are mapped
or you can map a container port to a specific host port, for example:
docker run ... -p 3000:4000 ... // host_port:container_port
Once mapped, you can see all port mappings with
docker port my_app
(or the container ID instead of my_app)
This will give you something like this:
5000/tcp -> 0.0.0.0:32773
6000/tcp -> 0.0.0.0:32772
8000/tcp -> 0.0.0.0:32771
Upvotes: 2
Reputation: 473
It will listen on the port, but will not be accessible from the host machine by default. You can pass in -P, as in docker run -P my-docker-image, to start a container with ports accessible to the host machine, but you still have to inspect the running containers (docker ps) to see which port to use when sending requests from the host.
This is so you have control over which host port it is reachable on when running the image, rather than it being a hardcoded value. You can spin up two containers from the same image, and separate host ports will be mapped to each container/image port that's exposed.
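As a sketch (my-docker-image stands in for any image with an EXPOSEd port, say 80):
docker run -d --name app1 -P my-docker-image
docker run -d --name app2 -P my-docker-image
docker ps --format '{{.Names}}\t{{.Ports}}'
# e.g.  app1   0.0.0.0:32768->80/tcp
#       app2   0.0.0.0:32769->80/tcp
Both containers listen on port 80 internally, but each gets its own host port.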
Upvotes: 1