Shukri Adams

Reputation: 921

Is it possible to run multiple nginx docker containers on the same host?

Is it possible, using only docker-compose, to run multiple nginx containers on the same host machine, with each container sharing port 80?

I want to tidy up a system that runs multiple applications on the same host. Each application consists of several linked docker containers tied together with a docker-compose file, and each application is exposed to the world using the host system's nginx as a reverse proxy. So far, so good.

Each time I add a new docker application, I have to add a new nginx.conf file for that application to the host nginx, but I'd prefer bundling the nginx config with the app's docker-compose file as an nginx container, and thus have each app cleanly maintain everything it needs in containers. However, each nginx container needs to listen on port 80, so only the first one can bind. The host nginx can listen for several web applications on port 80, but can multiple nginx instances do the same?

UPDATED:

So it seems this isn't strictly possible. The goal is to keep as much application-specific nginx config as possible bundled with the application, so I'm trying a solution where an app still spins up its own nginx container with that logic, while the host nginx handles only URL routing to the app's nginx. I'm not sure about performance, but this greatly reduces the app's entanglement with the host.
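Roughly, the layout I have in mind looks like this (the service names and the 8081 host port below are just placeholders): the app's compose file publishes its bundled nginx on a non-standard host port, and the host nginx proxies each app's hostname or path to that port.

    # docker-compose.yml for one app (hypothetical names and ports)
    version: "3"
    services:
      app:
        image: myapp:latest              # the application itself
      nginx:
        image: nginx:latest
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro   # app-specific nginx config
        ports:
          - "8081:80"                    # host nginx proxies to this port

The host nginx then only needs one small proxy_pass rule per app, pointing at whatever port that app published.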

Upvotes: 21

Views: 20734

Answers (3)

Nonya

Reputation: 3

This is possible. Rather than running NPM (Nginx Proxy Manager) in a Docker container, you configure nginx on the host itself and, as in the answer above, use proxy_pass to "forward" the port 80/443 requests to whichever nginx container (nginx1, nginx2, nginx3, ...) matches the name of the destination host. That way one WAN interface can front an arbitrary number of nginx containers, proxied by an actual machine running nginx (which could be the same machine that runs Docker, or a different one).

Docker is not a web server - it is a "container farm", and each container can run its own web server, all of them using port 80/443 internally. Without a proxy you would need a separate static IP, or a separate domain name, for each one. With one domain and one IP, you can route traffic to the containers via DNS SRV records that map a port to a subdomain (e.g. an SRV record for portainer.example.com pointing to :9002, so requests go to example.com:9002).

More common is to run Nginx Proxy Manager itself in a container: forward port 80 on your router to port 8080 on your 192.168.1.x server, and the NPM container running in Docker will proxy your requests through to the "invisible" containers behind it that you want to access, all "magically".
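As a sketch of that proxy_pass idea (the hostnames and ports here are made up), the host nginx routes by destination hostname to whichever port each container published:

    # host nginx: route by destination hostname (hypothetical names and ports)
    server {
        listen 80;
        server_name app1.example.com;
        location / {
            proxy_pass http://127.0.0.1:8081;   # nginx container for app1
        }
    }

    server {
        listen 80;
        server_name app2.example.com;
        location / {
            proxy_pass http://127.0.0.1:8082;   # nginx container for app2
        }
    }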

Upvotes: 0

Wes

Reputation: 539

Really old post, but for those arriving from search, you might be able to do this in 2023 by using this compose config:


    version: "3"
    services:
      nginx:
        image: nginx:latest
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
        ports:
          - "80"

Notice:


    ports:
      - "80"

This is not a mistake. Instead of the typical host:container mapping, e.g. 80:80, specifying only the container port tells Docker to publish it on an ephemeral host port chosen at run time, so several instances of the service can run without all competing for host port 80; inside the compose network, Docker's embedded DNS resolves the service name across the running instances in round-robin fashion. So to run three instances, you should in theory be able to do this:

docker-compose up --scale nginx=3

That should run 3 instances of nginx. If you define another service in your compose file, expressjs for example, you can take advantage of the same strategy by using a simple nginx config that relies on Docker's embedded DNS.


    # nginx.conf
    user  nginx;

    events {
        worker_connections  1000;
    }

    http {
        server {
            listen 80;

            location / {
                proxy_pass http://expressjs:3000;
            }
        }
    }

And then define your compose like this:


    version: "3"

    services:
      expressjs:
        image: expressjs:latest
        ports:
          - "3000"
      nginx:
        image: nginx:latest
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
        ports:
          - "80"
        depends_on:
          # nginx proxies to expressjs, so the backend must exist first
          - expressjs

I haven't tested this but it should work in theory.
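If you experiment with this, one way to see which ephemeral host ports Docker assigned to each instance (a rough sketch, using the service names from the compose file above) is:

    # start the stack with several backend instances
    docker-compose up -d --scale expressjs=3

    # list each container together with its host:container port mapping
    docker-compose ps

    # show the host port that was published for the nginx service's port 80
    docker-compose port nginx 80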

Reference: https://docs.docker.com/config/containers/container-networking/

Upvotes: -1

ChosenQuill

Reputation: 319

Even though it has been a few years, I thought I might as well answer it for others who come across this issue.

Unfortunately, no. It's not possible to bind multiple containers to the same host port, because the operating system only lets one process listen on a given port at a time. But I think I understand what you are trying to do: you want to run multiple web applications, each with its own custom config.

The best way to do this is with a reverse proxy. A reverse proxy forwards requests that arrive on the main ports (such as 80 and 443) to web servers on other ports or addresses. Assuming that all of your containers run their own instance of Nginx (or any other web server) with your config and code, there are two ways to do this.

In Docker, the easiest way to do this is the nginx-proxy project. Just by adding an environment variable to each service in your docker-compose file, a managed nginx container will automatically forward requests to the right site. This seems like what you want to do, since it requires only a docker-compose file.
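For example, a minimal nginx-proxy setup might look something like this (the image tag and hostnames are illustrative; check the nginx-proxy README for the exact details):

    # docker-compose.yml (sketch; hostnames are placeholders)
    version: "3"
    services:
      proxy:
        image: nginxproxy/nginx-proxy
        ports:
          - "80:80"
        volumes:
          # nginx-proxy watches the Docker socket to generate its config
          - /var/run/docker.sock:/tmp/docker.sock:ro
      app1:
        image: nginx:latest
        environment:
          - VIRTUAL_HOST=app1.example.com   # requests for this hostname go to app1
      app2:
        image: nginx:latest
        environment:
          - VIRTUAL_HOST=app2.example.com   # requests for this hostname go to app2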

You can also manage this yourself: keep all your sites listening on other ports, and run one main nginx container that listens on 80 and 443 and forwards requests to those ports using reverse proxy rules you write yourself.
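A rough sketch of that do-it-yourself variant (service names and hostnames are made up): the main nginx container publishes port 80 and proxies to the other containers by their compose service names over a shared Docker network.

    # proxy.conf mounted into the main nginx container (hypothetical names)
    server {
        listen 80;
        server_name site1.example.com;
        location / {
            proxy_pass http://site1:8080;   # "site1" resolves via Docker's embedded DNS
        }
    }

    server {
        listen 80;
        server_name site2.example.com;
        location / {
            proxy_pass http://site2:8080;   # "site2" is another compose service
        }
    }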

Good Luck!

Upvotes: 31
