ton1

Reputation: 7628

Docker compose of nginx, express, letsencrypt SSL gets 502 Bad Gateway

I am trying to find a way to publish nginx, express, and letsencrypt's SSL all together using docker-compose. There are many documents about this, so I referenced them and tried to make my own configuration. I succeeded in configuring nginx + SSL by following this: https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71

So now I want to put a sample nodejs express app into the nginx + SSL docker-compose setup. But for some reason I get 502 Bad Gateway from nginx instead of express's initial page.

I am testing this app with a spare domain of mine, on an AWS EC2 Ubuntu 16.04 instance. I don't think there is any problem with the domain's DNS or the security group rules: ports 80, 443, and 3000 are all open already, and when I tested it without the express app, the nginx default page showed up fine.
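For example, these are the kinds of checks I mean (a rough sketch, assuming the compose file below, where the app's port 3000 is published on the host):

# check the express container directly via its published port (from the EC2 host)
curl -I http://localhost:3000

# check that nginx answers for the domain on port 80
curl -I http://example.com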

nginx conf in /etc/nginx/conf.d

server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

}

docker-compose.yml

version: '3'

services:
  app:
    container_name: express
    restart: always
    build: .
    ports: 
      - '3000:3000'
  nginx:
    container_name: nginx
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

Dockerfile of express

FROM node:12.2-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

I think SSL works fine, but there is some problem between the express app and nginx. How can I fix this?

Upvotes: 0

Views: 3024

Answers (4)

Vasily Bodnarchuk

Reputation: 25304

Tasks

  • build a NodeJS app
  • add SSL functionality out of the box (so that it works automatically)

Solution

https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion

{path_to_the_project}/docker-compose.yml

version: '3.7'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
      - ./conf.d:/etc/nginx/conf.d
    ports:
      - "443:443"
      - "80:80"
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: letsencrypt
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./certs:/etc/nginx/certs:rw
      - ./vhost.d:/etc/nginx/vhost.d:rw
      - ./html:/usr/share/nginx/html:rw
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy

  api:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
    command: npm start --port ${APP_PORT}
    expose:
      - ${APP_PORT}
    # ports:
    #   - ${APP_PORT}:${APP_PORT}
    restart: always
    environment:
      VIRTUAL_PORT: ${APP_PORT}
      VIRTUAL_HOST: ${DOMAIN}
      LETSENCRYPT_HOST: ${DOMAIN}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
      NODE_ENV: production
      PORT: ${APP_PORT}
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro

{path_to_the_project}/.env

APP_NAME=best_api
APP_PORT=3000
DOMAIN=api.site.com
LETSENCRYPT_EMAIL=[email protected]

Do not forget to point DOMAIN at your server before you run the containers there.
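For example, you can quickly check that the record already resolves to your server (a sketch, assuming the .env values above):

# should print the public IP of the server that will run nginx-proxy
dig +short api.site.com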

How does it work?

Just run docker-compose up --build -d
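To verify, something like this should work (a rough sketch; the ./certs path matches the volume mounts above):

# all three containers should be up
docker-compose ps

# certificates issued by the letsencrypt companion land here
ls ./certs

# the API should now answer over HTTPS
curl -I https://api.site.com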

Upvotes: 0

abhaga

Reputation: 5455

proxy_pass http://localhost:3000

is proxying the request to port 3000 on the container that is running nginx. What you want instead is to connect to port 3000 of the container running express. For that, we need to do two things.

First, we make the express container visible to nginx container at a predefined hostname. We can use links in docker-compose.

nginx:
  links:
    - "app:expressapp"

Alternatively, since links are now considered a legacy feature, a better way is to use a user defined network. Define a network of your own with

docker network create my-network 

and then connect your containers to that network in the compose file by adding the following at the top level:

networks:
    default:
        external:
            name: my-network

All the services connected to a user defined network can access each other via name without explicitly setting up links.
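You can check the resolution from inside the nginx container, for example (a sketch; it assumes the service names from the question's compose file and the BusyBox tools shipped in nginx:alpine):

# fetch the express page through the internal network
docker-compose exec nginx wget -qO- http://app:3000

# or just confirm the name resolves
docker-compose exec nginx ping -c 1 app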

Then in the nginx.conf, we proxy to the express container using that hostname:

location / {
    proxy_pass http://app:3000;
}
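After editing the config, reload nginx inside the running container so it picks up the change, e.g. (assuming the nginx service name from the question's compose file):

docker-compose exec nginx nginx -s reload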

Upvotes: 1

masseyb

Reputation: 4150

Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.

Define networks in your docker-compose.yml and configure your services with the appropriate network:

version: '3'

services:
  app:
    restart: always
    build: .
    networks:
      - backend
    expose:
      - "3000"
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    depends_on:
      - app
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    networks:
      - frontend
      - backend
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
networks:
  frontend:
  backend:

Note: the app service no longer publishes its ports to the host; it only exposes port 3000 (ref. exposing and publishing ports), so it is only available to services connected to the backend network. The nginx service has a foot in both the backend and frontend networks to accept incoming traffic from the frontend and proxy the connections to the app in the backend (ref. multi-host networking).
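You can sanity-check this split from the host (a sketch, assuming the compose file above): port 3000 is no longer reachable there, while nginx still is.

# fails: app only exposes 3000 on the backend network, nothing is published on the host
curl -I http://localhost:3000

# works: nginx publishes 80/443 and proxies to app over the backend network
curl -I https://example.com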

With user-defined networks you can resolve the service name:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
  worker_connections 1024;
}
http {
  upstream app {
    server app:3000 max_fails=3;
  }
  server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
  }
  server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/sendpi.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sendpi.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
  }
}

Removing the container_name from your services makes it possible to scale them: docker-compose up -d --scale nginx=1 --scale app=3 - nginx will load balance the traffic round-robin across the 3 app containers.
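To see the effect, you can check how the app service name resolves from inside the nginx container after scaling (a sketch; nslookup comes from BusyBox in the alpine image):

# Docker's embedded DNS should return one entry per app replica
docker-compose exec nginx nslookup app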

Upvotes: 1

DragonBobZ

Reputation: 2463

I think a source of confusion here may be the way the "localhost" designation behaves among services running under docker-compose. The way docker-compose orchestrates your containers, each container understands itself to be "localhost", so "localhost" does not refer to the host machine (and, if I'm not mistaken, there is no way for a container running on the host to access a service exposed on a host port, apart from maybe some security exploits). To demonstrate:

services:
  app:
    container_name: express
    restart: always
    build: .
    ports: 
      - '2999:3000' # publish the app's port 3000 on the host's port 2999

Rebuild

docker-compose build
docker-compose up

Tell the container running the express app to curl its own running service on port 3000:

$ docker-compose exec app /bin/bash -c "curl http://localhost:3000"

<!DOCTYPE html>
<html>
  <head>
    <title>Express</title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1>Express</h1>
    <p>Welcome to Express</p>
  </body>
</html>

Tell the app container to try that same service, which we published on port 2999 on the host machine:

$ docker-compose exec app /bin/bash -c "curl http://localhost:2999"
curl: (7) Failed to connect to localhost port 2999: Connection refused

We will of course see this same behavior between running containers as well, so in your setup nginx was trying to proxy to its own localhost:3000 (but there was nothing listening there, as you know).
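If you want to confirm it from nginx's side, the failed upstream connections show up in its error log; the 502s typically pair with "Connection refused" entries for 127.0.0.1:3000 (a sketch, assuming the nginx service from the question's compose file):

docker-compose logs nginx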

Upvotes: 0
