MaartenDev

Reputation: 5792

AWS ECS jwilder/nginx-proxy fails to generate servers inside upstream

I am trying to set up jwilder/nginx-proxy as a reverse proxy that forwards requests to various containers that set the VIRTUAL_HOST=example.com environment variable.

The setup works if the container is started directly on the EC2 cluster host, but it fails with the error "Error running notify command: nginx -s reload, exit status 1" if it is spawned by ECS.

The docker log of the jwilder/nginx-proxy container:

WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one is being generated in the background. Once the new dhparam.pem is in place, nginx will be reloaded.
forego | starting dockergen.1 on port 5000
forego | starting nginx.1 on port 5100
dockergen.1 | 2018/08/19 10:43:37 Generated '/etc/nginx/conf.d/default.conf' from 4 containers
dockergen.1 | 2018/08/19 10:43:37 Running 'nginx -s reload'
dockergen.1 | 2018/08/19 10:43:37 Error running notify command: nginx -s reload, exit status 1
dockergen.1 | 2018/08/19 10:43:37 Watching docker events
dockergen.1 | 2018/08/19 10:43:37 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
2018/08/19 10:48:23 [emerg] 38#38: no servers are inside upstream in /etc/nginx/conf.d/default.conf:55
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/default.conf:55
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time
dhparam generation complete, reloading nginx

The environment is configured as the following:

services: 
- name: proxy
  *volumes*:
  Name: docker-socket
  Source Path: /var/run/docker.sock
  *containers*: 
    - name: proxy
      image: jwilder/nginx-proxy
      port: 80:80
      Mount Points:
        Container Path: /tmp/docker.sock
        Source Volume: docker-socket
        Read only: true 
- name: site
  *containers*:
    - name: site
      image: nginx
      port: 0:80
      environment:
      - VIRTUAL_HOST=example.com
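
For reference, this is roughly how the proxy service above maps onto an ECS task definition JSON; the family name and memory value are illustrative, while the image, ports and paths are taken from the description:

{
  "family": "proxy",
  "volumes": [
    { "name": "docker-socket", "host": { "sourcePath": "/var/run/docker.sock" } }
  ],
  "containerDefinitions": [
    {
      "name": "proxy",
      "image": "jwilder/nginx-proxy",
      "memory": 128,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }],
      "mountPoints": [
        {
          "sourceVolume": "docker-socket",
          "containerPath": "/tmp/docker.sock",
          "readOnly": true
        }
      ]
    }
  ]
}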

Command to test:

curl -H "Host: example.com" localhost:80   

It currently returns the default nginx page because docker-gen failed to produce a valid config: the upstream block contains no servers.

The generated (invalid) nginx config:

proxy_set_header Proxy "";
server {
  server_name _; # This is just an invalid value which will never trigger on a real hostname.
  listen 80;
  access_log /var/log/nginx/access.log vhost;
  return 503;
}
# example.com
upstream example.com {
}
server {
  server_name example.com;
  listen 80 ;
  access_log /var/log/nginx/access.log vhost;
  location / {
    proxy_pass http://example.com;
  }
}

The proxy works as intended if the following command is used:

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

If the command above is run it gives the following output:

WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one is being generated in the background. Once the new dhparam.pem is in place, nginx will be reloaded.
forego | starting dockergen.1 on port 5000
forego | starting nginx.1 on port 5100
dockergen.1 | 2018/08/19 10:18:48 Generated '/etc/nginx/conf.d/default.conf' from 10 containers
dockergen.1 | 2018/08/19 10:18:48 Running 'nginx -s reload'
dockergen.1 | 2018/08/19 10:18:48 Watching docker events
dockergen.1 | 2018/08/19 10:18:48 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
2018/08/19 10:19:09 [notice] 40#40: signal process started
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time
dhparam generation complete, reloading nginx

The generated valid nginx config:

proxy_set_header Proxy "";
server {
  server_name _; # This is just an invalid value which will never trigger on a real hostname.
  listen 80;
  access_log /var/log/nginx/access.log vhost;
  return 503;
}
# example.com
upstream example.com {
        ## Can be connected with "bridge" network
      # ecs-site-site-add8hjasd
      server 172.17.0.3:80;
}
server {
  server_name example.com;
  listen 80 ;
  access_log /var/log/nginx/access.log vhost;
  location / {
    proxy_pass http://example.com;
  }
}

My question is: why doesn't this work? Is it caused by permissions on the docker socket, or by the way it is mounted?
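
One way to narrow this down is to verify, on the EC2 host, that the ECS-launched proxy container actually received the socket mount (<proxy-container> is a placeholder for the running container's name or ID):

# confirm /var/run/docker.sock is mounted at /tmp/docker.sock inside the container
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}} (rw={{.RW}}){{"\n"}}{{end}}' <proxy-container>

# confirm the socket exists and is readable on the host
ls -l /var/run/docker.sock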

Upvotes: 1

Views: 2834

Answers (3)

rohit pawar

Reputation: 17

I ran into this problem too, and in my case the key reasons were:

  1. I had not registered the VIRTUAL_HOST in /etc/hosts.

  2. The IP given for the other container must be on the same network as Nginx for the proxy to reach it.

  3. Make sure the container you are proxying to is actually working; if you are using a container IP, check its logs with docker logs -f <container-name> (see the sketch below this list).
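
A quick way to check points 2 and 3 from the Docker host (container names are placeholders):

# list the networks each container is attached to; they should share one
docker inspect -f '{{json .NetworkSettings.Networks}}' <nginx-proxy-container>
docker inspect -f '{{json .NetworkSettings.Networks}}' <backend-container>

# follow the backend container's logs to confirm it is serving requests
docker logs -f <backend-container>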

Upvotes: 0

kain

Reputation: 264

Our team ran into this issue 3 days ago and spent a lot of time on it.

The root cause turned out to be the AWS ecs-agent (we have 2 environments; one runs ecs-agent version 1.21 and the other 1.24).

Yesterday we solved it by using the AWS console to update the ecs-agent to the latest version (1.34) and restarting the ecs-agent (docker container). That fixed the problem.
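
If you want to check the agent version first, the ECS agent exposes an introspection endpoint on the container instance, and the update can also be triggered with the AWS CLI instead of the console (cluster and instance values are placeholders):

# on the container instance: the running agent reports its version here
curl -s http://localhost:51678/v1/metadata

# or trigger the agent update from the CLI
aws ecs update-container-agent --cluster <cluster-name> --container-instance <container-instance-id>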

Just pasting this solution here. Hope it helps others!

Upvotes: 2

miknik

Reputation: 5941

Firstly, it's a good idea to avoid using domain names within your Nginx config, especially when defining upstream servers. It's confusing if nothing else.

Are all your values for example.com the same? If so, you have an upstream block defining an upstream server group named example.com, a server block with a server_name directive of example.com, and then a proxy_pass to example.com.

Typically you specify an upstream block as a way of load balancing when you have several servers capable of handling the same request. Edit your upstream block to include all your upstream servers as address:port; you can also add options to configure how Nginx distributes the load across them (see the Nginx docs for more info). The name you give the upstream block is only used internally by Nginx and can be anything, so don't use your domain name here. Something like:

upstream dockergroup {

Then add an IP address before the port in the listen directive in the server block, and change the proxy_pass directive to http://dockergroup.
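
Put together, a minimal sketch of what the relevant part of the config could look like with a neutral upstream name (the backend address 172.17.0.3:80 is copied from the working config above; the host IP in the listen directive is a placeholder):

upstream dockergroup {
  # one entry per backend container; add more server lines to load balance
  server 172.17.0.3:80;
}
server {
  server_name example.com;
  listen <host-ip>:80;
  access_log /var/log/nginx/access.log vhost;
  location / {
    proxy_pass http://dockergroup;
  }
}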

I'm not sure of the specifics, but according to the docs on the page you linked:

To add settings on a per-VIRTUAL_HOST basis, add your configuration file under /etc/nginx/vhost.d. Unlike in the proxy-wide case, which allows multiple config files with any name ending in .conf, the per-VIRTUAL_HOST file must be named exactly after the VIRTUAL_HOST.
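
For example (the host path and the client_max_body_size setting are illustrative, not from the question): mount a directory over /etc/nginx/vhost.d and drop a file named exactly example.com into it:

# on the docker host
mkdir -p /srv/vhost.d
echo "client_max_body_size 100m;" > /srv/vhost.d/example.com

docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v /srv/vhost.d:/etc/nginx/vhost.d:ro \
  jwilder/nginx-proxy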

The important issues to address are that your upstream block cannot be empty and the name of the upstream block should not conflict with any domains or hostnames on your network. From what I read you should be able to fix that using the various config options.

Upvotes: 0
