ivo

Reputation: 587

nginx reverse proxy with docker - load balancing

I want to set up a simple configuration with nginx acting as a reverse proxy, using Docker. I am considering this solution: https://hub.docker.com/r/jwilder/nginx-proxy/.

I need to achieve the following behavior:

-> when a request comes to www.example.com, it should go to container A

-> when a request comes to www.example.com/part, it should go to container B (a different container)

I am not so good at Nginx server configuration. I know that I need to pass the VIRTUAL_HOST and VIRTUAL_PORT variables when running the containers I need to proxy, but I do not know what to adjust, or what else to pass, in order to force the Nginx server to switch traffic based on location. Is it a matter of the Nginx server and its location directive, or is it something else that needs to be adjusted?
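For context, the basic usage I have in mind is roughly the following (image names and hosts are only examples):

# the proxy container watches the Docker socket and regenerates its config
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# a proxied container announces which host (and port) it should be reachable on
docker run -d -e VIRTUAL_HOST=www.example.com -e VIRTUAL_PORT=8080 my-app-image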

Thanks in advance for your precious time and energy spent here.

So here is an update to this question - hopefully I can describe more precisely where my problem lies. I've already spent a lot of time on this, and the other complication is that I am only allowed to play with the configuration during the night, when nobody is using the systems.

In our environment we have a Docker daemon running on RHEL 7 (Maipo, 7.3). There we have a couple of Docker containers, and one of them is based on the image mkodockx/docker-nginx-proxy:stable. Please see the generated configuration file below - the basic idea is to run every image with the switch -e VIRTUAL_HOST=some_host.

I thought that I needed to find a way to pass another environment variable to get the correct location directive created, so I was adjusting the nginx.tmpl file within the /app directory in the image.
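Roughly, the kind of change I was experimenting with reuses the docker-gen helper pattern that the stock template already uses for VIRTUAL_PROTO, placed inside the per-host {{ range $host, $containers := ... }} block. VIRTUAL_PATH here is a variable I made up, not something the image supports out of the box, and this fragment alone does not solve the upstream grouping problem described further below:

# hypothetical fragment for nginx.tmpl (VIRTUAL_PATH is not a standard variable)
{{ $path := trim (or (first (groupByKeys $containers "Env.VIRTUAL_PATH")) "/") }}
    location {{ $path }} {
        proxy_pass http://{{ trim $host }};
    }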

Here is the generated configuration file:

# Generated nginx site conf                        
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}

gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
error_log /proc/self/fd/2;
client_max_body_size 10m;
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;

server {
        listen 80 default_server;
        server_name _;
        return 503;
}      
upstream A.B.C.com {    
    server 172.17.0.7:8080;    
}        
server {
    limit_conn perip 50;
    limit_req zone=persec burst=80 nodelay;
    server_name A.B.C.com; 
    location / {
            proxy_pass http://A.B.C.com; 
    }
}        
upstream D.E.F.com {    
    server 172.17.0.16:8080;    
}        
server {
    limit_conn perip 50;
    limit_req zone=persec burst=80 nodelay;
    server_name D.E.F.com; 
    location / {
            proxy_pass http://D.E.F.com; 
    }
}        
upstream G.H.I.com {    
    server 172.17.0.8:80;    
}        
server {
    limit_conn perip 50;
    limit_req zone=persec burst=80 nodelay;
    server_name G.H.I.com; 
    location / {
            proxy_pass http://G.H.I.com; 
    }
}        
upstream J.K.L.com {    
    server 172.17.0.5:80;    
}        
server {
    limit_conn perip 50;
    limit_req zone=persec burst=80 nodelay;
    server_name J.K.L.com; 
    location / {
            proxy_pass http://J.K.L.com; 
    }
}        
upstream M.N.O.com {    
    server 172.17.0.6:80;    
}        
server {
    limit_conn perip 50;
    limit_req zone=persec burst=80 nodelay;
    server_name M.N.O.com; 
    location / {
            proxy_pass http://M.N.O.com; 
    }
}   

Here are my steps:

1) I logged into the container using docker exec -it proxy bash

2) then I adjusted the nginx.tmpl file using vim

3) then I ran the forego start -r command inside the container

4) then I checked the generated default.conf file - Nginx was restarted automatically

5) then I tried to verify the URLs

My problem is that I somehow need another container to serve its content from a URL path, let's say D.E.F.com/schedule; the current nginx.tmpl template will just add the container's internal IP (172.x.x.x:80) and create another server block; I have no way to specify the location part.

So I came up with my own solution (a new nginx template) which makes it possible to pass another variable. But what still happens in the end is that all the content for the new container is served from the root ("/") and not from the /schedule path. Also, the existing container which serves content from the URL D.E.F.com actually has its content at D.E.F.com/content - but this location is not specified in the produced nginx configuration, as you can see. Both servers are added to the same upstream section automatically. According to the documentation, that means there will be load balancing between them, which is not what is expected here. I feel lost.
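To make the goal concrete, what I would like the generated default.conf to contain for this host is roughly the following (the second upstream address is just a placeholder for the new container):

upstream D.E.F.com {
    server 172.17.0.16:8080;
}
upstream D.E.F.com-schedule {
    server 172.17.0.20:80;    # placeholder address of the new container
}
server {
    server_name D.E.F.com;
    location / {
        proxy_pass http://D.E.F.com;
    }
    location /schedule {
        proxy_pass http://D.E.F.com-schedule;
    }
}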

So what am I overlooking here, or what is wrong?

Again, many thanks for spending your time and energy on this.

Upvotes: 1

Views: 846

Answers (1)

Janshair Khan

Reputation: 2687

It's mostly a matter of Nginx configuration. With the official Nginx Docker image, you need to add Nginx configuration along these lines (both locations live in the same server block for www.example.com):

server {
   listen 80 default_server;
   listen [::]:80 default_server;

   server_name www.example.com;

   location / {
            # everything else is forwarded to container A
            proxy_pass http://container_A:80;
    }

   location /part {
            # requests under /part are forwarded to container B
            proxy_pass http://container_B:80;
    }
}

The trickier point here is the hosts container_A and container_B. These are the two containers named via the --name container_A and --name container_B flags, and you must run them in a user-defined network: first create a Docker network via docker network create <name-of-the-network>, then run each container (including the Nginx one) in that same network by specifying:

docker run .... --network=<name-of-the-network> ....

The service discovery feature in Docker Engine will then resolve the hostnames container_A and container_B within the user-defined network. Here is an example.
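A rough sketch of the whole sequence, with placeholder network, container and image names, and assuming the configuration above is saved locally as default.conf:

# create a user-defined network so containers can resolve each other by name
docker network create proxy-net

# start the backend containers with the names referenced in the Nginx config
docker run -d --name container_A --network=proxy-net image_for_A
docker run -d --name container_B --network=proxy-net image_for_B

# run the official Nginx image in the same network with the configuration mounted in
docker run -d --name proxy --network=proxy-net -p 80:80 \
    -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro nginx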

Upvotes: 1
