Reputation: 5432
Working on getting two different services running inside a single docker-compose.yml to communicate with each other within docker-compose.
The two services are regular NodeJS servers (app1 & app2). app1 receives POST requests from an external source, and should then send a request to the other NodeJS server, app2, with information based on the initial POST request.
The challenge I'm facing is how to make the two NodeJS containers communicate with each other without hardcoding a specific container name. The only way I can currently get the two containers to communicate is to hardcode a URL like http://myproject_app2_1, which correctly directs the POST request from app1 to app2, but because of the way Docker increments container names, it doesn't scale well and doesn't cope with containers crashing and being recreated.
Instead, I'd prefer to send the POST request to something along the lines of http://app2, or have some similar way to alias a number of containers, so that no matter how many instances of the app2 container exist, Docker will pass the request to one of the running app2 containers.
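For illustration, this is roughly the kind of forwarding code I have in mind for app1 (the port, path and payload below are just placeholders, not my actual setup):

const http = require('http');

http.createServer((req, res) => {
  // Forward the incoming POST to app2 by service name rather than a
  // generated container name like myproject_app2_1.
  const forward = http.request(
    { host: 'app2', port: 3000, method: 'POST', path: '/' },
    () => res.end('forwarded')
  );
  req.pipe(forward);
}).listen(3000);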
Here's a sample of my docker-compose.yml file:
version: '2'
services:
  app1:
    image: 'mhart/alpine-node:6.3.0'
    container_name: app1
    command: npm start
  app2:
    image: 'mhart/alpine-node:6.3.0'
    container_name: app2
    command: npm start
  # databases [...]
Thanks in advance.
Upvotes: 1
Views: 4137
Reputation: 12250
When you run two containers from one compose file, Docker automatically sets up an "internal DNS" that allows you to reference other containers by the service name defined in the compose file (assuming they are on the same network). So referencing http://app2 from the first service should just work.
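If you want to verify the name resolution, a couple of lines of Node run inside the app1 container will show it (just a quick sketch, not something your app needs):

const dns = require('dns');

// Inside the app1 container, the service name "app2" is resolved by
// Docker's embedded DNS to the app2 container's IP on the shared network.
dns.lookup('app2', (err, address) => {
  if (err) throw err;
  console.log('app2 resolves to', address);
});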
See this example, which proxies requests from proxy to the backend whoamiapp just by using the service name.
default.conf
server {
    listen 80;
    location / {
        proxy_pass http://whoamiapp;
    }
}
docker-compose.yml
version: "2"
services:
proxy:
image: nginx
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf:ro
ports:
- "80:80"
whoamiapp:
image: emilevauge/whoami
Run it using docker-compose up -d and try running curl <dockerhost>.
This sample uses the default network with docker-compose file version 2. You can read more about how networking with docker-compose works here: https://docs.docker.com/compose/networking/
Probably your configuration of the container_name property somehow interferes with this behaviour? You should not need to define it on your own.
Upvotes: 1
Reputation: 10185
OK, this is really two questions.
First: how not to hardcode container names. You can use environment variables, like this:
nodeJS file:
const http = require('http');
const app2Address = process.env.APP2_ADDRESS; // e.g. "http://app2:3000"
const req = http.request(app2Address, (res) => { /* handle app2's response */ });
req.end();
docker compose file:
app1:
  image: 'mhart/alpine-node:6.3.0'
  container_name: app1
  command: npm start
  environment:
    - APP2_ADDRESS=${app2_address}
app2:
  image: 'mhart/alpine-node:6.3.0'
  container_name: app2
  command: npm start
  environment:
    - HOSTNAME=${app2_address}
and a .env file like:
app2_address=myapp2.com
You can also put a wildcard in the application config file and substitute the real hostname when the container starts. For that, create an entrypoint.sh and use sed, like:
sed -i "s/APP2_HOSTNAME_WILDCARD/${app2_address}/g" /app1/config.js
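For example, /app1/config.js could hold the wildcard that the entrypoint replaces at startup (the file name and structure here are only an illustration):

// Before the entrypoint runs sed, the config contains the wildcard;
// at container startup it is replaced with the real hostname.
module.exports = {
  app2Address: 'http://APP2_HOSTNAME_WILDCARD'
};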
Second: how to make transparent load balancing. You need to put an HTTP load balancer in front of the app2 containers.
There is a hello-world tutorial on how to do load balancing with Docker.
Upvotes: 1