Jivan

Reputation: 23088

Communication between multiple docker-compose projects

I have two separate docker-compose.yml files in two different folders:

  • ~/front/docker-compose.yml
  • ~/api/docker-compose.yml

How can I make sure that a container in front can send requests to a container in api?

I know that --default-gateway option can be set using docker run for an individual container, so that a specific IP address can be assigned to this container, but it seems that this option is not available when using docker-compose.

Currently I end up doing a docker inspect my_api_container_id and look at the gateway in the output. It works but the problem is that this IP is randomly attributed, so I can't rely on it.
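(For reference, that inspect lookup can be narrowed with a format filter; the container name is the same placeholder as above:)

```
docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' my_api_container_id
```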

Another form of this question might thus be:

But in the end, what I'm after is:

Upvotes: 690

Views: 614767

Answers (23)

Patrick Fromberg

Reputation: 1417

The existing answers basically suggest joining the networks. But most of the time you do not want every container on one network to be able to talk to every container on the other, and the two networks will often have different settings that make joining them impossible. The solution is multihomed containers: containers attached to both networks.

Example: one network with a web server, isolated; another network with a reverse proxy for that web server. The proxy network, of course, must not be isolated. Here is the corresponding docker compose definition:

name: webservice
services:
  webserver:
    networks:
      webnet:
        ipv4_address: 172.20.0.2
  proxyserver:
    networks:
      proxynet:
        ipv4_address: 172.19.0.111
      webnet:
        ipv4_address: 172.20.0.111
    ports:
      - "80:80"

networks:
  proxynet:
    driver: bridge
    internal: false
    ipam:
      driver: default
      config:
        - subnet: 172.19.0.0/16
  webnet:
    driver: bridge
    internal: true
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/17

Upvotes: 0

Cheryl Murphy

Reputation: 51

What worked for me today (some of the settings in the other answers are deprecated):

first docker-compose.yml

networks:
  default:
    name: network-name

second docker-compose.yml

services:
  app:
    ...
    networks:
      - network-name

networks:
  network-name:
    external: true

https://docs.docker.com/compose/networking/#specify-custom-networks

Upvotes: 1

Tal Joffe

Reputation: 5838

Just a small addition to @johnharris85's great answer: when you run a docker compose file, a default network is created, so you can just add it to the other compose file as an external network:

# front/docker-compose.yml
services:
  front_service:
    ...

...

# api/docker-compose.yml
services:
  api_service:
    ...
    networks:
      - front_default
networks:
  front_default:
    external: true

For me this approach was more suited because I did not own the first docker-compose file and wanted to communicate with it.
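One caveat: front_default only exists once the first project has been brought up, so start it first (commands are a sketch, assuming the two folders are front/ and api/):

```
docker-compose -f front/docker-compose.yml up -d   # creates front_default
docker-compose -f api/docker-compose.yml up -d     # attaches to it as external
```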

Upvotes: 134

johnharris85

Reputation: 18966

You just need to make sure that the containers you want to talk to each other are on the same network. Networks are a first-class docker construct, and not specific to compose.

# front/docker-compose.yml
services:
  front:
    ...
    networks:
      - some-net
networks:
  some-net:
    driver: bridge

...

# api/docker-compose.yml
services:
  api:
    ...
    networks:
      - front_some-net
networks:
  front_some-net:
    external: true

Note: your app’s network is given a name based on the “project name”, which defaults to the name of the directory the compose file lives in; in this case the prefix front_ was added.

They can then talk to each other using the service name. From front you can do ping api and vice versa.
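With both files as above, a quick way to verify the link (a sketch; assumes the projects live in folders named front and api):

```
docker-compose -f front/docker-compose.yml up -d   # creates front_some-net
docker-compose -f api/docker-compose.yml up -d     # joins it as an external network
docker-compose -f front/docker-compose.yml exec front ping -c 1 api
```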

Upvotes: 795

Rishabh Rawat

Reputation: 1194

All the previous answers relate to older versions of docker-compose. I am using version 3.8 of the compose file format, and I implemented this for a Kafka consumer and producer, but you can do the same for other services as well.

So here is how I achieved it.

Kafka Consumer file

version: "3.8"

services:
  zookeeper:
    image: zookeeper:3.4.10
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zookeeper:2888:3888
    healthcheck:
      test: [ "CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok" ]
      interval: 10s
      timeout: 10s
      retries: 5
    networks:
      - kafkaNet

  kafka:
    image: confluentinc/cp-kafka:4.1.4
    depends_on:
      - zookeeper
    ports:
      - 9093:9093
    environment:
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:19093,EXTERNAL://127.0.0.1:9093
      KAFKA_BROKER_ID: 1
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      TOPIC_AUTO_CREATE: 1
    healthcheck:
      test: [ "CMD-SHELL", "kafka-broker-api-versions --bootstrap-server kafka:19093" ]
      interval: 10s
      timeout: 10s
      retries: 5
    hostname: google.kafka.local.com
    networks:
      - kafkaNet

networks:
  kafkaNet:
    driver: bridge
    name: kafkanetwork

As you can see, I set the hostname to

google.kafka.local.com

so this is the URL I use in the other docker-compose files, which avoids hard-coding this container's IP address again and again.

Kafka Producer file

version: '3.8'
 
services:

  awsapi:
    build:
      dockerfile: apis.Dockerfile
      context: .
    ports:
      - "1235:1325"
    restart: always
    networks:
      - awsapinetwork

  mysqldb:
    image: mysql:latest
    container_name: mysqlDb
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    environment:
      MYSQL_DATABASE: "${MYSQL_DATABASE_NAME}"
      MYSQL_USER: "${MYSQL_DATABASE_USER_NAME}"
      MYSQL_PASSWORD: "${MYSQL_DATABASE_USER_PASSWORD}"
      MYSQL_ROOT_PASSWORD: "${MYSQL_DATABASE_ROOT_PASSWORD}"
    ports:
      - "3305:3306"
    volumes:
      - dbdata:/var/lib/mysql
      - ./sql/schema.sql:/docker-entrypoint-initdb.d/schema.sql:ro
      - ./sql/world.sql:/docker-entrypoint-initdb.d/world.sql:ro
    networks:
      - awsapinetwork

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadminpanel
    links:
      - mysqldb
    environment:
      PMA_HOST: mysqldb
      MYSQL_ROOT_PASSWORD: "${MYSQL_DATABASE_ROOT_PASSWORD}"
    restart: always
    ports:
      - 8085:80
    networks:
      - awsapinetwork
volumes:
  dbdata:

networks:
  awsapinetwork:
    external: true
    name: kafkanetwork 

Now I can send data to the Kafka consumer container through a simple network request from the awsapi container:

curl google.kafka.local.com:9093

Upvotes: 0

Solid Future

Reputation: 23

Here is an example that uses IP addresses. The first docker-compose should create the network that future containers can join. Here is a code snippet:

version: "3"
services:
  app:
    image: "jc21/nginx-proxy-manager:latest"
    restart: unless-stopped
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      customnetwork:
        ipv4_address: 172.20.0.10
networks:
  customnetwork:
    ipam:
      config:
        - subnet: 172.20.0.0/24

The second docker-compose should join the network that was created:

version: "3"
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    command: -H unix:///var/run/docker.sock
    ports:
      - 9000:9000
      - 9443:9443
    volumes:
      - portainer_data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      nginxproxymanager_customnetwork:
        ipv4_address: 172.20.0.11
    restart: unless-stopped
volumes:
  portainer_data:
networks:
  nginxproxymanager_customnetwork:
    external: true

Source: WordPress/MYSQL Docker Compose with Networking

Upvotes: 0

Hemant Kumar

Reputation: 161

Follow-up to johnharris85's answer, adding a few more details that may be useful to someone: let's take two docker-compose files and connect them through networks:

  1. 1st foldername/docker-compose.yml:
version: '2'
services:
  some-contr:
    container_name: []
    build: .
    ...
    networks:
      - somenet
    ports:
      - "8080:8080"
    expose:
      # Opens port 8080 on the container
      - "8080"
    environment:
      PORT: 8080
    tty: true
networks:
  somenet:
    driver: bridge
  2. 2nd docker-compose.yml:
version: '2'
services: 
  pushapiserver:
    container_name: [container_name]
    build: .
    command: "tail -f /dev/null"
    volumes:
      - ./:/[work_dir]
    working_dir: /[work dir]
    image: [name of image]
    ports:
      - "8060:8066"
    environment:
      PORT: 8066
    tty: true
    networks:
      - foldername_somenet
networks:
  foldername_somenet:
    external: true

Now you can make API calls from one service to the other (between the different containers), e.g. a call to http://pushapiserver:8066/send_push from code in the 1st docker-compose.yml's services.

Two common mistakes (at least I made them a few times):

  1. Take note of the [foldername] your docker-compose.yml file is in. See above in the 2nd docker-compose.yml: I added the folder name to the network reference, because docker names the network [foldername]_[networkname].
  2. Ports: this one is very common. Note that I used 8066 when making the connection, i.e. http://pushapiserver:8066/.... 8066 is the port of the docker container (2nd docker-compose.yml); when talking across compose projects, docker uses the container port [8066], not the host-mapped port [8060].
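For example, with the "8060:8066" mapping above (the paths reuse the send_push endpoint mentioned earlier):

```
# from a container on foldername_somenet
curl http://pushapiserver:8066/send_push   # container port: works
curl http://pushapiserver:8060/send_push   # host-mapped port: connection refused

# from the host machine
curl http://localhost:8060/send_push       # host-mapped port: works
```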

Upvotes: 1

datnguyen

Reputation: 61

I'm running multiple identical docker-compose.yml files in different directories, using .env files to vary them slightly, and I use Nginx Proxy Manager to communicate with the other services. Here are my files:

First, make sure you have created the shared public network:

docker network create nginx-proxy-man

/domain1.com/docker-compose.yml, /domain2.com/docker-compose.yml, ...

version: "3.9"

services:
  webserver:
    build:
      context: ./bin/${PHPVERSION}
    container_name: "${COMPOSE_PROJECT_NAME}-${PHPVERSION}"
    ...
    networks:
      - default    # network outside
      - internal   # network internal
  database:
    build:
      context: "./bin/${DATABASE}"
    container_name: "${COMPOSE_PROJECT_NAME}-${DATABASE}"
    ...
    networks:
      - internal   # network internal


networks:
  default:
    external: true
    name: nginx-proxy-man
  internal:
    internal: true

In each .env file, just change COMPOSE_PROJECT_NAME:

COMPOSE_PROJECT_NAME=domain1_com
.
.
.
PHPVERSION=php56

DATABASE=mysql57

webserver.container_name (domain1_com-php56) will join the default network (name: nginx-proxy-man), created previously so that Nginx Proxy Manager is accessible from the outside.

Note: container_name is unique in the same network.

database.container_name: domain1_com-mysql57 - easier to distinguish

Within the same docker-compose.yml, the services connect to each other via the service name, because they share the network domain1_com_internal. To be more secure, set this network with the option internal: true.

Note: if you don't explicitly specify networks for each service and just use a common external network for both docker-compose.yml files, then domain1_com is likely to end up using domain2_com's database.

Upvotes: 3

To connect two docker-compose projects you need a network that both join. You can create the network with docker network create name-of-network,

or you can simply put a network declaration in the networks section of each docker-compose file; when you run docker-compose up, the network is created automatically.

Put the lines below in both docker-compose files:

networks:
  net-for-alpine:
    name: test-db-net

Note: net-for-alpine is the internal name of the network; it is used inside the docker-compose file and can differ between the files. test-db-net is the external name of the network and must be the same in both docker-compose files.

Assume we have docker-compose.db.yml and docker-compose.alpine.yml

docker-compose.alpine.yml would be:

version: '3.8'

services:

  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
  
    # these two options keep the alpine container running
    stdin_open: true # docker run -i
    tty: true # docker run -t



networks:
  net-for-alpine:
    name: test-db-net

docker-compose.db.yml would be:

version: '3.8'

services:

  db:
    image: postgres:13.4-alpine
    container_name: psql
    networks:
      - net-for-db
  
networks:
  net-for-db:
    name: test-db-net

To test the network, go inside the alpine container:

docker exec -it alpine sh 
      

then with the following commands you can check the network:

# if it returns 0 (or you see no output), the network is established
nc -z psql   # "psql" is the db container's name

or

ping psql

Upvotes: 5

Muhammad Waqas Dilawar

Reputation: 2342

Everybody has explained really well, so I'll add the necessary code with just one simple explanation.

Use a network created outside of docker-compose (an "external" network) with docker-compose version 3.5+.

Further explanation can be found here.

First docker-compose.yml file should define network with name giveItANamePlease as follows.

networks:
  my-network:
    name: giveItANamePlease
    driver: bridge

The services of first docker-compose.yml file can use network as follows:

networks:
  - my-network

In second docker-compose file, we need to proxy the network by using the network name which we have used in first docker-compose file, which in this case is giveItANamePlease:

networks:
  my-proxy-net:
    external:
      name: giveItANamePlease

And now you can use my-proxy-net in services of a second docker-compose.yml file as follows.

networks:
  - my-proxy-net
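Putting those fragments together, the two files might look like this (the service names app1/app2 and the images are placeholders):

```
# first docker-compose.yml
services:
  app1:
    image: nginx  # placeholder
    networks:
      - my-network

networks:
  my-network:
    name: giveItANamePlease
    driver: bridge
```

```
# second docker-compose.yml
services:
  app2:
    image: nginx  # placeholder
    networks:
      - my-proxy-net

networks:
  my-proxy-net:
    external:
      name: giveItANamePlease
```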

Upvotes: 64

Gael

Reputation: 644

So many answers!

First of all, avoid hyphens in entity names such as services and networks. They can cause issues with name resolution.

Example: my-api won't work. myapi or api will work.

What worked for me is:

# api/docker-compose.yml
version: '3'

services:
  api:
    container_name: api
    ...
    ports:
      - 8081:8080
    networks:
      - mynetwork

networks:
  mynetwork:
    name: mynetwork

and

# front/docker-compose.yml
version: '3'

services:
  front:
    container_name: front
    ...
    ports:
      - 81:80
    networks:
      - mynetwork

networks:
  mynetwork:
    name: mynetwork

NOTE: I added ports to show how services can access each other, and how they are accessible from the host.

IMPORTANT: If you don't specify a network name, docker-compose will craft one for you, using the name of the folder the docker-compose.yml file is in: here, api_mynetwork and front_mynetwork. That would prevent communication between the containers, since they would be on different networks with very similar names.

Note that the network is defined exactly the same way in both files, so you can start either service first and it will work. No need to specify which one is external; docker-compose takes care of that for you.
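Once both projects are up, you can confirm they share a single network (the "Containers" section of the inspect output should list both api and front):

```
docker network inspect mynetwork
```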

From the host

You can access either container using the published ports defined in docker-compose.yml.

You can access the Front container: curl http://localhost:81

You can access the API container: curl http://localhost:8081

From the API container

You can access the Front container using the original port, not the one you published in docker-compose.yml.

Example: curl http://front:80

From the Front container

You can access the API container using the original port, not the one you published in docker-compose.yml.

Example: curl http://api:8080

Upvotes: 28

Affes Salem

Reputation: 1649

I had a similar case, working with separate docker-compose files on a docker swarm with an overlay network. All I had to do was change the networks parameters like so:

first docker-compose.yaml

version: '3.9'
.
.
.

networks:
  net:
    driver: overlay
    attachable: true
docker-compose -p app up

Since I specified the project name as app using -p, the initial network will be app_net. Now, to run another docker-compose file with multiple services that should use the same network, you set it up as follows:

second docker-compose.yaml

version: '3.9'
.
.
.
networks:
  net-ref:
    external: true
    name: app_net
docker stack deploy -c docker-compose.yml mystack

No matter what name you give to the stack the network will not be affected and will always refer to the existing external network called app_net.

PS: It's important to make sure to check your docker-compose version.

Upvotes: 0

cstrutton

Reputation: 6197

UPDATE: As of compose file version 3.5:

This now works:

version: "3.5"
services:
  proxy:
    image: hello-world
    ports:
      - "80:80"
    networks:
      - proxynet

networks:
  proxynet:
    name: custom_network

docker-compose up -d will join a network called 'custom_network'. If it doesn't exist, it will be created!

root@ubuntu-s-1vcpu-1gb-tor1-01:~# docker-compose up -d
Creating network "custom_network" with the default driver
Creating root_proxy_1 ... done

Now, you can do this:

version: "2"
services:
  web:
    image: hello-world
    networks:
      - my-proxy-net
networks:
  my-proxy-net:
    external:
      name: custom_network

This will create a container that will be on the external network.

I can't find any reference in the docs yet but it works!

Upvotes: 304

Nomiluks

Reputation: 2092


I came across a similar problem and I solved it by adding a small change in one of my docker-compose.yml project.

For instance, we have two APIs, scoring and ner. The scoring API needs to send requests to the ner API to process the input. For that, both are supposed to share the same network.

Note: every compose project gets a default network, created automatically when you run the app inside docker. For example, the ner API's network will be named ner_default and the scoring API's network scoring_default. This solution works for version: '3'.

In the above scenario, my scoring API wants to communicate with the ner API, so I add the following lines to the ner project. This means that whenever the ner API's container is created, it is automatically added to the scoring_default network.

networks:
  default:
      external:
        name: scoring_default

ner/docker-compose.yml

version: '3'
services:
  ner:
    container_name: "ner_api"
    build: .
    ...

networks:
  default:
      external:
        name: scoring_default

scoring/docker-compose.yml

version: '3'
services:
  api:
    build: .
    ...

We can see that the above containers are now part of the same network, scoring_default, using the command:

docker inspect scoring_default

{
    "Name": "scoring_default",
        ....
    "Containers": {
    "14a6...28bf": {
        "Name": "ner_api",
        "EndpointID": "83b7...d6291",
        "MacAddress": "0....",
        "IPv4Address": "0.0....",
        "IPv6Address": ""
    },
    "7b32...90d1": {
        "Name": "scoring_api",
        "EndpointID": "311...280d",
        "MacAddress": "0.....3",
        "IPv4Address": "1...0",
        "IPv6Address": ""
    },
    ...
}

Upvotes: 11

Rafał

Reputation: 602

If you are

  • trying to communicate between two containers from different docker-compose projects and don't want to use the same network (because, say, they each have a PostgreSQL or Redis container on the same port and you would prefer not to change these ports and not to put them on the same network)
  • developing locally and want to imitate communication between two docker compose projects
  • running two docker-compose projects on localhost
  • developing especially Django apps or Django Rest Framework (drf) API and running app inside container on some exposed port
  • getting Connection refused while trying to communicate between two containers

And you want to

  • container api_a to communicate with api_b (or vice versa) without sharing a "docker network"

(example below)

you can use the "host" of the second container: the IP of your computer, plus the port that is mapped from inside the Docker container. You can obtain the IP of your computer with this script (from: Finding local IP addresses using Python's stdlib):

import socket
def get_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # doesn't even have to be reachable
        s.connect(('10.255.255.255', 1))
        IP = s.getsockname()[0]
    except OSError:
        IP = '127.0.0.1'
    finally:
        s.close()
    return IP

Example:

project_api_a/docker-compose.yml:

networks:
  app-tier:
    driver: bridge

services:
  api:
    container_name: api_a
    image: api_a:latest
    depends_on:
      - postgresql
    networks:
      - app-tier

inside api_a container you are running Django app: manage.py runserver 0.0.0.0:8000

and second docker-compose.yml from other project:

project_api_b/docker-compose-yml :

networks:
  app-tier:
    driver: bridge

services:
  api:
    container_name: api_b
    image: api_b:latest
    depends_on:
      - postgresql
    networks:
      - app-tier

inside api_b container you are running Django app: manage.py runserver 0.0.0.0:8001

When connecting from container api_a to api_b, the URL of the api_b container will be: http://<get_ip_from_script_above>:8001/
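A standalone sketch of building that URL with the helper above (the function is redeclared here so the snippet runs on its own; port 8001 matches api_b's runserver port):

```python
import socket

def get_host_ip():
    # Same approach as get_ip() above: a UDP "connect" selects the
    # outbound interface without actually sending any packet.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(('10.255.255.255', 1))
        ip = s.getsockname()[0]
    except OSError:
        ip = '127.0.0.1'
    finally:
        s.close()
    return ip

# URL that api_a would use to reach api_b through the host
api_b_url = f"http://{get_host_ip()}:8001/"
print(api_b_url)
```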

This can be especially valuable if you are using more than two (three or more) docker-compose projects and it's hard to provide a common network for all of them; it's a good workaround.

Upvotes: 3

Exagone313

Reputation: 115

You can add a .env file in all your projects containing COMPOSE_PROJECT_NAME=somename.

COMPOSE_PROJECT_NAME overrides the prefix used to name resources, so all your projects will use somename_default as their network, making it possible for services to communicate with each other as if they were in the same project.

NB: You'll get warnings for "orphaned" containers created from other projects.
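A minimal sketch using the somename example above; the same .env goes in every project folder:

```
# .env (identical in each project directory)
COMPOSE_PROJECT_NAME=somename
```

After the projects are up, docker network ls should show a single somename_default network shared by all of their services.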

Upvotes: 7

leonardo rey

Reputation: 737

Another option is to just bring up the first module with docker-compose, check the IP associated with the module, and then connect the second module to the previous network as external, pointing at that internal IP.

Example:

  • app1: a new-network created in the services section, marked as external: true at the bottom.
  • app2: reference the "new-network" created when app1 went up, mark it as external: true at the bottom, and configure it to connect to the IP that app1 has on that network.

With this, you should be able to talk with each other.

*This approach is only meant for local testing, to avoid an overly complex configuration. **I know it is a very "patch" way, but it works for me and I think it is simple enough that others can take advantage of it.

Upvotes: 0

Ali Hallaji

Reputation: 4392

To use another docker-compose project's network (i.e. to share networks between docker-compose files), just do this:

  1. Run the first docker-compose project with up -d.
  2. Find the network name of the first project with docker network ls (it contains the name of the project's root directory).
  3. Then use that name with the structure below in the second docker-compose file.

second docker-compose.yml

version: '3'
services:
  service-on-second-compose:  # Define any names that you want.
    .
    .
    .
    networks:
      - <put it here(the network name that comes from "docker network ls")>

networks:
  <put it here(the network name that comes from "docker network ls")>:
    external: true

Upvotes: 5

Pedro

Reputation: 1

version: '2'
services:
  bot:
    build: .
    volumes:
      - '.:/home/node'
      - /home/node/node_modules
    networks:
      - my-rede
    mem_limit: 100m
    memswap_limit: 100m
    cpu_quota: 25000
    container_name: 236948199393329152_585042339404185600_bot
    command: node index.js
    environment:
      NODE_ENV: production
networks:
  my-rede:
    external:
      name: name_rede_externa

Upvotes: -2

dedek

Reputation: 8311

All containers from api can join front's default network with the following config:

# api/docker-compose.yml

...

networks:
  default:
    external:
      name: front_default

See the docker compose guide: using a pre-existing network (at the bottom of the page).

Upvotes: 48

emyller

Reputation: 2746

Since Compose 1.18 (file format 3.5), you can simply override the default network with your own custom name in every Compose YAML file that needs it. It is as simple as appending the following:

networks:
  default:
    name: my-app

The above assumes you have version set to 3.5 (or above if they don't deprecate it in 4+).

Other answers have pointed the same; this is a simplified summary.

Upvotes: 17

Daniel Blanco

Reputation: 537

The information in the previous posts is correct, but it lacks details on how to link containers, which should be connected as "external_links".

Hopefully this example makes it clearer:

  • Suppose you have app1/docker-compose.yml, with two services (svc11 and svc12), and app2/docker-compose.yml with two more services (svc21 and svc22) and suppose you need to connect in a crossed fashion:

  • svc11 needs to connect to svc22's container

  • svc21 needs to connect to svc11's container.

So the configuration should be like this:

this is app1/docker-compose.yml:


version: '2'
services:
    svc11:
        container_name: container11
        [..]
        networks:
            - default # this network
            - app2_default # external network
        external_links:
            - container22:container22
        [..]
    svc12:
       container_name: container12
       [..]

networks:
    default: # this network (app1)
        driver: bridge
    app2_default: # external network (app2)
        external: true

this is app2/docker-compose.yml:


version: '2'
services:
    svc21:
        container_name: container21
        [..]
        networks:
            - default # this network (app2)
            - app1_default # external network (app1)
        external_links:
            - container11:container11
        [..]
    svc22:
       container_name: container22
       [..]

networks:
    default: # this network (app2)
        driver: bridge
    app1_default: # external network (app1)
        external: true

Upvotes: 30

Nauraushaun

Reputation: 1673

I would ensure all containers end up on the same network by composing them together at the same time, using:

docker compose --file ~/front/docker-compose.yml --file ~/api/docker-compose.yml up -d

Upvotes: 5
