Zack

Reputation: 179

Why can't my two Docker containers communicate even though they are both responding separately?

I know this question has been asked in various ways already, but so far none of the existing answers seem to work, as they all reference docker-compose, which I'm already using.

I'm trying to start a multi-container service (locally for now). One is a web frontend container running Flask and exposing port 5000 (labeled 'web_page' in my docker-compose file). The other container is a text generation model (labeled 'model' in my docker-compose file).

Here is my docker-compose.yml file:

version: '3'
services:
  web_page:
    build: ./web_app
    ports:
      - "5000:5000"
  model:
    build: ./gpt-2-cloud-run
    ports:
      - "8080:8080"

After I run docker-compose up and use a browser (or Postman) to go to 0.0.0.0:5000 or 0.0.0.0:8080, I get back a response and it shows exactly what I expect. So both services are up, running, and responding on the correct IP/port. But when I click "submit" on the web_page to send the request to the 'model', I get a connection error, even though both IPs/ports respond if I test them individually.

If I run the 'model' container as a standalone container and start the web_page app NOT in a container, it works fine. When I put BOTH in containers, the web_page immediately gives me

requests.exceptions.ConnectionError

Within web_page.py, the relevant code is:

requests.post('http://0.0.0.0:8080',json={'length': 100, 'temperature': 0.85,"prefix":question})

which goes out to that IP with the payload and receives the response back. Again, this works fine when the 'model' is running in a container with port 8080:8080 mapped. When the web_page is running in a container, it can't reach the model endpoint for some reason. Why would this be, and how can I fix it?

Upvotes: 1

Views: 1469

Answers (2)

Zack

Reputation: 179

Elements of the other answers are correct, but there are a couple of points that were missing or assumed in the other answers and not made explicit.

  1. According to the Docker documentation, the default bridge network will NOT provide DNS resolution by container name, only by the IP addresses of other containers: https://docs.docker.com/network/bridge/#differences-between-user-defined-bridges-and-the-default-bridge

So, my final compose file was:

version: '3'
services:
  web_page:
    build: ./web_app
    ports:
      - "5000:5000"
    networks: 
      - bot-net
    depends_on:
      - model

  model:
    image: sports_int_bot_api_model
    networks: 
      - bot-net

networks:
  bot-net:
    external: true

This was after I created the 'bot-net' network on the CLI with docker network create bot-net. I don't know that this is necessarily what has to be done; perhaps you can create a non-default bridge network in the docker-compose file as well (a sketch of that follows below). But it does seem that you cannot use the default bridge network and resolve a container name (per the docs).
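A minimal sketch of that untested alternative, assuming you drop external: true and let Compose create and manage the user-defined bridge itself (names mirror the file above):

version: '3'
services:
  web_page:
    build: ./web_app
    ports:
      - "5000:5000"
    networks:
      - bot-net
    depends_on:
      - model

  model:
    image: sports_int_bot_api_model
    networks:
      - bot-net

# Without 'external: true', docker-compose up creates the
# 'bot-net' user-defined bridge network automatically.
networks:
  bot-net:
    driver: bridge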

  2. The final endpoint that I pointed to is:

    'http://model:8080'

I suppose this was alluded to in the other answers, but they omitted the need to include the http:// scheme. It also isn't shown in the docs, where the example uses the postgres:// scheme instead of http://, as in postgres://db:5432: https://docs.docker.com/compose/networking/
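Putting both points together, a sketch of the corrected call in web_page.py (same payload as in the question; the prompt string is a placeholder):

import requests

question = "example prompt"  # placeholder; comes from the form in the real app

# 'model' is the compose service name; Docker's embedded DNS resolves it
# to the model container's IP on the shared user-defined network.
response = requests.post(
    'http://model:8080',
    json={'length': 100, 'temperature': 0.85, 'prefix': question},
)
print(response.status_code, response.text)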

Upvotes: 0

Rahul

Reputation: 29

Looks like you're using the default network that docker-compose spins up (it will be named something like <directory-name>_default). If you switch the base URL for your requests to the hostname of the backend container (so model rather than 0.0.0.0), your requests should succeed. Environment variables are a good fit here (see the sketch below).
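As a sketch of that, assuming a hypothetical MODEL_URL variable set on the web_page service in docker-compose.yml (the variable name is illustrative, not from the question):

import os
import requests

# MODEL_URL would be set in docker-compose.yml, e.g.:
#   web_page:
#     environment:
#       - MODEL_URL=http://model:8080
# The fallback keeps non-container runs working against the service name.
model_url = os.environ.get('MODEL_URL', 'http://model:8080')
response = requests.post(model_url, json={'length': 100, 'temperature': 0.85, 'prefix': 'test'})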

Btw, in case you weren't aware, you don't need to publish the backend application's port to the host if you only ever intend for it to be accessed by the frontend. They both sit in the same Docker network, so they'll be able to talk to one another regardless (see the sketch below).
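Roughly, that would look like this (build paths copied from the question; the backend still listens on 8080 inside the network, it just isn't published to the host):

version: '3'
services:
  web_page:
    build: ./web_app
    ports:
      - "5000:5000"   # only the frontend is published to the host
  model:
    build: ./gpt-2-cloud-run
    # no 'ports:' mapping needed; web_page reaches this container
    # at http://model:8080 over the default compose network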

Upvotes: 2
