tawab_shakeel

Reputation: 3749

Docker run image_celery not able to detect redis

I have a Django application, and I want to run Redis and Celery using the docker run command.

After building the images with my docker-compose file, I run these two commands in separate Windows PowerShell windows:

  1. docker run -it -p 6379:6379 redis
  2. docker run -it image_celery

The Celery container is not able to connect to Redis:

[2020-02-08 13:08:44,686: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/1: Error -2 connecting to redis:6379. Name or service not known.. Trying again in 2.00 seconds...

version: '3'
services:

  the-redis:
    image: redis:3.2.7-alpine
    ports:
      - "6379:6379"
    volumes:
      - ../data/redis:/data


  celery_5:
    build:
      context: ./mltrons_backend
      dockerfile: Dockerfile_celery
    volumes:
      - ./mltrons_backend:/code
      - /tmp:/code/static

    depends_on:
      - the-redis
    deploy:
      replicas: 4
      resources:
        limits:
          memory: 25g
      restart_policy:
        condition: on-failure

volumes:
  db_data:
    external: true

Dockerfile_celery

FROM python:3.6
ENV PYTHONUNBUFFERED 1


# Install Java
RUN apt-get -y update && \
    apt install -y openjdk-11-jdk && \
    apt-get install -y ant && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/ && \
    rm -rf /var/cache/oracle-jdk11-installer;

ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64/

RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code
ENV REDIS_HOST=redis://the-redis
ENV REDIS_PORT=6379



RUN pip install --upgrade 'sentry-sdk==0.7.10'
ENTRYPOINT celery -A mlbot_webservices worker -c 10 -l info

EXPOSE 8102

celery.py

from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mlbot_webservices.settings')

app = Celery('mltrons_training')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

settings.py

CELERY_BROKER_URL = 'redis://the-redis:6379/'
CELERY_RESULT_BACKEND = 'redis://the-redis:6379/'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'

Upvotes: 0

Views: 523

Answers (1)

davidxxx

Reputation: 131526

This is expected: when you start containers the way you do (docker run IMAGE), they are attached to Docker's default bridge network.
You can check this by inspecting that network: docker network inspect bridge.
The default bridge network does not provide DNS resolution of containers by container name, which is what you rely on (redis).
Besides, the default name of a container is not the image name but a name generated by Docker.
That is why you get this error at runtime:

Cannot connect to redis://redis:6379/1

Note that you can still reach containers on the default bridge by their IP addresses, but that is generally undesirable because it hard-codes those addresses on the client side.
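To see which case you are in, a small resolution check (the helper name here is hypothetical) reproduces the failure mode: socket.getaddrinfo raises the same "Name or service not known" error (gaierror) that the Celery log shows when the broker hostname is not resolvable from inside the container:

```python
import socket

def can_resolve(hostname, port=6379):
    # True if the name resolves (e.g. via Docker's embedded DNS on a
    # user-defined network), False on the "Name or service not known"
    # error that Celery reports.
    try:
        socket.getaddrinfo(hostname, port)
        return True
    except socket.gaierror:
        return False

print(can_resolve("localhost"))  # True anywhere
```

Run inside the Celery container, can_resolve("redis") would return False on the default bridge and True on a user-defined network where a container named redis is attached.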

This works with Docker Compose because:

By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.

To be able to communicate by container name with docker run, you need:
- to attach the containers to the same user-defined network (not the default bridge provided by Docker)
- to give an explicit name (--name) to the container that you want to reference from the other (doing it for both containers makes them simpler to monitor and manage)

For example, create a user-defined bridge network and attach the containers to it when you start them:

docker network create -d bridge my-bridge-network
docker run -it -p 6379:6379 --network=my-bridge-network --name=redis redis
docker run -it --network=my-bridge-network --name=celery image_celery 
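Whatever --name you pick must match the hostname component of CELERY_BROKER_URL, because that hostname is the name Celery tries to resolve. A minimal stdlib sketch of that relationship, using the redis name from the commands above (the question's settings.py uses the-redis instead, so the two must be made to agree):

```python
from urllib.parse import urlparse

# The hostname component of the broker URL is what Celery resolves,
# so it must equal the container name (--name) or the Compose service name.
broker_url = "redis://redis:6379/1"
print(urlparse(broker_url).hostname)  # redis
```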

Upvotes: 1
