Thanos

Reputation: 3665

Docker / Kafka connect two different containers

I am using wurstmeister's docker-kafka project to run Kafka/ZooKeeper in a container. I docker-compose up the containers with KAFKA_ADVERTISED_HOST_NAME set to localhost.

I have written a Java app that uses Flink to connect to and consume one of the topics of this Kafka container. If I export a runnable jar and run it from my machine, it works absolutely fine. When I build the image below and run the jar from another Docker container, I get an exception about 30 seconds after execution: Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata. I believe this means my Java program cannot communicate with the Kafka server running in the other container.
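The consumer in the app is wired up roughly like this (a minimal sketch; the topic name, group id and Flink Kafka connector version here are placeholders, not the actual values from my app):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class MyApp {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        // Address the client contacts first; the broker then tells the client
        // where to connect via its advertised listeners.
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "my-group");   // placeholder group id

        // "my-topic" is a placeholder for the real topic name
        env.addSource(new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props))
           .print();

        env.execute("my-flink-job");
    }
}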

Here's my Java app's dockerfile:

# Dockerfile

FROM anapsix/alpine-java

MAINTAINER myself myself

COPY myApp.jar /home/myApp.jar

CMD ["java","-jar","/home/myApp.jar"]

And here's the docker-compose.yml for wurstmeister's docker kafka:

version: "3.5"

networks:
  myNetwork:
    name: myNetwork
    driver: bridge

services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - myNetwork
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ADVERTISED_PORT: "9092"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      ALLOW_PLAINTEXT_LISTENER: "yes"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - myNetwork
  auth_analytics:
    build:
      context: .
      dockerfile: auth_dockerfile
    depends_on:
      - kafka
    networks:
      - myNetwork

I have tried multiple variations of the above. Initially I didn't set up any network, which I thought might be the problem, but the version above that creates a network doesn't make any difference. I have tried "localhost:9092" as the bootstrap server in my Java app, and also "myNetwork:9092" as I read online.

I also read this FAQ from wurstmeister about connectivity, but I don't see anything wrong with my setup.

I've also tried running my Java app image without docker-compose: I ran docker-compose to start zookeeper/kafka and then did a docker build and docker run on the image for my Java app. This made no difference.

I am stuck. What am I doing wrong?

Upvotes: 2

Views: 4645

Answers (2)

Robin Moffatt

Reputation: 32100

You need to set KAFKA_ADVERTISED_LISTENERS correctly. At the moment it is localhost, which means that any client connecting will be told by the broker that the broker is on localhost, and when the client tries to connect to that it will fail (unless the Kafka broker is actually available on localhost, which it won't be in a Docker container that is only running your app).

The solution is to define the listeners such that they can be addressed from every broker and client location that needs them. My preferred approach is the following, with one listener for communication within the Docker network and one for connections coming in from the host:

  KAFKA_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
  KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092

Your clients on the Docker network use kafka:29092 to connect; clients on the host machine connect on localhost:9092 (and make sure you expose 9092 through Docker to the host machine).
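In client terms, the only thing that changes is bootstrap.servers, depending on where the client process runs. A sketch using the standard Kafka client property names and the listener addresses from the config above (the group id is a placeholder):

import java.util.Properties;

public class KafkaClientConfig {

    // Bootstrap address for clients running inside a container on the Docker network
    static final String INSIDE_DOCKER = "kafka:29092";

    // Bootstrap address for clients running directly on the host machine
    static final String ON_HOST = "localhost:9092";

    static Properties consumerProps(boolean insideDocker) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", insideDocker ? INSIDE_DOCKER : ON_HOST);
        props.setProperty("group.id", "my-group");   // placeholder group id
        return props;
    }
}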

To understand more about this, see https://rmoff.net/2018/08/02/kafka-listeners-explained/

BTW I would strongly recommend fixing this the proper way; overriding the /etc/hosts file is a hack that doesn't address the actual issue IMO.

Upvotes: 5

Jakub Bujny

Reputation: 4628

KAFKA_ADVERTISED_HOST_NAME: localhost

is what your client will receive as the connection string to Kafka. It means that the container running your app will try to reach Kafka at localhost, where localhost is the local network stack of that container.

To solve that problem you should use kafka as your KAFKA_ADVERTISED_HOST_NAME and in your bootstrap servers. That will allow your container to connect to Kafka inside Docker, but it will break the configuration when you try to connect to Kafka by running java -jar... on your PC.

To fix that problem, add an entry to the hosts file on your PC that maps kafka to localhost.
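With that approach the client configuration is the same inside and outside Docker. A sketch under that assumption (the group id is a placeholder):

import java.util.Properties;

public class SingleAddressConfig {

    // With KAFKA_ADVERTISED_HOST_NAME: kafka, every client uses the same address.
    // Inside Docker the name "kafka" is resolved by the compose network;
    // on the host it only resolves if the hosts file maps it, e.g.:
    //   127.0.0.1   kafka
    static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092");
        props.setProperty("group.id", "my-group");   // placeholder group id
        return props;
    }
}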

Upvotes: 1
