Anadi Misra

Reputation: 2103

Error connecting to Kafka running in docker container

I have configured the following Kafka properties for my Spring Boot based library, bundled inside the lib directory of an EAR deployed to WildFly. The Spring components start successfully, loading the property file from the classpath (WEB-INF/classes):

spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.gson.GsonAutoConfiguration,org.springframework.boot.autoconfigure.jms.artemis.ArtemisAutoConfiguration,\
org.springframework.boot.autoconfigure.data.web.SpringDataWebAutoConfiguration
spring.kafka.admin.client-id=iris-admin-local
spring.kafka.producer.client-id=iris-producer-local
spring.kafka.producer.retries=3
spring.kafka.producer.properties.max.block.ms=2000
spring.kafka.producer.bootstrap-servers=127.0.0.1:19092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
foo.app.kafka.executor.core-pool-size=10
foo.app.kafka.executor.max-pool-size=500
foo.app.kafka.executor.queue-capacity=1000

I run ZooKeeper and Kafka via Docker Compose, with the containers mapped to host ports 12181 and 19092 respectively. The publish fails with this error:

19:37:42,914 ERROR [org.springframework.kafka.support.LoggingProducerListener] (swiftalker-3) Exception thrown when sending a message with key='543507' and payload='com.foo.app.kanban.defect.entity.KanbanDefect@84b13' to topic alm_swift-alm:: org.apache.kafka.common.errors.TimeoutException: Topic alm_swift-alm not present in metadata after 2000 ms.

19:37:43,124 WARN  [org.apache.kafka.clients.NetworkClient] (kafka-producer-network-thread | iris-producer-local-1) [Producer clientId=iris-producer-local-1] Error connecting to node 6be446692a1f:9092 (id: 1001 rack: null): java.net.UnknownHostException: 6be446692a1f
    at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
    at java.net.InetAddress.getAllByName(InetAddress.java:1193)
    at java.net.InetAddress.getAllByName(InetAddress.java:1127)
    at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110)
    at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
    at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
    at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:955)
    at org.apache.kafka.clients.NetworkClient.access$600(NetworkClient.java:73)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1128)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1016)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:547)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:324)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
    at java.lang.Thread.run(Thread.java:748)

Now this is after having provided the spring.kafka.producer.bootstrap-servers=127.0.0.1:19092 property. What's interesting, though, is this:

CONTAINER ID   NAMES                PORTS                                                                          CREATED          STATUS
2133c81ed51d   mongo                0.0.0.0:23556->27017/tcp, 0.0.0.0:23557->27018/tcp, 0.0.0.0:23558->27019/tcp   29 minutes ago   Up 29 minutes
f18b86d8739e   kafka-ui             0.0.0.0:18080->8080/tcp                                                        29 minutes ago   Up 29 minutes
6be446692a1f   kafka                0.0.0.0:19092->9092/tcp                                                        29 minutes ago   Up 29 minutes
873304e1e6a0   zookeeper            2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp                                         29 minutes ago   Up 29 minutes

The WildFly server error logs show the app is actually connecting to the Docker container via its container ID, i.e.

6be446692a1f   kafka                0.0.0.0:19092->9092/tcp

from the docker ps -a output and

Error connecting to node 6be446692a1f:9092 (id: 1001 rack: null): java.net.UnknownHostException: 6be446692a1f

I'm confused as to how the Spring Boot code, despite the config property pointing at localhost and the mapped port 19092, manages to find the Docker container by its ID and default port and then tries to connect to it. How do I fix this?
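For context, the mechanism behind this symptom: the bootstrap connection to 127.0.0.1:19092 succeeds, but the metadata the broker returns advertises the broker's own hostname (here, the container ID), and the client uses that advertised address for all subsequent connections. A minimal sketch of the failing resolution step, using the hostname taken from the error log above:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveDemo {

    // Returns true if the given hostname resolves on this machine.
    static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // "6be446692a1f" is the hostname the broker advertises in its
        // metadata; it only resolves inside the Docker network, not on
        // the host, hence the UnknownHostException in the producer log.
        System.out.println(resolves("6be446692a1f"));
    }
}
```

This is why the producer "finds" the container ID even though the bootstrap property says localhost: the bootstrap address is only used for the first metadata request.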

Update: The docker compose

version: '3'

networks:
  app-tier:
    driver: bridge

services:
  zookeeper:
    image: 'docker.io/bitnami/zookeeper:3-debian-10'
    container_name: 'zookeeper'
    networks:
      - app-tier
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'docker.io/bitnami/kafka:2-debian-10'
    container_name: 'kafka'
    ports:
      - 19092:9092
    networks:
      - app-tier
    volumes:
      - 'kafka_data:/bitnami'
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
  database:
    image: 'mongo'
    container_name: 'mongo'
    environment:
      - MONGO_INITDB_DATABASE='swiftalk_db'
    networks:
      - app-tier
    ports:
      - 23556-23558:27017-27019
    depends_on:
      - kafka
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 18080:8080
    networks:
      - app-tier
    volumes: 
      - 'mongo_data:/data/db'
    depends_on:
      - kafka
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181

volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
  mongo_data:
    driver: local

Upvotes: 3

Views: 7510

Answers (4)

Eugene

Reputation: 121028

The accepted answer is spot-on; it explains everything in detail.

I faced a great deal of difficulty when using Testcontainers, because the exact same setup was not working for me. And Testcontainers is basically to blame :)

If you look into their org.testcontainers.containers.KafkaContainer, you will see that it already configures a lot of things for us, for example KAFKA_LISTENERS and KAFKA_LISTENER_SECURITY_PROTOCOL_MAP. What is worse, because of the way the code is written, you will not be able to override some of these things. For example, this was one setup that really confused me:

    /**
     * A KafkaContainer, but with fixed port mappings.
     */
    private static final class FixedPortsKafkaContainer extends KafkaContainer {

        private FixedPortsKafkaContainer(DockerImageName dockerImageName) {
            super(dockerImageName);
        }

        private FixedPortsKafkaContainer configureFixedPorts(int[] ports) {
            for (int port : ports) {
                super.addFixedExposedPort(port, port);
            }
            return this;
        }
    }

and some @Test method:

void test() {
    kafkaContainer = new FixedPortsKafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.5.1"));

    kafkaContainer.withEnv("KAFKA_ADVERTISED_LISTENERS", "CONNECTIONS_FROM_HOST://localhost:9092");
    kafkaContainer.withEnv("KAFKA_LISTENER_SECURITY_PROTOCOL_MAP", "CONNECTIONS_FROM_HOST:PLAINTEXT");
    kafkaContainer.withEnv("KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR", "1");
    kafkaContainer.withEnv("KAFKA_INTER_BROKER_LISTENER_NAME", "CONNECTIONS_FROM_HOST");

    kafkaContainer.configureFixedPorts(new int[] {9092});
    kafkaContainer.start();
}

will fail with:

ERROR Exiting Kafka due to fatal exception (kafka.Kafka$) org.apache.kafka.common.config.ConfigException: Listener with name CONNECTIONS_FROM_HOST defined in inter.broker.listener.name not found in listener.security.protocol.map.

If you look closer, this makes no sense! The listener is defined exactly as required. The problem here is that KAFKA_ADVERTISED_LISTENERS is not something you can easily override with Testcontainers (setting it has no effect; the value inside the container is different). That is when I started reading the source code of Testcontainers...


The solution is trivial:

spring.kafka.bootstrap-servers=localhost:9093

and

// drop all kafkaContainer.withEnv... from the above example
kafka.configureFixedPorts(new int[]{9092, 9093});

in our case.

Upvotes: 1

saber tabatabaee yazdi

Reputation: 4959

This answer has two parts: machine and port.

Machine:

First, use the service name from your docker-compose environment variables.

Port:

Use broker:29092 or 19092 instead of 9222 or any other port.

Something like this:

  KAFKA_CLUSTERS_0_NAME: local
  KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: broker:19092

Upvotes: -1

Adrian Moisa

Reputation: 4363

I see that you tried to get kafka-ui running. I wasted a lot of time myself trying to get it started. Thanks to the tutorial below, I now properly understand how to set up a podman bridge network and how to configure advertised listeners in Kafka. I'll list all the commands I used to get a local one-node setup running. Note: podman or docker, the commands are the same. Both kafka and kafka-ui are in containers; if you read the tutorial carefully you'll see this matters a lot. If you then want to connect from a local app to the containerised Kafka, you'll need extra settings.

Download Kafka & Zookeeper

  • podman pull bitnami/kafka - Download kafka
  • podman pull zookeeper - Download Zookeeper

Kafka Network

  • podman network create kafka-bridge - Name it as you please

Zookeeper Volumes

Needed to persist data. Notice that we are creating volumes for one ZooKeeper instance; in production we will have more instances.

  • podman volume create zookeeper-dataDir - /dataDir - Snapshot of the in-memory database, current state of the data tree and the content of ephemeral nodes
  • podman volume create zookeeper-dataLogDir - /dataLogDir - Sequential record of all changes made to the ZooKeeper data tree since the last snapshot
  • podman volume create zookeeper-logs - /logs - Server logs, error logs, or diagnostic logs

Kafka Volume

  • podman volume create kafka-data - By mounting a Docker volume to Kafka, the data generated and stored by Kafka will persist even if the container is stopped or removed.

Run Zookeeper

Note: we are running ZooKeeper outside of any API pods that are defined. We will have more pods, all connected to the same Kafka server. This one-node setup needs to be improved in production to ensure scalability and resilience.

podman run -dt --name zookeeper \
--network=kafka-bridge \
-p 2181:2181 \
-e ZOOKEEPER_CLIENT_PORT=2181 \
-v 'zookeeper-dataDir:/data:Z' \
-v 'zookeeper-dataLogDir:/datalog:Z' \
-v 'zookeeper-logs:/logs:Z' \
docker.io/library/zookeeper

Run Kafka

Note: we are running Kafka outside of the mock-api-pod. Warning! ALLOW_PLAINTEXT_LISTENER=yes is for the development env, not production.

podman run -dt --name kafka \
--network=kafka-bridge \
-p 7203:7203 \
-p 9092:9092 \
-v kafka-data:/kafka/data \
-e KAFKA_BROKER_ID=1 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
docker.io/bitnami/kafka

Create Topic

This instruction can be defined in docker-compose or k8s YAML as an init instruction. For now (May 2023) we will configure it manually.

  • podman exec -it kafka sh - Connect to kafka container
  • cd /opt/bitnami/kafka/bin - cd to Kafka's bin directory
  • kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic api-mock - Create a new topic
  • kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 1 --describe - If you want to double check on the settings. Optional, no need to run this unless you are debugging

Kafka GUI

Warning: starting kafka-ui with this command does not set a password. In production this needs auth.

  • podman pull provectuslabs/kafka-ui - Get the image
  • Start kafka UI on port 9090 to avoid collisions with anything running on 8080.
podman run -it --name kafka-ui \
--network=kafka-bridge \
-p 9090:8080 \
-e KAFKA_CLUSTERS_0_NAME=local \
-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 \
-e KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181 \
-e KAFKA_BROKERCONNECT=kafka:9092 \
-e DYNAMIC_CONFIG_ENABLED=true \
provectuslabs/kafka-ui
  • http://localhost:9090/ - Visit the link to use the web gui for kafka

Connecting to Kafka container CLI

  • podman exec -it kafka sh - Connect to kafka container
  • cd /opt/bitnami/kafka/bin - cd to Kafka's bin directory
  • kafka-topics.sh --bootstrap-server=localhost:9092 --list - List all topics

Inspecting Kafka via local CLI

  • brew install kcat - First install kafkacat (renamed to kcat)
  • kcat -b localhost:9092 -L - List the available topics in the Kafka cluster


Upvotes: 0

Robin Moffatt
Robin Moffatt

Reputation: 32130

You've not shared your Docker Compose so I can't give you the specific fix to make, but in essence you need to configure your advertised listeners correctly. This is the value that the broker returns to the client, telling it where to connect for all subsequent requests.

Details: https://www.confluent.io/blog/kafka-client-cannot-connect-to-broker-on-aws-on-docker-etc/
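Applied to the Compose file in the question's update, the fix might look roughly like this. This is a sketch, not a tested configuration: it assumes the bitnami image's KAFKA_CFG_* naming for listener settings, and the listener names INTERNAL/EXTERNAL are arbitrary labels:

```yaml
kafka:
  image: 'docker.io/bitnami/kafka:2-debian-10'
  ports:
    - 19092:19092
  environment:
    - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
    - ALLOW_PLAINTEXT_LISTENER=yes
    # Two listeners: one for clients inside the Docker network
    # (other containers use kafka:9092), one advertised to the
    # host at the mapped port (the EAR uses localhost:19092).
    - KAFKA_CFG_LISTENERS=INTERNAL://:9092,EXTERNAL://:19092
    - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9092,EXTERNAL://localhost:19092
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
```

With this, the broker advertises localhost:19092 to host-side clients instead of its container hostname, which is exactly what the producer in the question needs.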

Upvotes: 4
