Reputation: 1373
I would like to produce messages from a container A to a Kafka topic in a container B, but I am facing some weird issues with the networking of these containers. Do you have any idea how I can connect these containers properly? The problem is that the collector service cannot see Kafka in the other container and cannot add messages to it. More specifically, I have the services below:
docker-compose.yml
version: '3.5'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ADVERTISED_HOST: zookeeper
      ADVERTISED_PORT: 2181
    extra_hosts:
      - "moby:127.0.0.1"
    networks:
      - meetup-net
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    extra_hosts:
      - "moby:127.0.0.1"
    networks:
      - meetup-net
  collector:
    image: collector:v1
    environment:
      - kafka-bootstrap-servers=docker_kafka_1.docker_meetup-net
    restart: always
    depends_on:
      - kafka
    networks:
      - meetup-net
networks:
  meetup-net:
    driver: bridge
and on the other side I have the application.conf file:
streaming {
  window-size = 50
  window-interval = 5
  kafka-bootstrap-servers = ${?kafka-bootstrap-servers}
  kafka-bootstrap-servers = "localhost:9092"
  sink-topic = ${?source-topic}
  sink-topic = "meetup"
  key-value-json-path = ${key-value-json-path}
  key-value-json-path = "./data/keyvalue"
  source-topic-checkpoint-location = ${source-topic-checkpoint-location}
  source-topic-checkpoint-location = "./target/source-topic"
  sink-topic-checkpoint-location = ${sink-topic-checkpoint-location}
  sink-topic-checkpoint-location = "./target/sink-topic"
}
zookeeper.server = ${?zookeeper-server}
zookeeper.server = "localhost:2181"
Upvotes: 1
Views: 962
Reputation: 32100
You need to set KAFKA_ADVERTISED_LISTENERS correctly.
At the moment, KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092 means that any client connecting to the broker will get back localhost as the broker address to use for subsequent requests.
Unless the client is running on the broker itself (which it isn't here), you need to change this configuration. For a self-contained Docker environment this is easy enough:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
Now any client connection from inside the Docker network should go to kafka:29092. This also means that you can still connect a client running on your Docker host to the Kafka broker via localhost:9092, which can be useful e.g. when running on a laptop and running a client locally.
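With two listeners, the broker also needs to know which ports to bind and which protocol each listener speaks. A minimal sketch of the kafka service's environment block (the added KAFKA_LISTENERS, KAFKA_LISTENER_SECURITY_PROTOCOL_MAP, and KAFKA_INTER_BROKER_LISTENER_NAME entries are assumptions based on the standard confluentinc/cp-kafka image settings, not part of the original answer):

```yaml
environment:
  KAFKA_BROKER_ID: 1
  KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  # Bind one listener for clients inside the Docker network (29092)
  # and one for clients on the Docker host (9092)
  KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
  # Advertise the service name internally and localhost externally
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
  KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
  KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```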
Here is a sample Docker Compose showing this in action.
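On the client side, the collector would then point its bootstrap servers at the compose service name and the internal listener port, rather than at the generated container hostname. A sketch based on the compose file in the question (only the variable's value changes):

```yaml
collector:
  image: collector:v1
  environment:
    # Use the compose service name + internal listener port
    - kafka-bootstrap-servers=kafka:29092
  restart: always
  depends_on:
    - kafka
  networks:
    - meetup-net
```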
Upvotes: 3