Reputation: 2428
I am trying to connect to a Kafka Docker container from a Logstash Docker container, but I always get the following message:
Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
My docker-compose.yml file is
version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
    networks:
      - elk
    depends_on:
      - kafka

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000"
      - "9600:9600"
    links:
      - kafka
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  zookeeper:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    container_name: zookeeper
    command: [
      "sh", "-c",
      "bin/zookeeper-server-start.sh config/zookeeper.properties"
    ]
    ports:
      - "2181:2181"
    networks:
      - elk
    environment:
      LOG_DIR: /tmp/logs

  kafka:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    networks:
      - elk
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:
and my logstash.conf file is
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["logs"]
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "changeme"
  }
}
All my containers are running normally, and I can send messages to Kafka topics from outside the containers.
Upvotes: 3
Views: 2958
Reputation: 32100
You need to define your advertised listener based on the hostname at which the broker can be resolved from the client. If the advertised listener is localhost, then the client (Logstash) will try to resolve it as localhost from its own container, hence the error.

I've written about this in detail here, but in essence you need one listener per network, each with a unique name (Kafka rejects duplicate listener names, and a non-default name must also be mapped to a protocol, which with the Compose file above means one more --override in the kafka command):

KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT

Then any container on the Docker network uses kafka:29092 to reach the broker, so the Logstash config becomes

bootstrap_servers => "kafka:29092"

Any client on the host machine itself continues to use localhost:9092.
You can see this in action with Docker Compose here: https://github.com/confluentinc/demo-scene/blob/master/build-a-streaming-pipeline/docker-compose.yml#L40
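Applied to the Compose file in the question, the kafka service might then look like this (a sketch: the PLAINTEXT_HOST listener name, port 29092, and the listener.security.protocol.map override are additions to the original file):

```yaml
kafka:
  image: strimzi/kafka:0.11.3-kafka-2.1.0
  command: [
    "sh", "-c",
    "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override listener.security.protocol.map=$${KAFKA_LISTENER_SECURITY_PROTOCOL_MAP} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
  ]
  depends_on:
    - zookeeper
  ports:
    - "9092:9092"   # only the host-facing listener needs to be published
  networks:
    - elk
  environment:
    LOG_DIR: "/tmp/logs"
    # Internal listener (Docker network) and host listener, with unique names:
    KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

Since a listener named PLAINTEXT is kept, the default inter.broker.listener.name still applies and needs no extra override.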
Upvotes: 6
Reputation: 23
You can use the host machine's IP address for the Kafka advertised listener; that way both your Docker services and the services running outside the Docker network can reach it.

KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://$HOST_IP:9092

(The listener itself should bind to 0.0.0.0; the container has no interface with the host's IP address, so it cannot bind to $HOST_IP directly.)
For reference you can go through this article https://rmoff.net/2018/08/02/kafka-listeners-explained/
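If you go this route, HOST_IP has to be set before docker-compose up, since Compose substitutes it from the calling shell's environment or a .env file. A sketch against the Compose file in the question (the variable name HOST_IP is an assumption, not something Compose provides on its own):

```yaml
# kafka service excerpt; ${HOST_IP} is filled in by docker-compose
# from the environment or a .env file at startup.
kafka:
  environment:
    KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092               # bind address inside the container
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://${HOST_IP}:9092 # address handed back to clients
```

with, for example, `export HOST_IP=$(hostname -I | awk '{print $1}')` on Linux before starting the stack.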
Upvotes: 0
Reputation: 3272
The Kafka advertised listeners should be defined like this:

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092

Note that this advertises only the kafka hostname, so clients outside the Docker network can reach the broker only if that name resolves on their machine.
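For clients on the host machine to keep working with this setup, the kafka name has to resolve there as well; one common workaround (an illustration, not part of the answer above) is an entry in the host's /etc/hosts file:

```
127.0.0.1   kafka
```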
Upvotes: 0