Reputation: 587
I built a Kafka cluster on Kubernetes following this guide https://github.com/kubernetes/contrib/tree/master/statefulsets/kafka, and it works well for producing/consuming messages from inside Kubernetes. But when I expose the Kafka cluster using a NodePort service as follows, Kafka clients that try to consume/produce messages against the address 10.xx.xx.xx:30092
fail:
apiVersion: v1
kind: Service
metadata:
  name: kafka-nodeport
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
  - port: 9092
    nodePort: 30092
    name: server
  selector:
    app: kafka
Why does that happen, and how should I expose the Kafka service?
Upvotes: 0
Views: 8273
Reputation: 685
Just add the ClusterIP and the node IP (external IP) to ADVERTISED_LISTENERS.
Here is an example YAML file that deploys Kafka in Kubernetes; I use minikube locally, whose IP is 192.168.49.2.
apiVersion: v1
kind: List
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: sp-kafka-1
    namespace: sp-share-services
  spec:
    selector:
      matchLabels:
        service: sp-kafka-1
    template:
      metadata:
        labels:
          service: sp-kafka-1
      spec:
        containers:
        - name: zookeeper
          image: confluentinc/cp-zookeeper
          ports:
          - containerPort: 2181
          env:
          - name: ZOOKEEPER_CLIENT_PORT
            value: "2181"
          - name: ZOOKEEPER_TICK_TIME
            value: "2000"
        - name: sp-kafka-1
          image: confluentinc/cp-kafka
          ports:
          - containerPort: 9092
            name: default
          - containerPort: 29092
            name: service
          env:
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: KAFKA_BROKER_ID
            value: "1"
          - name: KAFKA_ZOOKEEPER_CONNECT
            value: localhost:2181
          - name: KAFKA_ADVERTISED_LISTENERS
            value: PLAINTEXT://$(POD_IP):9092,DOCKER://$(SP_KAFKA_1_SERVICE_HOST):29092,EXTERNAL://192.168.49.2:31010
          - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
            value: "PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT"
          - name: KAFKA_INTER_BROKER_LISTENER_NAME
            value: PLAINTEXT
          - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
            value: "1"
          - name: KAFKA_ADVERTISED_HOST_NAME
            value: $(POD_IP)
- apiVersion: v1
  kind: Service
  metadata:
    name: sp-kafka-1
    namespace: sp-share-services
  spec:
    ports:
    - port: 29092
      protocol: TCP
      nodePort: 31011
      name: service
    selector:
      service: sp-kafka-1
    type: LoadBalancer
$(POD_IP) is the IP of the pod that runs Kafka internally.
$(SP_KAFKA_1_SERVICE_HOST) is the ClusterIP/service IP that Kubernetes injects for the kind: Service resource. The name of that environment variable depends on the name you give the Service, for example:
- apiVersion: v1
  kind: Service
  metadata:
    name: test-kafka # use env TEST_KAFKA_SERVICE_HOST in Deployment
    namespace: sp-share-services
  spec:
    ...
In EXTERNAL://192.168.49.2:31010, replace 192.168.49.2 with your Kubernetes node IP.
Finally, remember to add the protocol map PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT,
and then you can access Kafka from the host at 192.168.49.2:31011.
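As an illustration (not part of the original answer), a client running on the host only needs the node IP plus the NodePort as its bootstrap address. For a hypothetical Spring Boot client, for example, application.yml could contain:
# Hypothetical client config; 192.168.49.2:31011 is the node IP plus the
# nodePort exposed by the Service above.
spring:
  kafka:
    bootstrap-servers: 192.168.49.2:31011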
Upvotes: 0
Reputation: 13459
The Kafka StatefulSet requires a headless service for accessing the brokers. You can change the headless service to type: NodePort and set externalTrafficPolicy: Local.
This bypasses the internal load balancing of a Service, so traffic destined for a specific node on that node port will only work if a Kafka pod is running on that node.
apiVersion: v1
kind: Service
metadata:
  name: kafka-nodeport
  labels:
    app: kafka
spec:
  externalTrafficPolicy: Local
  type: NodePort
  ports:
  - port: 9092
    nodePort: 30092
    name: server
  selector:
    app: kafka
For example, say we have two nodes, nodeA and nodeB, and nodeB is running a Kafka pod. nodeA:30092 will not connect, but nodeB:30092 will connect to the Kafka pod running on nodeB.
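If you want the reachable node to be predictable, one option (a sketch, not part of the original answer; nodeB is a placeholder hostname) is to pin the Kafka pod to a specific node with a nodeSelector in the pod template:
# Sketch: schedule the kafka pod on a known node so that node's :30092 works.
# Replace "nodeB" with the kubernetes.io/hostname label value of your node.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: nodeB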
Hope this helps.
Upvotes: 1
Reputation: 296
You have to do two things:
1) Create either a Service resource of type NodePort (like you did) or an Ingress resource.
2) Set the two properties below in Kafka: ADVERTISED_HOST and ADVERTISED_PORT.
Kafka registers itself in ZooKeeper with an IP address; inside the container network that is the internal IP address, so we have to set those properties to tell Kafka to advertise the node's address and port instead.
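As a minimal sketch (assuming your Kafka image reads ADVERTISED_HOST and ADVERTISED_PORT as environment variables; the address and port below are the node IP and NodePort from the question), the container spec could set:
# Sketch only: env names match the properties mentioned above; adjust to
# whatever your Kafka image expects (some images use KAFKA_ADVERTISED_* names).
env:
- name: ADVERTISED_HOST
  value: "10.xx.xx.xx"   # the node's external IP
- name: ADVERTISED_PORT
  value: "30092"         # the nodePort from the Service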
Hope that helps!
Upvotes: 1