Reputation: 3109
I have been trying to deploy Kafka with schema registry locally using Kubernetes. However, the logs of the schema registry pod show this error message:
ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
What could be the reason for this behavior? To run Kubernetes locally, I use Minikube version v0.32.0 with Kubernetes version v1.13.0.
My Kafka configuration:
apiVersion: v1
kind: Service
metadata:
  name: kafka-1
spec:
  ports:
  - name: client
    port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
      - name: kafka-data
        emptyDir: {}
      containers:
      - name: server
        image: confluent/kafka:0.10.0.0-cp1
        env:
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-1:2181
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-1
        - name: KAFKA_BROKER_ID
          value: "1"
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /var/lib/kafka
          name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema
spec:
  ports:
  - name: client
    port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
      - name: kafka-schema-registry
        image: confluent/schema-registry:3.0.0
        env:
        - name: SR_KAFKASTORE_CONNECTION_URL
          value: zookeeper-1:2181
        - name: SR_KAFKASTORE_TOPIC
          value: "_schema_registry"
        - name: SR_LISTENERS
          value: "http://0.0.0.0:8081"
        ports:
        - containerPort: 8081
ZooKeeper configuration:
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - name: client
    port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
  - name: followers
    port: 2888
  - name: election
    port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  selector:
    matchLabels:
      app: zookeeper
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
      - name: data
        emptyDir: {}
      - name: wal
        emptyDir:
          medium: Memory
      containers:
      - name: server
        image: elevy/zookeeper:v3.4.7
        env:
        - name: MYID
          value: "1"
        - name: SERVERS
          value: "zookeeper-1"
        - name: JVMFLAGS
          value: "-Xmx2G"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - mountPath: /zookeeper/data
          name: data
        - mountPath: /zookeeper/wal
          name: wal
Upvotes: 12
Views: 62169
Reputation: 11
ZooKeeper session timeouts can occur due to long garbage-collection pauses. I was facing the same issue locally. Check the server.properties file in your config folder and increase the value below:
zookeeper.connection.timeout.ms=18000
Upvotes: 0
Reputation: 106
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
In my case, the value of Kafka.consumer.stream.host in the application.properties file was not correct; the value has to be in the right format for the target environment.
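For illustration, a minimal sketch of what that entry might look like; the property name comes from the answer's own application, and the value shown is only a placeholder for a host (or host:port) that is reachable in your environment:
# application.properties -- property is application-specific; value is a placeholder
Kafka.consumer.stream.host=kafka-1:9092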
Upvotes: 0
Reputation: 597
I faced the same issue even though all the SSL config and topics were in place. After long research, I enabled the Spring debug logs. The underlying error was org.springframework.jdbc.CannotGetJdbcConnectionException. When I checked another thread, it said that a Spring Boot and Kafka dependency mismatch can cause the timeout exception. So I upgraded Spring Boot from 2.1.3 to 2.2.4, and now there is no error and the Kafka connection is successful. Might be useful to someone.
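If your build uses Maven, the upgrade is a one-line change to the parent version (a sketch assuming the standard spring-boot-starter-parent setup):
<!-- pom.xml: bump the Spring Boot parent so spring-kafka and kafka-clients line up -->
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.2.4.RELEASE</version>
</parent>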
Upvotes: 1
Reputation: 21
For others who might face this issue: it may happen because the topics have not been created on the Kafka broker. So make sure to create the appropriate topics on the server, as referenced in your codebase.
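For example, with the topic tool shipped with older brokers such as the 0.10 image in the question (the topic name here is just a placeholder):
kafka-topics.sh --create --zookeeper zookeeper-1:2181 \
  --replication-factor 1 --partitions 1 --topic my-topic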
Upvotes: 0
Reputation: 91
Fetching Kafka topic metadata can fail for two reasons:
Reason 1: The bootstrap server is not accepting your connections. This can be due to a proxy issue, such as a VPN, or server-level security groups.
Reason 2: A mismatch in the security protocol, where the expected protocol is SASL_SSL and the actual one is SSL, or the reverse, or either side uses PLAINTEXT.
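In either case, the client's security.protocol has to match the broker listener it connects to. A minimal client-side sketch, with SASL_SSL shown as one possibility (values are illustrative):
# client.properties -- must match the broker's listener protocol
security.protocol=SASL_SSL
sasl.mechanism=PLAIN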
Upvotes: 4
Reputation: 7109
One time I fixed this issue by restarting my machine, but when it happened again I didn't want to restart, so I fixed it with this property in the server.properties file:
advertised.listeners=PLAINTEXT://localhost:9092
Upvotes: 6
Reputation: 603
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
can happen when trying to connect to a broker that expects SSL connections while the client config did not specify:
security.protocol=SSL
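A fuller client config usually also needs a truststore; a sketch, with the path and password as placeholders to adjust for your environment:
# client.properties -- SSL settings for connecting to an SSL listener
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit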
Upvotes: 20