Reputation: 151
I have installed Confluent Platform on an Ubuntu 16.04 machine, configured ZooKeeper, Kafka and KSQL, and started the Confluent Platform. I can see the message below.
root@DESKTOP-DIB3097:/opt/kafkafull/confluent-5.1.0/bin# ./confluent start
This CLI is intended for development only, not for production
https://docs.confluent.io/current/cli/index.html
Using CONFLUENT_CURRENT: /tmp/confluent.HUlCltYT
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]
Starting control-center
control-center is [UP]
Now everything reports as up, but when I checked the status of the Confluent Platform I observed that schema-registry, connect and control-center are down.
I checked the Schema Registry log and found the error below (the status/log commands are sketched after the stack trace).
ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:210)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:61)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:72)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:39)
at io.confluent.rest.Application.createServer(Application.java:201)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:41)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:137)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:208)
... 5 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:422)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:275)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:135)
... 6 more
Caused by: java.util.concurrent.TimeoutException: Timeout after waiting for 60000 ms.
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:78)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:417)
... 8 more
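For reference, the status and per-service logs come from the same development CLI; a minimal sketch of the commands, assuming the 5.1.0 dev CLI's status and log subcommands and the CONFLUENT_CURRENT directory printed at startup:
./confluent status
./confluent log schema-registry
# raw log files also live under the CONFLUENT_CURRENT directory, e.g.
ls /tmp/confluent.HUlCltYT/schema-registry/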
Upvotes: 5
Views: 11757
Reputation: 561
In $CONFLUENT_HOME/etc/kafka you'll see server.properties. Uncomment the following and update as below:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
In $CONFLUENT_HOME/etc/schema-registry you'll see schema-registry.properties. Open it and update as below:
listeners=http://0.0.0.0:8081
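After saving those edits, a quick way to verify is to restart the stack and hit the Schema Registry REST API; a minimal sketch, assuming the dev CLI and the default 8081 listener:
./confluent stop
./confluent start
# returns a JSON array of registered subjects once Schema Registry is healthy
curl http://localhost:8081/subjects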
Upvotes: 7
Reputation: 151
I think I've found the answer.
In the Kafka configuration file, add the property host.name=host_ip_address, which will act as the Kafka broker host. Then, in every configuration file where a Kafka bootstrap property appears, change it to the respective host name or IP address, as shown below.
bootstrap.servers=192.168.0.193:9092
Example: in the Schema Registry configuration, I changed the property below from localhost to the respective IP address:
kafkastore.bootstrap.servers=PLAINTEXT://192.168.0.193:9092
In the other files, check whether the property bootstrap.servers=192.168.0.193:9092 is referring to the right host. Also check that the Schema Registry configuration file refers to it correctly (you can compare the configuration files the CLI actually uses under the /tmp/confluent.* directory; a quick grep is sketched below).
After changing all the configuration files, the services are up and running.
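A minimal sketch of that comparison, assuming the install path from the question and the CONFLUENT_CURRENT directory printed at startup (the temp directory name will differ per run):
# every bootstrap/listener reference in the shipped configs
grep -rnE "bootstrap\.servers|listeners" /opt/kafkafull/confluent-5.1.0/etc/
# the copies the dev CLI actually runs with
grep -rnE "bootstrap\.servers" /tmp/confluent.HUlCltYT/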
Upvotes: 1