Reputation: 599
Let's say I have an application that consumes logs from a Kafka cluster. I want the application to periodically check the availability of the cluster and perform certain actions based on the result. I thought of a few approaches but was not sure which one is better or what the best way to do this is:
Which of these methods is preferable, and why?
Upvotes: 1
Views: 6118
Reputation: 2014
Going down a slightly different route here, you could potentially poll ZooKeeper for this information (znode path: /brokers/ids) using the Apache Curator library.
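A minimal sketch of that polling idea, assuming ZooKeeper is reachable at localhost:2181 (adjust the connect string for your setup) and Curator's framework and recipes artifacts are on the classpath:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

import java.util.List;

public class BrokerAvailabilityCheck {
    public static void main(String[] args) throws Exception {
        // Connect string and retry policy are illustrative assumptions.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();
        try {
            // Each live broker registers an ephemeral child under /brokers/ids,
            // so an empty list suggests no brokers are currently available.
            List<String> brokerIds = client.getChildren().forPath("/brokers/ids");
            System.out.println("Live broker ids: " + brokerIds);
        } finally {
            client.close();
        }
    }
}
```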
Here's an idea that I tried and that worked: I used Curator's LeaderLatch recipe for a similar requirement.
You could create an instance of LeaderLatch and invoke its getLeader() method. If you get a leader on every invocation, it is safe to assume that the cluster is up and running; otherwise, something is wrong with it.
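A rough sketch of the LeaderLatch idea; the latch path /my-app/leader-latch and the participant id are my own illustrative choices, not anything Kafka itself creates:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.Participant;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ClusterHealthProbe {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Hypothetical latch path and participant id for this probe.
        LeaderLatch latch = new LeaderLatch(client, "/my-app/leader-latch", "probe-1");
        latch.start();
        try {
            // If ZooKeeper is healthy, getLeader() returns a participant;
            // repeated failures here would suggest a problem with the ensemble.
            Participant leader = latch.getLeader();
            System.out.println("Current leader: " + leader.getId());
        } finally {
            latch.close();
            client.close();
        }
    }
}
```

In practice you would run such a check on a schedule and treat repeated exceptions or a missing leader as the signal to trigger whatever fallback action your application needs.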
I hope this helps.
EDIT: Adding the ZooKeeper node path where the leader information is stored.
Upvotes: 1