ScipioAfricanus

Reputation: 1767

3 Server Kafka Cluster write to 3 instances of Logstash

I currently have a kafka cluster of 3 servers with this set up:

bin/kafka-topics.sh --create --zookeeper server1.com:2181,server2.com:2181,server3.com:2181 --replication-factor 3 --partitions 1 --topic kafkatest3

I ran this command on the command line of server1 and got confirmation that the topic was created. I have one instance of Logstash running on each server with this config:

input {
            kafka
            {
                    bootstrap_servers => "server1.com:2181,server2.com:2181,server3.com:2181"
                    topics => "kafkatest3"
                    consumer_threads => 3
                    #group_id => "logstash"
            }
    }
    output
    {
            syslog
            {
                    host => ["syslogserver.com"]
                    port => 514
            }
    }

What I keep seeing consistently with this config is that only one instance of Logstash appears to be writing to syslog. The other two sit there idly.

Is there a way to force each Logstash into action? Is my number of partitions / number of consumer threads correct?

Thanks, Karan

Upvotes: 0

Views: 1247

Answers (1)

Sönke Liebau

Reputation: 1973

Kafka allows only one consumer per consumer group to read from any given partition at a time. You created your topic with only one partition, so the maximum number of consumers (within one consumer group) that can read from that topic is one, which is exactly what you are seeing.
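A minimal sketch of this assignment rule (this is an illustration, not Kafka's actual assignor code; the consumer names are made up): partitions are spread round-robin across the consumers in a group, and each partition is owned by at most one consumer, so with one partition only one of your three Logstash instances ever gets work.

```python
def assign_partitions(partitions, consumers):
    """Round-robin partitions across consumers in a group.

    Consumers beyond the partition count receive nothing, mirroring
    Kafka's rule that a partition is owned by at most one consumer
    per consumer group.
    """
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# One partition, three consumers: only the first one is ever busy.
print(assign_partitions([0], ["logstash-1", "logstash-2", "logstash-3"]))
# {'logstash-1': [0], 'logstash-2': [], 'logstash-3': []}

# Three partitions, three consumers: each gets a fair share.
print(assign_partitions([0, 1, 2], ["logstash-1", "logstash-2", "logstash-3"]))
# {'logstash-1': [0], 'logstash-2': [1], 'logstash-3': [2]}
```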

If you kill the Logstash that is writing data you should see one of the other two picking up and processing data.

To get all three to get a fair share of the data you need to change your topic to have at least three partitions.
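Something along these lines should work, assuming the same ZooKeeper host as in your create command (note that Kafka only lets you increase the partition count, never decrease it):

```shell
# Increase the existing topic from 1 to 3 partitions.
bin/kafka-topics.sh --alter --zookeeper server1.com:2181 \
  --topic kafkatest3 --partitions 3
```

After the change, the consumer group will rebalance and each of the three Logstash instances should be assigned one partition.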

Upvotes: 2
