Reputation: 426
I am using Kafka 0.10.2.0. I have 3 brokers and I am running some failover tests. Sometimes when one of the Kafka brokers is shut down ungracefully, I lose data. Kafka broker configuration:
zookeeper.connection.timeout.ms=6000
num.partitions=50
min.insync.replicas=2
unclean.leader.election.enable=false
group.max.session.timeout.ms=10000
group.min.session.timeout.ms=1000
Consumer configuration:
props.put(ConsumerConfig.GROUP_ID_CONFIG, getTopicName() + "group");
props.put(ConsumerConfig.CLIENT_ID_CONFIG, getClientId());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 500);
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 3000);
Producer configuration:
props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
props.put(ProducerConfig.CLIENT_ID_CONFIG, getClientId());
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 800);
props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 800);
What can I do to avoid losing data when a Kafka broker fails?
Upvotes: 1
Views: 5756
Reputation: 8335
Do you have a replication factor of 3 for your topic?
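If you're not sure, here is a minimal sketch for checking the replica list from the client side; the bootstrap address and topic name are assumptions, so adjust them to your cluster:

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReplicationCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: your broker address
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // partitionsFor() returns metadata for every partition of the topic,
            // including the full replica list and the current in-sync replicas
            for (PartitionInfo p : consumer.partitionsFor("my-topic")) { // assumption: your topic name
                System.out.printf("partition %d: replicas=%d, isr=%d%n",
                        p.partition(), p.replicas().length, p.inSyncReplicas().length);
            }
        }
    }
}

With min.insync.replicas=2, each partition needs at least 2 replicas in the ISR for writes to succeed, so a replication factor of 3 is what lets you tolerate one broker failure.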
There is some great advice in this talk from last year's Kafka Summit: https://www.slideshare.net/ConfluentInc/when-it-absolutely-positively-has-to-be-there-reliability-guarantees-in-kafka-gwen-shapira-jeff-holoman
A video of the session is here: https://www.confluent.io/kafka-summit-2016-ops-when-it-absolutely-positively-has-to-be-there/
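One thing that stands out in your producer configuration is that acks is not set, so it defaults to 1: the leader acknowledges a write before it is replicated, and if that leader dies ungracefully, acknowledged messages can be lost. A sketch of durability-oriented producer settings to combine with your min.insync.replicas=2 (the bootstrap address and serializers are assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
// acks=all: the leader waits for all in-sync replicas before acknowledging,
// so with min.insync.replicas=2 an acknowledged write survives one broker failure
props.put(ProducerConfig.ACKS_CONFIG, "all");
props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
// max.in.flight=1 preserves ordering across retries, as in your config
props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);

Also note that your MAX_BLOCK_MS_CONFIG and REQUEST_TIMEOUT_MS_CONFIG of 800 ms make send() fail fast while the cluster is recovering a leader, so those records are dropped unless your application catches the error and retries.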
Upvotes: 1