Prakash P

Reputation: 4058

Kafka offset not incremented

I am using Kafka with Spring-boot:

Kafka Producer class:

@Service
public class MyKafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private static final Logger LOGGER = LoggerFactory.getLogger(MyKafkaProducer.class);

    // Send Message
    public void sendMessage(String topicName, String message) throws Exception {
        LOGGER.debug("========topic Name===== " + topicName + "=========message=======" + message);
        ListenableFuture<SendResult<String, String>> result = kafkaTemplate.send(topicName, message);
        result.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                LOGGER.debug("sent message='{}' with offset={}", message, result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                LOGGER.error(Constants.PRODUCER_MESSAGE_EXCEPTION.getValue() + " : " + ex.getMessage());
            }
        });
    }
}

Kafka-configuration:

spring.kafka.producer.retries=0
spring.kafka.producer.batch-size=100000
spring.kafka.producer.request.timeout.ms=30000
spring.kafka.producer.linger.ms=10
spring.kafka.producer.acks=0
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.max.block.ms=5000
spring.kafka.bootstrap-servers=192.168.1.161:9092,192.168.1.162:9093

Problem:

I have 5 partitions of a topic let's say my-topic.

What happens is, I get success logs (i.e. the message is reported as sent to Kafka successfully), but the offset of none of the partitions of topic my-topic gets incremented.

As you can see above, I have added onSuccess and onFailure callbacks. What I expect is that when the producer is unable to send a message to Kafka, I should get an error; but I do not receive any error message in this case.

The above behavior of Kafka happens at a ratio of about 100:5 (i.e. for roughly 5 out of every 100 messages sent to Kafka).

Edit 1: Adding Kafka producer logs for a successful case (i.e. the message was successfully received on the consumer side):

ProducerConfig - logAll:180] ProducerConfig values: 
    acks = 0
    batch.size = 1000
    block.on.buffer.full = false
    bootstrap.servers = [10.20.1.19:9092, 10.20.1.20:9093, 10.20.1.26:9094]
    buffer.memory = 33554432
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 10
    max.block.ms = 5000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.fetch.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.ms = 50
    request.timeout.ms = 60000
    retries = 0
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    timeout.ms = 30000
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

2017-10-24 14:30:09, [INFO] [karma-unified-notification-manager - ProducerConfig - logAll:180] ProducerConfig values: 
    acks = 0
    batch.size = 1000
    block.on.buffer.full = false
    bootstrap.servers = [10.20.1.19:9092, 10.20.1.20:9093, 10.20.1.26:9094]
    buffer.memory = 33554432
    client.id = producer-1
    compression.type = none
    connections.max.idle.ms = 540000
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 10
    max.block.ms = 5000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.fetch.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.ms = 50
    request.timeout.ms = 60000
    retries = 0
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    timeout.ms = 30000
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

Upvotes: 1

Views: 2788

Answers (2)

subzero

Reputation: 366

It is not showing the errors because you have set spring.kafka.producer.acks to 0. With acks=0 the producer does not wait for any acknowledgment from the broker, so it treats every send as successful and failures are never reported back to your callback. Set it to 1 and your callback should work; then you can see whether the offset is actually getting incremented.
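As a sketch, the relevant change in application.properties would be (assuming the rest of the producer configuration from the question stays the same):

```properties
# wait for the partition leader to acknowledge each record, so send
# failures are propagated back to the producer callback
spring.kafka.producer.acks=1
```

Note that with acks=0 the broker sends no response at all, which is also why the RecordMetadata in the success callback may report an offset of -1 rather than a real partition position.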

Upvotes: 2

Gary Russell

Reputation: 174484

Your code works fine for me...

@SpringBootApplication
public class So46892185Application {

    public static void main(String[] args) {
        SpringApplication.run(So46892185Application.class, args);
    }

    private static final Logger LOGGER = LoggerFactory.getLogger(So46892185Application.class);

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            for (int i = 0; i < 10; i++) {
                send(template, "foo" + i);
            }
        };
    }

    public void send(KafkaTemplate<String, String> template, String message) {
        ListenableFuture<SendResult<String, String>> result = template.send(topic().name(), message);
        result.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> result) {
                LOGGER.info("sent message='{}'"
                        + " to partition={}"
                        + " with offset={}", message, result.getRecordMetadata().partition(),
                        result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                LOGGER.error("Ex : " + ex.getMessage());
            }

        });
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so46892185-3", 5, (short) 1);
    }

}

Result

2017-10-23 11:12:05.907  INFO 86390 --- [ad | producer-1] com.example.So46892185Application : sent message='foo3' to partition=1 with offset=0
2017-10-23 11:12:05.907  INFO 86390 --- [ad | producer-1] com.example.So46892185Application : sent message='foo8' to partition=1 with offset=1
2017-10-23 11:12:05.907  INFO 86390 --- [ad | producer-1] com.example.So46892185Application : sent message='foo1' to partition=2 with offset=0
2017-10-23 11:12:05.907  INFO 86390 --- [ad | producer-1] com.example.So46892185Application : sent message='foo6' to partition=2 with offset=1
2017-10-23 11:12:05.907  INFO 86390 --- [ad | producer-1] com.example.So46892185Application : sent message='foo0' to partition=0 with offset=0
2017-10-23 11:12:05.907  INFO 86390 --- [ad | producer-1] com.example.So46892185Application : sent message='foo5' to partition=0 with offset=1
2017-10-23 11:12:05.907  INFO 86390 --- [ad | producer-1] com.example.So46892185Application : sent message='foo4' to partition=3 with offset=0
2017-10-23 11:12:05.907  INFO 86390 --- [ad | producer-1] com.example.So46892185Application : sent message='foo9' to partition=3 with offset=1
2017-10-23 11:12:05.907  INFO 86390 --- [ad | producer-1] com.example.So46892185Application : sent message='foo2' to partition=4 with offset=0
2017-10-23 11:12:05.907  INFO 86390 --- [ad | producer-1] com.example.So46892185Application : sent message='foo7' to partition=4 with offset=1
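To verify on the broker side whether the partition offsets actually advance, one option (a sketch, assuming the Kafka CLI tools are available and the broker is reachable at localhost:9092; adjust the host and topic name to match your setup) is GetOffsetShell, which prints the latest offset of each partition:

```shell
# print the end offset of every partition of the topic (--time -1 = latest)
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 \
  --topic so46892185-3 \
  --time -1
```

If the numbers reported here grow while your producer runs, the records are reaching the broker regardless of what the producer-side callback reports.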

Upvotes: 1
