Reputation: 43
I am using spring-kafka RetryableTopic for non-blocking retries with a fixed BackOff and a single retry topic.
I noticed that if I configure more than 127 attempts, the retries never stop. Also, if I inject this header:
@Header(name = RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS, required = false) int attempt
its value overflows to 0 after 127.
I couldn't find any documented limit on the maximum number of attempts in spring-kafka, but in the source code I see that only the first byte of the attempts header is read.
Is this a bug or a feature? Are there plans to support more than 127 attempts?
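For reference, a minimal sketch of my setup (topic name, delay, and attempt count are illustrative):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.kafka.retrytopic.FixedDelayStrategy;
import org.springframework.kafka.retrytopic.RetryTopicHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Component;

@Component
public class RetryingListener {

    @RetryableTopic(
            attempts = "200",                                          // more than 127 to reproduce
            backoff = @Backoff(delay = 1000),                          // fixed back-off
            fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC) // single retry topic
    @KafkaListener(topics = "some-topic")
    public void listen(String payload,
            @Header(name = RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS, required = false) int attempt) {
        // always fail so the record keeps being retried
        throw new RuntimeException("trigger retry");
    }
}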
Upvotes: 1
Views: 1142
Reputation: 1
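A possible workaround: read the last (least significant) byte of the header value instead of the first: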
private int getAttempts(ConsumerRecord<?, ?> consumerRecord) {
    Header header = consumerRecord.headers().lastHeader(RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS);
    // Read the last (least significant) byte of the header value instead of the first
    return header != null ? header.value()[header.value().length - 1] : 1;
}
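Note that Java bytes are signed, so reading the last byte still misreports counts above 127 (attempt 128 decodes as -128). Assuming the goal is the full attempt count, a sketch that decodes the whole array instead (requires java.math.BigInteger; the header is written with BigInteger.toByteArray(), see the answer below):

private int getAttempts(ConsumerRecord<?, ?> consumerRecord) {
    Header header = consumerRecord.headers().lastHeader(RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS);
    // Reverse the BigInteger.toByteArray() encoding used when the header is written
    return header != null ? new BigInteger(header.value()).intValueExact() : 1;
}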
Upvotes: 0
Reputation: 121552
Please raise a GH issue against the spring-kafka project.
The bug is here, in DeadLetterPublishingRecovererFactory:
private int getAttempts(ConsumerRecord<?, ?> consumerRecord) {
    Header header = consumerRecord.headers().lastHeader(RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS);
    return header != null
            ? header.value()[0]
            : 1;
}
The header.value() is a byte[] array written by:

headers.add(RetryTopicHeaders.DEFAULT_HEADER_ATTEMPTS,
        BigInteger.valueOf(attempts + 1).toByteArray());

But that [0] really takes only the first byte of the array and casts it to int.
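A standalone sketch of the overflow (BigInteger.toByteArray() produces a big-endian two's-complement encoding, so 128 needs a leading zero byte):

import java.math.BigInteger;

public class AttemptsOverflowDemo {

    public static void main(String[] args) {
        byte[] value = BigInteger.valueOf(128).toByteArray(); // {0x00, (byte) 0x80}
        System.out.println(value[0]);                         // prints 0   - what [0] returns
        System.out.println(new BigInteger(value).intValue()); // prints 128 - the actual count
    }
}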
Upvotes: 0