Reputation: 24616
I'm using the Camel Kafka component and I'm unclear about what happens under the hood with committing the offsets. As can be seen below, I'm aggregating records, and for my use case it only makes sense to commit the offsets after the aggregated records have been saved to SFTP.
Is it possible to manually control when the commit is performed?
private static class MyRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("kafka:{{mh.topic}}?" + getKafkaConfigString())
            .unmarshal().string()
            .aggregate(constant(true), new MyAggregationStrategy())
                .completionSize(1000)
                .completionTimeout(1000)
            .setHeader("CamelFileName").constant("transactions-" + (new Date()).getTime())
            .to("sftp://" + getSftpConfigString())
            // how to commit offset only after saving messages to SFTP?
        ;
    }

    private final class MyAggregationStrategy implements AggregationStrategy {
        @Override
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            if (oldExchange == null) {
                return newExchange;
            }
            String oldBody = oldExchange.getIn().getBody(String.class);
            String newBody = newExchange.getIn().getBody(String.class);
            String body = oldBody + newBody;
            oldExchange.getIn().setBody(body);
            return oldExchange;
        }
    }
}
private static String getSftpConfigString() {
    return "{{sftp.hostname}}/{{sftp.dir}}?"
            + "username={{sftp.username}}"
            + "&password={{sftp.password}}"
            + "&tempPrefix=.temp."
            + "&fileExist=Append"
            ;
}

private static String getKafkaConfigString() {
    return "brokers={{mh.brokers}}"
            + "&saslMechanism={{mh.saslMechanism}}"
            + "&securityProtocol={{mh.securityProtocol}}"
            + "&sslProtocol={{mh.sslProtocol}}"
            + "&sslEnabledProtocols={{mh.sslEnabledProtocols}}"
            + "&sslEndpointAlgorithm={{mh.sslEndpointAlgorithm}}"
            + "&saslJaasConfig={{mh.saslJaasConfig}}"
            + "&groupId={{mh.groupId}}"
            ;
}
Upvotes: 2
Views: 7292
Reputation: 422
You can control offset commits manually, even in a multi-threaded route (one using an aggregator, for example), by using an offset repository (see the Camel documentation):
@Override
public void configure() throws Exception {
    // The route
    from(kafkaEndpoint())
            .routeId(ROUTE_ID)
            // Some processors...
            // Commit kafka offset
            .process(MyRoute::commitKafka)
            // Continue or not...
            .to(someEndpoint());
}

private String kafkaEndpoint() {
    return new StringBuilder("kafka:")
            .append(kafkaConfiguration.getTopicName())
            .append("?brokers=")
            .append(kafkaConfiguration.getBootstrapServers())
            .append("&groupId=")
            .append(kafkaConfiguration.getGroupId())
            .append("&clientId=")
            .append(kafkaConfiguration.getClientId())
            .append("&autoCommitEnable=")
            .append(false)
            .append("&allowManualCommit=")
            .append(true)
            .append("&autoOffsetReset=")
            .append("earliest")
            .append("&offsetRepository=")
            .append("#fileStore")
            .toString();
}

@Bean(name = "fileStore", initMethod = "start", destroyMethod = "stop")
private FileStateRepository fileStore() {
    FileStateRepository fileStateRepository =
            FileStateRepository.fileStateRepository(new File(kafkaConfiguration.getOffsetFilePath()));
    fileStateRepository.setMaxFileStoreSize(10485760); // 10MB max
    return fileStateRepository;
}

private static void commitKafka(Exchange exchange) {
    KafkaManualCommit manual = exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
    manual.commitSync();
}
Upvotes: 1
Reputation: 422
I think with the change in the latest version of Camel (2.22.0) (see the docs) you should be able to do that:
// Endpoint configuration: &autoCommitEnable=false&allowManualCommit=true
public void process(Exchange exchange) {
    KafkaManualCommit manual =
            exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
    manual.commitSync();
}
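Applied to the route in the question, a minimal sketch could look like the following. This assumes Camel 2.22+, that autoCommitEnable=false and allowManualCommit=true are appended to the Kafka endpoint, and that KafkaManualCommit/KafkaConstants are imported from org.apache.camel.component.kafka. Note that with the old-exchange-wins aggregation strategy shown in the question, the manual-commit header on the aggregated exchange belongs to the first record of the batch; to commit up to the last record, the aggregation strategy would need to carry forward the newest exchange's header:

from("kafka:{{mh.topic}}?" + getKafkaConfigString()
        + "&autoCommitEnable=false&allowManualCommit=true")
    .unmarshal().string()
    .aggregate(constant(true), new MyAggregationStrategy())
        .completionSize(1000)
        .completionTimeout(1000)
    .setHeader("CamelFileName").constant("transactions-" + (new Date()).getTime())
    .to("sftp://" + getSftpConfigString())
    // commit only after the SFTP write has completed
    .process(exchange -> {
        KafkaManualCommit manual = exchange.getIn()
                .getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
        if (manual != null) {
            manual.commitSync();
        }
    });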
Upvotes: 3
Reputation: 55540
No, you cannot. Kafka performs an auto commit in the background every X seconds (you can configure this).
There is no manual commit support in camel-kafka. It would also not be possible, as the aggregator is separated from the Kafka consumer, and it's the consumer that performs the commit.
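For reference, a minimal sketch of tuning that auto-commit interval on the endpoint itself, using the camel-kafka URI options (the values shown are the defaults):

// autoCommitEnable defaults to true; autoCommitIntervalMs defaults to 5000 (5 seconds)
from("kafka:{{mh.topic}}?" + getKafkaConfigString()
        + "&autoCommitEnable=true&autoCommitIntervalMs=5000")
    .to("sftp://" + getSftpConfigString());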
Upvotes: 1