Reputation: 405
I use this code to combine the JPA and Kafka producer transactions:
val template = TransactionTemplate(txnManager) // JpaTransactionManager
template.execute {
    dbrepo.save(record)
    myproducer.send(record)
}
If the code throws any exception before it exits the execute {} scope, both the DB and Kafka transactions can be rolled back.
But if, during the commit stage, the DB is committed first and Kafka then fails with a ProducerFencedException or another node-disconnected error, Kafka won't produce the event, while the DB can't be rolled back (the record is already stored in the table).
My question is: will ChainedKafkaTransactionManager solve this? ChainedKafkaTransactionManager is deprecated, so what is the right way to do it?
Upvotes: 1
Views: 663
Reputation: 191743
The recommended approach, without needing Spring features, would be to always write first to the system that's more highly available. In most cases, that's Kafka (it can still be a transactional Kafka producer, but you don't need JPA for that). Then you write a Kafka consumer, in the app or elsewhere (such as Kafka Connect), that consumes those records and updates the database.
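For illustration, here is a minimal sketch of that "Kafka first" flow using Spring Kafka; MyRecord, MyRepository, the "records" topic and the bean wiring are hypothetical placeholders, not anything from the question:

import org.springframework.kafka.annotation.KafkaListener
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.stereotype.Component

// Placeholder types for illustration only.
data class MyRecord(val id: String, val payload: String)
interface MyRepository { fun save(record: MyRecord): MyRecord }

@Component
class RecordPublisher(private val kafkaTemplate: KafkaTemplate<String, MyRecord>) {
    // Write only to Kafka; using executeInTransaction assumes the producer is
    // configured as transactional (a transaction-id-prefix is set).
    fun publish(record: MyRecord) {
        kafkaTemplate.executeInTransaction { ops ->
            ops.send("records", record.id, record)
        }
    }
}

@Component
class RecordConsumer(private val dbrepo: MyRepository) {
    // The database is updated only from what was durably written to Kafka;
    // if this fails, the listener container retries instead of losing the event.
    @KafkaListener(topics = ["records"])
    fun onRecord(record: MyRecord) {
        dbrepo.save(record)
    }
}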
Alternatively, if the database is more highly available, use CDC or the "outbox pattern" along with tools such as Debezium to read database events into Kafka.
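And a rough sketch of the outbox variant, reusing the TransactionTemplate from the question; OutboxEvent, OutboxRepository and the "RecordCreated" event type are hypothetical, and Debezium (or another CDC tool) would be configured separately to stream the outbox table into Kafka:

import org.springframework.transaction.support.TransactionTemplate

// Placeholder outbox types for illustration only.
data class OutboxEvent(val aggregateId: String, val type: String, val payload: String)
interface OutboxRepository { fun save(event: OutboxEvent): OutboxEvent }

class RecordService(
    private val template: TransactionTemplate, // backed by JpaTransactionManager
    private val dbrepo: MyRepository,
    private val outboxRepo: OutboxRepository
) {
    fun saveAndPublish(record: MyRecord) {
        // A single local DB transaction covers both writes, so the business
        // record and its outbox row commit or roll back together.
        template.execute {
            dbrepo.save(record)
            outboxRepo.save(OutboxEvent(record.id, "RecordCreated", record.payload))
        }
    }
}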
Upvotes: 1