Reputation: 13397
I have an application - more like a utility - that sits in a corner and updates two different databases periodically.
It is a little standalone app that has been built with a Spring Application Context. The context has two Hibernate Session Factories configured in it, in turn using Commons DBCP data sources configured in Spring.
Currently there is no transaction management, but I would like to add some. The update to one database depends on a successful update to the other.
The app does not sit in a Java EE container - it is bootstrapped by a static launcher class called from a shell script. The launcher class instantiates the Application Context and then invokes a method on one of its beans.
What is the 'best' way to put transactionality around the database updates?
I will leave the definition of 'best' to you, but I think it should be some function of 'easy to set up', 'easy to configure', 'inexpensive', and 'easy to package and redistribute'. Naturally FOSS would be good.
Upvotes: 38
Views: 52879
Reputation: 429
Assume you have a web server with two databases at different locations, and the web server starts a transaction.
In these scenarios, database clustering offers a solution. Generally a cluster has two kinds of nodes:
A master node
And some slave nodes.
Only the master node has permission to write; it is the single source of truth, and the slave nodes only read data from it. That arrangement answers this problem.
If something bad happens to the master node, it is durability (the 'D' in ACID) that suffers: the database is lost and nothing more will be recorded on it. After a while one of the slave nodes gets promoted and stands in as the new master node and does the job. But we should know that any transaction in flight on the dead database has been lost, and the last state of the database is the one from before that transaction started. There are also solutions that record all events on the database somewhere else, which might be helpful for these kinds of scenarios.
Upvotes: 0
Reputation: 328556
The best way to distribute transactions over more than one database is: Don't.
Some people will point you to XA but XA (or Two Phase Commit) is a lie (or marketese).
Imagine: after the first phase has told the XA manager that it can send the final commit, the network connection to one of the databases fails. Now what? Timeout? That would leave the other database corrupt. Rollback? Two problems: you can't roll back a commit, and how do you know what happened to the second database? Maybe the network connection failed after it successfully committed the data and only the "success" message was lost?
The best way is to copy the data to an "import" table. Use a scheme which allows you to abort the copy and continue it at any time (for example, ignore data which you already have or order the select by ID and request only records > MAX(ID) of your copy). Protect this with a transaction. This is not a problem since you're only reading data from the source, so when the transaction fails for any reason, you can abort, rollback and try again later. Therefore, this is a plain old single source transaction.
After you have copied the data, process it locally by reading it from the import table. Again, you will need some mechanism to determine which data you have already seen. But this time, you're working only on data that is inside a single database.
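The resumable copy scheme described above can be sketched in plain Java, with the source and the local "import" table modelled as in-memory lists (the `Row` and `ImportCopier` names are illustrative, not from the original answer):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ImportCopier {
    public static class Row {
        final long id;
        final String payload;
        public Row(long id, String payload) { this.id = id; this.payload = payload; }
    }

    /** Highest id already imported, or 0 if the import table is empty. */
    static long maxImportedId(List<Row> importTable) {
        long max = 0;
        for (Row r : importTable) max = Math.max(max, r.id);
        return max;
    }

    /**
     * Copy only rows with id > MAX(id) of the local copy, in id order.
     * If a run aborts half-way, calling the method again simply resumes
     * the copy, so the whole step is safely retryable.
     */
    public static void copyNewRows(List<Row> source, List<Row> importTable) {
        long watermark = maxImportedId(importTable);
        source.stream()
              .filter(r -> r.id > watermark)                 // ignore data we already have
              .sorted(Comparator.comparingLong(r -> r.id))   // ordered by id, as described
              .forEach(importTable::add);
    }
}
```

Because each call only appends rows beyond the local MAX(ID), the copy is idempotent: running it twice (or after a crash) leaves the import table in the same state as one clean run.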
Upvotes: 47
Reputation: 509
For those suggesting concerns with two-phase commit can be waved away because it's widely used in practice, I suggest looking at this: https://en.wikipedia.org/wiki/Two-phase_commit_protocol. There's a link at the bottom of the 2PC article to an article on three-phase commit(!)
Some excerpts from the article on 3PC:
In computer networking and databases, the three-phase commit protocol (3PC)[1] is a distributed algorithm which lets all nodes in a distributed system agree to commit a transaction. It is a more failure-resilient refinement of the two-phase commit protocol (2PC).
Three-phase commit assumes a network with bounded delay and nodes with bounded response times; In most practical systems with unbounded network delay and process pauses, it cannot guarantee atomicity.
To summarize: even the more failure-resilient 3PC cannot guarantee atomicity on real networks with unbounded delays and process pauses, so the problems with 2PC are not something you can simply wave away.
Upvotes: 1
Reputation:
Set up a transaction manager in your context. The Spring docs have examples, and it is very simple. Then when you want to execute a transaction:

try {
    TransactionTemplate tt = new TransactionTemplate(txManager);
    tt.execute(new TransactionCallbackWithoutResult() {
        @Override
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            updateDb1();
            updateDb2();
        }
    });
} catch (TransactionException ex) {
    // handle the failed transaction (it has already been rolled back)
}
For more examples and information, perhaps look at this: XA transactions using Spring
Upvotes: 8
Reputation: 233
You could try Spring's ChainedTransactionManager - http://docs.spring.io/spring-data/commons/docs/1.6.2.RELEASE/api/org/springframework/data/transaction/ChainedTransactionManager.html - which chains multiple transaction managers, starting the transactions in order and committing them in reverse order. It is a best-effort alternative to XA rather than a true distributed transaction: a failure between the two commits can still leave the databases inconsistent.
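A rough sketch of how the chaining might be wired (the data source parameters are assumptions; the resulting manager can then drive a TransactionTemplate as in the other answer):

```java
import javax.sql.DataSource;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

public class ChainedTxSetup {
    // Transactions are started in the given order and committed in reverse
    // order, so the resource most likely to fail should commit first.
    // A crash between the two commits is still possible - this is
    // best-effort coordination, not two-phase commit.
    public static PlatformTransactionManager chained(DataSource db1, DataSource db2) {
        return new ChainedTransactionManager(
                new DataSourceTransactionManager(db1),
                new DataSourceTransactionManager(db2));
    }
}
```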
Upvotes: 3
Reputation: 8169
In this case you would need a transaction monitor (a server supporting the XA protocol) and to make sure your databases support XA as well. Most (all?) J2EE servers come with a transaction monitor built in. If your code is not running in a J2EE server, there are a bunch of standalone alternatives - Atomikos, Bitronix, etc.
Upvotes: 3
Reputation: 403441
When you say "two different databases", do you mean different database servers, or two different schemas within the same DB server?
If the former, then if you want full transactionality, then you need the XA transaction API, which provides full two-phase commit. But more importantly, you also need a transaction coordinator/monitor which manages transaction propagation between the different database systems. This is part of JavaEE spec, and a pretty rarefied part of it at that. The TX coordinator itself is a complex piece of software. Your application software (via Spring, if you so wish) talks to the coordinator.
If, however, you just mean two databases within the same DB server, then vanilla JDBC transactions should work just fine, just perform your operations against both databases within a single transaction.
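In that same-server case, the single-transaction approach is just standard JDBC against one connection; a sketch, where the URL, credentials, schema and table names are all invented for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class CrossSchemaUpdate {
    // Both schemas live on the same server, so one connection (and hence
    // one transaction) covers both updates.
    public static void updateBoth(String url, String user, String pass) throws SQLException {
        try (Connection con = DriverManager.getConnection(url, user, pass)) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                st.executeUpdate("UPDATE schema_one.widgets SET status = 'DONE'");
                st.executeUpdate("INSERT INTO schema_two.audit_log (msg) VALUES ('widgets updated')");
                con.commit();   // both updates become visible together...
            } catch (SQLException e) {
                con.rollback(); // ...or neither does
                throw e;
            }
        }
    }
}
```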
Upvotes: 6