Reputation: 4919
I'm trying to build a system which follows the repository and unit of work patterns to allow persistence ignorance, unit testing, and so on. I'm looking for advice on dealing with rollback. Ideally I want to use POCOs, but I think I might need to at least implement an interface to provide a few bits and pieces.
So let's say we have two repositories and one context/unit of work.
I add one item, amend another item and delete a third item. Repeat for the second repository, then I call rollback.
In the past I've used something akin to a DataSet for this. Each object has a state of pendingNew, pendingAmended, pendingDeleted, or clean, and there is also a copy of the last-persisted version of the object for rollback.
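The DataSet-style tracking described above could be sketched roughly as follows. This is a minimal illustration in Java rather than C#, and every type and method name here is an assumption for the sake of the example, not an existing API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// The four states match the ones described above.
enum EntityState { CLEAN, PENDING_NEW, PENDING_AMENDED, PENDING_DELETED }

class Tracked<T> {
    T current;                        // working copy the application mutates
    T snapshot;                       // last-persisted copy, kept for rollback
    EntityState state = EntityState.CLEAN;
}

class ChangeTracker<T> {
    private final Map<Object, Tracked<T>> items = new HashMap<>();
    private final UnaryOperator<T> copy;   // how to clone an entity for the snapshot

    ChangeTracker(UnaryOperator<T> copy) { this.copy = copy; }

    void attachClean(Object id, T entity) {    // an object loaded from the store
        Tracked<T> t = new Tracked<>();
        t.current = entity;
        items.put(id, t);
    }

    void addNew(Object id, T entity) {
        Tracked<T> t = new Tracked<>();
        t.current = entity;
        t.state = EntityState.PENDING_NEW;
        items.put(id, t);
    }

    void amend(Object id, T entity) {
        Tracked<T> t = items.get(id);
        if (t.state == EntityState.CLEAN) {
            t.snapshot = copy.apply(t.current);   // remember the persisted version
            t.state = EntityState.PENDING_AMENDED;
        }
        t.current = entity;
    }

    void delete(Object id) {
        Tracked<T> t = items.get(id);
        if (t.state == EntityState.PENDING_NEW) items.remove(id);  // never persisted
        else t.state = EntityState.PENDING_DELETED;
    }

    void rollback() {
        items.values().removeIf(t -> t.state == EntityState.PENDING_NEW);
        for (Tracked<T> t : items.values()) {
            if (t.snapshot != null) { t.current = t.snapshot; t.snapshot = null; }
            t.state = EntityState.CLEAN;
        }
    }

    EntityState stateOf(Object id) {
        Tracked<T> t = items.get(id);
        return t == null ? null : t.state;
    }

    T get(Object id) { return items.get(id).current; }
}
```

Rollback here just forgets pending-new objects and restores the snapshot on amended/deleted ones, which is the "copy of the last persisted version" idea.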
How would you implement this?
EDIT:
Ok, here's what I think I'm actually trying to get my head around. Prepare to be patterned :)
Ultimately the project is WPF MVVM, so we're looking at everything from the Model down to whatever the store is here.
I think I've been trying to conflate the model with the idea of a repository, whereas the model should use the UOW and repositories to provide the features it needs to provide. Does that sound better?
I want complete persistence ignorance, so imagine my domain includes a Customer, an Order and OrderLines.
Let's say the GUI has one New Order button which allows the user to fill in Customer details, Order details and 1-n OrderLine details. If he hits Save they go to the database; if he hits Cancel they don't.
So, in this case the model might ask the CustomerRepository for a customer, then the OrderRepository for a new Order, then the OrderLineRepository for each new Line, then tell the Unit of Work to save them.
Does that sound reasonable? It does to me; I think that's where the separation is defined. I'm half tempted to have another API between the model and the repositories. No, that's silly.
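The Save/Cancel flow described above can be sketched with an in-memory stand-in for the store. This is Java rather than C#, and every name here is an illustrative assumption, but it shows the shape: objects built during the edit session are only pending until the unit of work commits them.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical in-memory unit of work: "store" stands in for the database.
class InMemoryUow {
    final List<Object> pendingNew = new ArrayList<>();  // built during the edit session
    final List<Object> store = new ArrayList<>();       // the "database"

    // Repositories would call this when they hand out new objects.
    <T> T register(T entity) { pendingNew.add(entity); return entity; }

    void commit()   { store.addAll(pendingNew); pendingNew.clear(); }  // Save button
    void rollback() { pendingNew.clear(); }                            // Cancel button
}
```

So the model would register the Customer, the Order and each OrderLine as they are created, then call `commit()` on Save or `rollback()` on Cancel, and nothing reaches the store until commit.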
EDIT 2: This is an excellent article that has sort of helped a little.
Upvotes: 5
Views: 8230
Reputation: 12075
It's difficult to say for sure without more detail, but I'd look into implementing the IDbConnection interface and its associated interfaces. That gives you an API most C# coders with any experience will be familiar with.
Under the hood, to be honest, it really depends on the efficiency of your storage mechanism. If it handles lots of changes efficiently, then you're probably better off having your rollback mechanism build a list of actions to take to undo the changes, a list that is discarded on a commit. If, on the other hand, updates are expensive, then have your transaction mechanism maintain a list of actions to apply on a commit, which is discarded on a rollback. You also need to think about whether other code should see updates prior to a commit: with the former approach it will; with the latter, it won't.
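The first strategy above (apply changes immediately, keep an undo log) might look like this. A Java sketch with illustrative names, using a string map as a toy store; each mutation records an action that reverses it, commit discards the log, and rollback replays it in reverse:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical store with an undo log; other code sees changes immediately.
class UndoLogStore {
    final Map<String, String> data = new HashMap<>();
    private final Deque<Runnable> undo = new ArrayDeque<>();

    void put(String key, String value) {
        final String old = data.get(key);
        final boolean existed = data.containsKey(key);
        undo.push(() -> {                 // record the inverse action
            if (existed) data.put(key, old); else data.remove(key);
        });
        data.put(key, value);             // the change is visible right away
    }

    void commit()   { undo.clear(); }                            // cheap: forget the log
    void rollback() { while (!undo.isEmpty()) undo.pop().run(); } // replay in reverse
}
```

The second strategy is the mirror image: buffer the actions instead of their inverses, run them on commit, and throw them away on rollback.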
Upvotes: 2
Reputation: 27813
I designed my unit of work and repository classes similarly to how it's described here on MSDN. The basic idea of the IUnitOfWork interface is that it handles all the database work itself.
I then added (to my IUnitOfWork interface and its implementations) a BeginTransaction() method, which opens a TransactionScope object, and an EndTransaction(bool commit) method. The latter handles closing the transaction, either committing it to the database (if true) or rolling it back (if false).
This allows me to control complicated transactions, letting multiple database operations be rolled back as a unit.
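The BeginTransaction()/EndTransaction(bool commit) shape described above can be transliterated into a minimal Java sketch, with a hypothetical in-memory map standing in for the database and for TransactionScope; all names are illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class InMemoryUnitOfWork {
    final Map<String, String> database = new HashMap<>();
    private List<Runnable> pending;   // non-null while a transaction is open

    void beginTransaction() {
        if (pending != null) throw new IllegalStateException("transaction already open");
        pending = new ArrayList<>();
    }

    void save(String key, String value) {
        if (pending != null) pending.add(() -> database.put(key, value)); // deferred
        else database.put(key, value);                                    // auto-commit
    }

    void endTransaction(boolean commit) {
        if (commit) pending.forEach(Runnable::run);  // apply everything, in order
        pending = null;                              // either way the transaction closes
    }
}
```

Calling `endTransaction(false)` simply drops the pending work, which is the rollback path; in the real C# version, disposing the TransactionScope without calling Complete() has the same effect.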
Edit: My line of thinking is that you want your unit of work to know about the repositories and not the other way around. This is my opinion, and you'll find people who prefer the opposite, but here's why.
When you want to deal with the database in some way, you want it all constrained by your current unit of work. So to me it makes logical sense to go through the unit of work in order to access your repositories, rather than having your repositories access your unit of work.
It also makes it easier if you need to branch out and do multiple things on different databases (for example, if history data is written to a different database than live data, or if you are doing horizontal database partitioning), since each database would have its own unit of work. If instead the repositories know about the unit of work, you need to create one unit of work per database, plus a copy of each repository for every unit of work that needs to access it.
Finally, exposing your repositories only through your unit of work keeps the API simple for developers. For starters, you only need to instantiate one object (the unit of work) instead of one unit of work plus however many repository objects you may need. It keeps your code simple (imho) and makes things a bit less error prone.
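The "repositories hang off the unit of work" shape argued for above looks roughly like this; a Java sketch with invented names, not any particular framework's API:

```java
import java.util.ArrayList;
import java.util.List;

// Trivial hypothetical repository: just an in-memory list per entity type.
class Repository<T> {
    private final List<T> items = new ArrayList<>();
    void add(T item) { items.add(item); }
    int count() { return items.size(); }
}

class UnitOfWork {
    // The developer instantiates one object and reaches everything through it;
    // each database would get its own UnitOfWork with its own repositories.
    final Repository<String> customers = new Repository<>();
    final Repository<String> orders = new Repository<>();
}
```

Everything done through `uow.customers` and `uow.orders` is then naturally scoped to that one unit of work, which is the point of the argument above.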
Upvotes: 6
Reputation: 56964
I would implement this by using a framework like NHibernate or Entity Framework. :) NHibernate allows you to use POCOs and does all the plumbing for you already.
Upvotes: 0