Reputation: 893
I have used WCF services before, and now I have a new project coming up. I am still in the design phase, and I wondered what the best way is to handle the following scenario.
I will have multiple clients connecting at the same time to my WCF service, firing different methods (Operation Contracts) on the service:
A. Some of the methods fired are just pure 'Read' methods (e.g. GetListOfCustomers).
B. Some of the methods fired are complex 'Read' methods (e.g. GetAllProductsByCustomerId). These kinds of methods require getting the customer from the DB, checking something on him, and then getting all the products bought by him (meaning there are 2 calls to the database in this method).
C. Some are 'Write' methods (e.g. 'RemoveCustomer' or 'SetProductOutOfStock').
My question is - how do I synchronize all these calls so that I don't get concurrency problems?
I don't want the entire service to process calls serially, because it will damage performance on the client side (some calls may require 3-4 seconds to process). So what is my solution?
Use a 'Single' instance for all clients with 'Multiple' threads and then use a lock object? Won't this just result in everything being serial anyway?
Or do I need a different lock object for the 'read' methods and a different lock object for the 'write' methods?
Or do I need a lock for the 'write' methods and some other mechanism for the 'read' methods?
This is my first question ever on StackOverflow. Thanks to anyone who can help!
Update: I am going to use 'Linq-To-SQL' as my ORM.
Upvotes: 4
Views: 1949
Reputation: 1714
My question is - how do I synchronize all these calls so that I don't get concurrency problems?
Why would you need to synchronize anything? The DBMS can handle synchronization quite well. Of course, if you know that you will have lots of writes that are going to degrade your read performance because of locks, then you will have to plan for that on an architectural level, but that does not have much to do with WCF. In that case, as hugh wrote, CQRS may be a fitting choice, though it is hard to say without the spec at hand.
I don't want the entire service to process calls serially, because it will damage performance on the client side (some calls may require 3-4 seconds to process). So what is my solution?
Then use PerCall instancing and either Single or Multiple concurrency if you can. Have a look here for details about instancing/concurrency combinations.
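For instance, here is a minimal sketch of what that could look like on the service class (the contract, entity and Linq-To-SQL context names are placeholders, not something from your code):

    using System.Collections.Generic;
    using System.Linq;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        List<Customer> GetListOfCustomers();
    }

    // PerCall instancing: every request gets its own service instance,
    // so there is no shared instance state that would need a lock object.
    [ServiceBehavior(
        InstanceContextMode = InstanceContextMode.PerCall,
        ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class CustomerService : ICustomerService
    {
        public List<Customer> GetListOfCustomers()
        {
            // Customer and ShopDataContext stand in for your
            // Linq-To-SQL entities and data context.
            using (var db = new ShopDataContext())
            {
                return db.Customers.ToList();
            }
        }
    }

With PerCall there is no shared instance state between requests, so there is nothing that needs a lock object in the first place; the database handles the rest.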
Use a 'Single' instance for all clients with 'Multiple' threads and then use a lock object? Won't this just result in everything being serial anyway?
Single instancing is going to give you issues with concurrency; see the article I linked above. It is mostly useful if you need a Singleton service.
UPDATE
In answer to your comment: I'm afraid I still do not understand where you expect to have concurrency issues. If you are worrying about the WCF service, just avoid using Single instancing: the most scalable option is PerCall/Multiple.
If you are asking yourself if you need to consider CQRS, feel free to read the article by Udi Dahan that I linked in another comment. Keep in mind, though, that CQRS takes some time to grok and adds complexity to your project that you may or may not need.
I would suggest not to overcomplicate your architecture unnecessarily; to me it sounds like a proper configuration of instancing/concurrency for your services is going to be enough. If you are unsure, write some prototypes faking the expected load for your system. Better safe than sorry, especially if it is going to be a long-lived application.
Upvotes: 0
Reputation: 31760
I suggest you read something about CQRS, an architectural pattern that goes some way towards addressing the challenge you are facing.
As an example solution for your situation, the following diagram is a CQRS architecture which could satisfy your requirement.
If you want further explanation I would be happy to provide.
UPDATE
The main principle is that you separate your database reads and writes into different databases.
Then you use replication to ensure your read database is up to date.
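As a rough sketch of that separation (all the names below are placeholders derived from the operations in the question; each implementation would point at its own database/connection string):

    using System.Collections.Generic;
    using System.ServiceModel;

    // Read side and write side exposed as separate contracts,
    // each one talking to its own database.
    [ServiceContract]
    public interface ICustomerReadService
    {
        [OperationContract]
        List<Customer> GetListOfCustomers();

        [OperationContract]
        List<Product> GetAllProductsByCustomerId(int customerId);
    }

    [ServiceContract]
    public interface ICustomerWriteService
    {
        [OperationContract]
        void RemoveCustomer(int customerId);

        [OperationContract]
        void SetProductOutOfStock(int productId);
    }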
You can also achieve the above architecture by combining both databases into one and both services into one. However, your original question was about database contention based on the fact that you have conflicting database usage patterns.
So this is why I was proposing CQRS as a solution - the write side of the database is decoupled from the read side. The read side can be optimised for selects (it can even be de-normalised for speed of access).
This means that you will not face the same contention issues as you would if you are doing both read and write operations through the same interface - this is your current approach.
Additionally you don't need to expose your read operations via a service - simple ADO will do fine and a service endpoint will just introduce latency.
You can still use linq2sql when you read from the read model and also when the db write service updates the data model.
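For example, a minimal read-side sketch with Linq-To-SQL, reading the read database directly without going through a service (the data context class and the connection string name are made up here):

    using System.Collections.Generic;
    using System.Configuration;
    using System.Linq;

    public class ProductReader
    {
        // Reads straight from the read model with Linq-To-SQL, no WCF hop
        // in between. "ReadModel" and ReadModelDataContext are placeholders.
        public List<Product> GetAllProductsByCustomerId(int customerId)
        {
            var connection = ConfigurationManager
                .ConnectionStrings["ReadModel"].ConnectionString;

            using (var db = new ReadModelDataContext(connection))
            {
                return db.Products
                         .Where(p => p.CustomerId == customerId)
                         .ToList();
            }
        }
    }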
Upvotes: 0
Reputation: 42669
You are designing for the web, and hence you need to embrace concurrency; there is no way out. Taking locks everywhere is not a good way to work on the web, in my opinion. Concurrent access is going to happen, and you need to think about how to deal with concurrency errors rather than how to prevent concurrency issues. Any concurrency control mechanism built over the web is of limited use and hard to build correctly.
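For example, with the Linq-To-SQL stack mentioned in the question you can lean on its built-in optimistic concurrency and handle a conflict when it actually happens, instead of trying to prevent it with locks. A rough sketch, with a made-up data context and entity:

    using System.Data.Linq;
    using System.Linq;

    public class ProductWriter
    {
        // Instead of taking locks, let the conflict happen and deal with it.
        // ShopDataContext is a placeholder Linq-To-SQL data context.
        public void SetProductOutOfStock(int productId)
        {
            using (var db = new ShopDataContext())
            {
                var product = db.Products.Single(p => p.Id == productId);
                product.InStock = false;

                try
                {
                    db.SubmitChanges(ConflictMode.ContinueOnConflict);
                }
                catch (ChangeConflictException)
                {
                    // Someone else changed the same row in the meantime:
                    // keep our change and refresh the rest from the database.
                    db.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
                    db.SubmitChanges();
                }
            }
        }
    }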
Upvotes: 0
Reputation: 34591
You should not be worried about data consistency problems, nor about concurrency while performing database queries. If I understood your situation correctly, all you need to make sure of is that you consistently use transactions around any series of database queries that you want to be "atomic". I'll try to explain it with an example:
1. Read the customers you need from the database.
2. Based on the result, run the follow-up queries that update the related data.
What you don't want to happen in this scenario is for the data to be changed by another query after the query from step 1 returns and before all the queries from step 2 have finished. For example, if one of the customers gets deleted in the meantime, there is no point in updating the related data - that could even lead to errors.
So what you need to do is just put something like BEGIN TRANSACTION before step 1 and COMMIT after step 2. Look up the exact syntax for the SQL dialect you're using.
This will ensure, basically, that the data you're working with does not change while your transaction is running. In fact, it will be locked by your transaction, and any other queries that work with the same data will wait for your transaction to finish. Databases do this kind of locking intelligently, always trying to lock as little data as possible.
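Since the question mentions Linq-To-SQL: you can get the same effect from .NET code with a TransactionScope instead of hand-written BEGIN TRANSACTION / COMMIT statements. A rough sketch, with made-up context and property names:

    using System.Linq;
    using System.Transactions;

    public class CustomerUpdater
    {
        // ShopDataContext, Customers, Products and InStock are placeholders;
        // the point is only that the read and the follow-up writes all run
        // inside one transaction.
        public void MarkCustomerProductsOutOfStock(int customerId)
        {
            using (var scope = new TransactionScope())
            using (var db = new ShopDataContext())
            {
                // (1) read
                var customer = db.Customers.Single(c => c.Id == customerId);

                // (2) write, based on what was just read
                foreach (var product in customer.Products)
                {
                    product.InStock = false;
                }
                db.SubmitChanges();

                // Nothing is committed until Complete() is called; disposing
                // the scope without it rolls everything back.
                scope.Complete();
            }
        }
    }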
Upvotes: 2