Reputation: 225
I have been struggling with a design pattern for how best to prevent duplicate posting of data.
Here are the steps:
Here is the scenario: the client submits data with GUID "1", then resubmits data with GUID "1" before step (5) has completed for the original submission. As a result, the transaction is processed twice.
What is the best design pattern to prevent this without using semaphores or blocking? The user should be able to resubmit, in case the first submission fails for some reason (hardware issue on the server side, etc.).
Thanks!
Upvotes: 5
Views: 2605
Reputation: 1
Make a table running_transactions where you store the GUIDs of transactions that are currently running, along the lines of the sketch below.
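A minimal T-SQL sketch of the idea (the table and column names here are just placeholders, and @ClientGUID is assumed to hold the GUID the client submitted):

CREATE TABLE running_transactions (
    Guid UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,  -- the PK also gives you the index on GUID
    StartedAt DATETIME NOT NULL DEFAULT GETDATE()
);

-- Before processing: try to claim the GUID; a duplicate key error
-- means the same submission is already in flight
INSERT INTO running_transactions (Guid) VALUES (@ClientGUID);

-- ... process the submission (steps 3-5) ...

-- After processing (success or failure): release the GUID so the
-- client can legitimately resubmit
DELETE FROM running_transactions WHERE Guid = @ClientGUID;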
Note: this approach only works if the table stays light (no more than about 1M records) so that the SELECT and DELETE stay quick. You can also index the GUID column.
Upvotes: 0
Reputation: 27900
Store the GUID in a column with a SQL UNIQUE constraint.
When you attempt (within the transaction) to insert a second duplicate GUID, the operation will fail, at which point you roll back the entire transaction.
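A sketch of what that could look like, assuming SQL Server (the table, constraint, and variable names are illustrative, not from the question):

CREATE TABLE Submissions (
    Guid UNIQUEIDENTIFIER NOT NULL,
    Payload NVARCHAR(MAX) NULL,
    CONSTRAINT UQ_Submissions_Guid UNIQUE (Guid)
);

BEGIN TRANSACTION;
    -- Fails with a duplicate key error if this GUID was already inserted;
    -- the application catches that error and rolls back the whole transaction
    INSERT INTO Submissions (Guid, Payload) VALUES (@ClientGUID, @Payload);
    -- ... steps 3-5 ...
COMMIT TRANSACTION;

The nice property of this approach is that the database itself enforces uniqueness, so you don't need application-level semaphores or blocking.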
Upvotes: 1
Reputation: 1
Could you hash the data that the user is providing and store it in a table, and check that the hash doesn't match any previous submission before continuing?
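A rough sketch of that idea in T-SQL, using HASHBYTES (SQL Server 2012+; the table and variable names are hypothetical):

CREATE TABLE submission_hashes (
    -- PK on the hash also catches the race between the check and the insert
    DataHash VARBINARY(32) NOT NULL PRIMARY KEY
);

-- @SubmittedData is assumed to hold the raw submitted payload
DECLARE @Hash VARBINARY(32) = HASHBYTES('SHA2_256', @SubmittedData);

IF NOT EXISTS (SELECT 1 FROM submission_hashes WHERE DataHash = @Hash)
BEGIN
    INSERT INTO submission_hashes (DataHash) VALUES (@Hash);
    -- ... continue processing the submission ...
END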
Upvotes: 0
Reputation: 1072
You could implement step 2 by using a query which reads uncommitted data. For instance, if you are using MS SQL Server, you could do this:
IF NOT EXISTS (SELECT * FROM SomeTable WITH (NOLOCK) WHERE Guid = @ClientGUID)
BEGIN
    -- Insert the GUID as early as possible in the transaction
    -- so that the next (read-uncommitted) query will see it
    INSERT INTO SomeTable (Guid) VALUES (@ClientGUID);
    -- Do steps 3-5
END
The key here is the NOLOCK hint, which causes the query to read uncommitted data.
Upvotes: 1
Reputation: 561
I don't know what you are using to develop your front end, but in a web application you can use AJAX to check the transaction status on the server, giving the user some feedback while they wait, and also disabling the submit option.
Upvotes: 0