Reputation: 105053
Here is what's happening in my RESTful web app:
What should it do? Fail the request and return an error to the client? Or should it start over from scratch (taking more time than the client expects)?
Upvotes: 8
Views: 10539
Reputation: 4877
Assuming this is not about DB transactions, and that distributed, long-running processes are involved in each step.
In this scenario the client should be sent an appropriate response (something like a 409 or 410 HTTP status) with details indicating that the request is no longer valid and that the client should try again. Retrying blindly could end up in loops or, in the worst case, end up doing something the client did not intend.
For example, when you book a hotel or a ticket online, you may get a response saying the price has changed since you started and you need to submit again to buy at the new price.
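For illustration, here is a minimal sketch of that idea in plain Python; the names (CURRENT_PRICES, book_ticket) and the payloads are made up, the point is only that a stale request gets a 409/410 instead of being retried behind the client's back:

```python
# Hypothetical sketch (no web framework): reject stale requests with 409/410.
from http import HTTPStatus

# Server-side source of truth; in a real app this lives in a database.
CURRENT_PRICES = {"ticket-42": 120.00}

def book_ticket(item_id: str, quoted_price: float) -> tuple:
    """Return an (http_status, body) pair for a booking request."""
    current = CURRENT_PRICES.get(item_id)
    if current is None:
        # The item disappeared while the client was deciding.
        return HTTPStatus.GONE, {"error": "item no longer available"}
    if current != quoted_price:
        # State changed under the client: ask it to resubmit with the new price.
        return HTTPStatus.CONFLICT, {"error": "price has changed",
                                     "current_price": current}
    # ... perform the actual booking here ...
    return HTTPStatus.OK, {"booked": item_id, "price": current}

print(book_ticket("ticket-42", 100.00))   # -> (<HTTPStatus.CONFLICT: 409>, {...})
```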
Upvotes: 2
Reputation: 13941
I think a good starting point is the concurrency model used by CouchDB. In essence:
- Every document carries a revision identifier.
- A client reads a document together with its current revision.
- An update (PUT) must quote the revision it was based on.
- If that revision is no longer the current one, the server rejects the update with 409 Conflict; the client then fetches the latest version, reconciles, and retries.
More reading:
http://wiki.apache.org/couchdb/Technical%20Overview#ACID_Properties
http://wiki.apache.org/couchdb/HTTP_Document_API#PUT
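As a rough sketch of the revision-check idea (my illustration, not CouchDB's actual code; DOCS and put_doc are made-up names):

```python
# Toy version of revision-based optimistic concurrency, CouchDB-style.
import uuid

DOCS = {"doc1": {"_rev": "1-abc", "price": 100}}

class Conflict(Exception):
    """Would map to an HTTP 409 Conflict response."""

def put_doc(doc_id: str, new_body: dict, based_on_rev: str | None = None) -> dict:
    current = DOCS.get(doc_id)
    if current is not None and current["_rev"] != based_on_rev:
        # Someone else updated the document first: the client must GET the
        # latest revision, merge its change, and retry the PUT.
        raise Conflict(f"expected rev {current['_rev']}, got {based_on_rev}")
    generation = 1 if current is None else int(current["_rev"].split("-")[0]) + 1
    new_doc = dict(new_body, _rev=f"{generation}-{uuid.uuid4().hex[:8]}")
    DOCS[doc_id] = new_doc
    return new_doc

put_doc("doc1", {"price": 90}, based_on_rev="1-abc")      # succeeds, new revision
try:
    put_doc("doc1", {"price": 80}, based_on_rev="1-abc")  # stale revision
except Conflict as exc:
    print("409:", exc)
```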
Upvotes: 4
Reputation: 54074
From my point of view, your question amounts to the following:
"If I try to do a read from the database, and another transaction tries to do a write, it will block. But when I finish my read, I will have missed the new data that will be populated by the new transaction that comes in after my read."
This is a bad way to think about it. You should make sure that clients get consistent data in their responses. If the data has been updated by the time they receive the response, that is not a problem of the original request.
Your worry is "the data is being updated right now, and I happen to know about it." But what if the data is updated right after the response goes out over the network?
IMHO choose the simplest solution that fits your requirements.
The clients should "poll" more frequently to make sure they always have the most recent copy of the data.
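As a sketch of how frequent polling can stay cheap, here is a conditional-GET idea using ETags; the ETag mechanism is my addition, not something this answer prescribes, and the names (RESOURCE, get_resource) are hypothetical:

```python
# Hypothetical sketch: a polling client plus a server-side conditional GET,
# so frequent polls cost almost nothing when nothing has changed.
import hashlib
import json

RESOURCE = {"status": "pending"}   # stand-in for the server's current state

def etag_of(resource: dict) -> str:
    return hashlib.sha256(json.dumps(resource, sort_keys=True).encode()).hexdigest()

def get_resource(if_none_match: str | None = None) -> tuple:
    """Return (status, body, etag); 304 with no body when the client is current."""
    tag = etag_of(RESOURCE)
    if if_none_match == tag:
        return 304, None, tag      # client's copy is still the most recent one
    return 200, RESOURCE, tag      # send the fresh copy plus its validator

status, body, tag = get_resource()                    # first poll: 200 + data
status2, body2, _ = get_resource(if_none_match=tag)   # later poll: 304, no body
print(status, status2)
```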
Upvotes: 1
Reputation: 916
Well, strictly speaking, a race condition is a bug. The solution to a race condition is to not have shared data. If that cannot be avoided for a given use case, then a first-come, first-served approach is usually helpful:
The first request locks the shared data and the second waits until the first is done with it.
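A minimal sketch of that, using an in-process lock purely for illustration (in a real multi-process web app this would be a database row lock or a distributed lock instead):

```python
# First-come, first-served on shared data: the second request simply waits.
import threading
import time

_booking_lock = threading.Lock()
seats_left = 1

def book_seat(client: str) -> str:
    global seats_left
    with _booking_lock:        # the second request blocks here until the first is done
        if seats_left == 0:
            return f"{client}: sold out"
        time.sleep(0.1)        # simulate slow work inside the critical section
        seats_left -= 1
        return f"{client}: booked"

threads = [threading.Thread(target=lambda c=c: print(book_seat(c))) for c in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```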
Upvotes: -1
Reputation: 65274
IMHO you should treat a REST request very much like a DB transaction: it should either succeed as a whole or fail as a whole, and concurrent requests should not observe each other's partial results.
Very often this can actually be handed down to a DB transaction, depending on how much and what non-DB work your request does.
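A hedged sketch of handing the whole request down to one DB transaction (sqlite3 here only for illustration; the schema and handle_booking_request are made up):

```python
# One transaction per request: either the whole booking happens, or nothing does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (item TEXT PRIMARY KEY, owner TEXT)")

def handle_booking_request(item: str, owner: str) -> int:
    """Return an HTTP-style status code for the whole request."""
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute("INSERT INTO bookings (item, owner) VALUES (?, ?)",
                         (item, owner))
        return 201
    except sqlite3.IntegrityError:
        # Another request won the race for this item; nothing was partially written.
        return 409

print(handle_booking_request("seat-1", "alice"))  # 201
print(handle_booking_request("seat-1", "bob"))    # 409, the first booking stands
```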
Upvotes: 4