Reputation: 321
I have an issue with concurrent requests to (any) one API endpoint (ASP.NET Web API 2). When a second request starts to be processed before the first one has completed, there can be concurrency issues with database entities.
Example:
1: Request R1 starts to be processed; it reads entities E1 and E2 into memory
2: Request R2 starts to be processed; it reads entities E2 and E1 into memory
3: Request R1 updates entity E1
4: Request R1 creates E3 if both E1 and E2 have been updated
5: Request R2 updates entity E2
6: Request R2 creates E3 if both E1 and E2 have been updated
The problem is that both requests read entities E1 and E2 before the other request has updated them. So both see outdated versions and E3 is never created.
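In code, the flow looks roughly like this (a sketch with made-up names; `AppDbContext`, the `Updated` flag and `E3Records` are placeholders):

```csharp
// R1 calls this with (id1, id2); R2 calls it with (id2, id1).
void ProcessRequest(int idToUpdate, int otherId)
{
    using (var db = new AppDbContext())
    {
        var mine  = db.ExampleEntities.Find(idToUpdate); // steps 1/2: read both entities
        var other = db.ExampleEntities.Find(otherId);

        mine.Updated = true;                             // steps 3/5: update own entity

        if (mine.Updated && other.Updated)               // steps 4/6: the check runs against
            db.E3Records.Add(new E3());                  // the stale in-memory copy of "other",
                                                         // so neither request creates E3
        db.SaveChanges();                                // one "read committed" transaction
    }
}
```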
All requests are processed in a single database transaction (isolation level "read committed", Entity Framework 6 on SQL Server 2017). In my current understanding, I don't think the issue can be solved on the data access/data store level (e.g. a stricter database isolation level or optimistic locking) but needs to be addressed higher up in the code hierarchy (request level).
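For reference, optimistic locking in EF6 means a rowversion column plus handling `DbUpdateConcurrencyException` on save. Note that it only detects two writers updating the *same* row, which is exactly why it does not catch the cross-entity condition above (R1 and R2 write different rows):

```csharp
using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Infrastructure;

public class ExampleEntity
{
    public int Id { get; set; }
    public bool Updated { get; set; }

    [Timestamp]                 // maps to a SQL Server rowversion column
    public byte[] RowVersion { get; set; }
}

public class ExampleService
{
    public void Save(AppDbContext db)
    {
        try
        {
            db.SaveChanges();   // EF appends "WHERE RowVersion = @original" to the UPDATE
        }
        catch (DbUpdateConcurrencyException)
        {
            // another request changed the same row after we read it: retry or report
        }
    }
}
```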
My proposal is to pessimistically lock all entities that could be updated when entering an API method. When another API request starts that needs a write lock on an entity that is already locked, it has to wait until all previous locks are released (queue). A request is normally processed in less than 250 ms.
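For illustration, the shared-store variant could lean on SQL Server's built-in `sp_getapplock` instead of a hand-rolled lock table: it takes an exclusive, transaction-scoped lock on an arbitrary resource name, and waiters queue across all servers that share the database. A sketch (the resource-name format and the 5-second timeout are assumptions):

```csharp
using (var db = new AppDbContext())
using (var tx = db.Database.BeginTransaction())
{
    // sp_getapplock returns >= 0 on success, < 0 on timeout/deadlock/error.
    var result = db.Database.SqlQuery<int>(
        @"DECLARE @r INT;
          EXEC @r = sp_getapplock
               @Resource = {0}, @LockMode = 'Exclusive',
               @LockOwner = 'Transaction', @LockTimeout = 5000;
          SELECT @r;",
        "ExampleEntity:" + exampleEntityId).Single();

    if (result < 0)
        throw new TimeoutException("Could not acquire lock on entity.");

    // ... read E1/E2, check, update, create E3 ...

    db.SaveChanges();
    tx.Commit();               // the app lock is released with the transaction
}
```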
This could be implemented with a custom attribute that decorates the API methods: [LockEntity(typeof(ExampleEntity), exampleEntityId)]. A single API method could be decorated with zero to many of these attributes. The locking mechanism would work with async/await if possible, or just put the thread to sleep. The lock synchronization needs to work across multiple servers, so the easiest option I see here is a representation in the application's data store (a single, global one).
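A minimal single-server sketch of such an attribute as a Web API 2 action filter. Since attribute arguments must be compile-time constants, it takes the *name* of the action parameter holding the id rather than the id value itself; `SemaphoreSlim` only coordinates within one process, so a multi-server setup would need the shared-store variant above instead:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class LockEntityAttribute : ActionFilterAttribute
{
    // One gate per entity type + id; SemaphoreSlim supports async waiting.
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    private readonly Type _entityType;
    private readonly string _idParameterName;
    private readonly string _requestKey = "entity-lock:" + Guid.NewGuid();

    public LockEntityAttribute(Type entityType, string idParameterName)
    {
        _entityType = entityType;
        _idParameterName = idParameterName;
    }

    public override async Task OnActionExecutingAsync(
        HttpActionContext actionContext, CancellationToken cancellationToken)
    {
        var key = _entityType.Name + ":" +
                  actionContext.ActionArguments[_idParameterName];
        var gate = Locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));

        await gate.WaitAsync(cancellationToken);   // queue without blocking a thread
        actionContext.Request.Properties[_requestKey] = gate;
    }

    public override Task OnActionExecutedAsync(
        HttpActionExecutedContext actionExecutedContext, CancellationToken cancellationToken)
    {
        object gate;
        if (actionExecutedContext.Request.Properties.TryGetValue(_requestKey, out gate))
            ((SemaphoreSlim)gate).Release();
        return Task.FromResult(0);
    }
}
```

Usage would then be [LockEntity(typeof(ExampleEntity), "exampleEntityId")] on the action method. With several of these attributes on one method, acquiring the locks in a consistent (e.g. alphabetical) order avoids deadlocks.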
This approach is complex and has some performance disadvantages (it blocks request processing and performs DB queries during lock processing), so I am open to suggestions or other ideas.
My questions:
1. Are there any terms that describe this problem? It seems like a very common thing, but it was unknown to me so far.
2. Are there any best practices on how to solve this?
3. If not, does my proposal make sense?
Upvotes: 2
Views: 1363
Reputation: 1764
I wouldn't hack around the statelessness of the web and REST, and I would not block the requests. 250 ms is a quarter of a second, which is pretty slow IMO. E.g. 100 concurrent requests would block each other, and at least one of them would wait for 25 seconds! This slows down the application a lot.
I've seen some apps (including Atlassian's Confluence) that notify the user that the current data set has been changed in the meantime by another user (sometimes the other user is mentioned as well). I would do it the same way using WebSockets, e.g. via SignalR. Other apps, like ServiceNow, merge the remote changes into the currently open set of data.
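With SignalR 2 on the server, broadcasting such a change notification is only a few lines (the hub and method names here are made up):

```csharp
using Microsoft.AspNet.SignalR;

// Hypothetical hub; clients subscribe and show a "changed by X" banner.
public class EntityHub : Hub { }

public static class EntityNotifier
{
    // Call this after a successful SaveChanges() on the server.
    public static void NotifyChanged(string entityType, int id, string changedBy)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<EntityHub>();
        context.Clients.All.entityChanged(entityType, id, changedBy);
    }
}
```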
If you don't want to use WebSockets, you could also try to merge or to check on the server side and notify the user with a specific HTTP status code and a message. Tell the user what changed in the meantime, try to merge, and ask whether the merge is fine or not.
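A sketch of the status-code variant in Web API 2, assuming a rowversion-style field that the client echoes back; 409 Conflict is the usual choice for a mid-air collision (`ExampleEntityDto` and `AppDbContext` are placeholders):

```csharp
using System.Linq;
using System.Net;
using System.Web.Http;

public class ExampleController : ApiController
{
    // The client sends back the RowVersion it originally loaded.
    public IHttpActionResult Put(int id, ExampleEntityDto dto)
    {
        using (var db = new AppDbContext())
        {
            var entity = db.ExampleEntities.Find(id);

            if (!entity.RowVersion.SequenceEqual(dto.RowVersion))
                return Content(HttpStatusCode.Conflict,   // 409
                    "Changed by another user in the meantime; reload and merge.");

            entity.Updated = true;
            db.SaveChanges();
            return Ok();
        }
    }
}
```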
Upvotes: 1