Tal Humy

Reputation: 1227

Lock create/update/delete requests per user

I have a payment system written in ASP.NET Core, Entity Framework, and SQL Server. I have to deal with many security situations where I need to prevent a user from performing 2 or more actions at the same time. For example:

Every time a payment transaction happens

  1. Check user balance and block if there is not enough credit.
  2. Perform the payment and reduce the user balance.

Now if a user fires 2 or more requests to create payments, some of the requests will pass the available-credit validation. Since I have many scenarios similar to this one, I thought about a general solution that can solve all of them: a middleware which will check each request and do the following:

  1. If the request is a GET request it will pass - this allows concurrent GET requests.
  2. If the request is a POST/PUT/DELETE request it will check if there is already an ongoing POST/PUT/DELETE for this specific user (assuming the user is logged in). If there is, a Bad Request response will be returned to the client.

In order to do this correctly and support more than one server, I understand that I need to do it at the database level. I know that I can lock a specific row in Oracle, and I'm thinking about locking the user row at the beginning of each UPDATE/CREATE/DELETE and releasing it at the end. What is the best approach to do this with EF? Is there a better solution?

I'm using the UnitOfWork pattern, in which each request has its own scope.
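For reference, the row-locking idea described above can be expressed in EF Core against SQL Server (the database used here) with a raw query inside a transaction. This is a minimal sketch, not a complete implementation; the `Users`/`Payments` entities, `Balance` column, and variable names are illustrative:

    // Sketch: take a pessimistic per-user lock at the database level.
    // UPDLOCK + ROWLOCK hold an update lock on this user's row until commit,
    // so a concurrent payment request for the same user blocks on this SELECT.
    using (var tx = dbContext.Database.BeginTransaction())
    {
        var user = dbContext.Users
            .FromSqlInterpolated($"SELECT * FROM Users WITH (UPDLOCK, ROWLOCK) WHERE Id = {userId}")
            .Single();

        if (user.Balance < amount)
            throw new InvalidOperationException("Insufficient credit.");

        user.Balance -= amount;
        dbContext.Payments.Add(new Payment { UserId = userId, Amount = amount });
        dbContext.SaveChanges();

        tx.Commit(); // releases the row lock
    }

Because the lock is held by the database, this works across multiple application servers, but it keeps the transaction open for the duration of the operation.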

Upvotes: 3

Views: 3295

Answers (2)

Kris

Reputation: 512

I stumbled recently across the same problem. For some reason some of my users are able to post a form twice, which results in duplicated data.

Even though this is an old question, I hope this helps someone.

Like you mentioned, one approach is to use the database for the locking, but like you, I couldn't find a solid implementation. I'm also assuming that you have a monolithic application, as @felix-b already mentioned a very good solution for distributed systems.

I went the way of making the threads that would normally run concurrently run in sequence instead. This solution may have some disadvantages, but I could not find any. Please let me know your thoughts.

So I solved it with a dictionary containing the UserId and a SemaphoreSlim.

Then I simply marked the controllers with an ActionFilter and throttled the execution of controller methods per user.

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class AtomicOperationPerUserAttribute : ActionFilterAttribute
{
    private readonly ILogger<AtomicOperationPerUserAttribute> _logger;
    private readonly IConcurrencyService _concurrencyService;

    public AtomicOperationPerUserAttribute(ILogger<AtomicOperationPerUserAttribute> logger, IConcurrencyService concurrencyService)
    {
        _logger = logger;
        _concurrencyService = concurrencyService;
    }

    public override void OnActionExecuting(ActionExecutingContext context)
    {
        int userId = YourWayToGetTheUserId; // mine was context.HttpContext.AppSpecificExtensionMethod()

        _logger.LogInformation($"User {userId} claims semaphore with RequestId {context.HttpContext.TraceIdentifier}");

        var semaphore = _concurrencyService.SemaphorePerUser(userId);

        // Block until any other currently running action of this user has finished.
        semaphore.Wait();
    }

    public override void OnActionExecuted(ActionExecutedContext context)
    {
        int userId = YourWayToGetTheUserId; // mine was context.HttpContext.AppSpecificExtensionMethod()

        var semaphore = _concurrencyService.SemaphorePerUser(userId);

        _logger.LogInformation($"User {userId} releases semaphore with RequestId {context.HttpContext.TraceIdentifier}");

        semaphore.Release();
    }
}

The "ConcurrencyService" is a Singleton registered in the Startup.cs.

public interface IConcurrencyService
{
    SemaphoreSlim SemaphorePerUser(int userId);
}

public class ConcurrencyService : IConcurrencyService
{
    public static ConcurrentDictionary<int, SemaphoreSlim> Semaphores = new ConcurrentDictionary<int, SemaphoreSlim>();

    public SemaphoreSlim SemaphorePerUser(int userId)
    {
        // The factory overload avoids allocating a new SemaphoreSlim on every lookup.
        return Semaphores.GetOrAdd(userId, _ => new SemaphoreSlim(1, 1));
    }
}

Since in my case I needed dependencies in the ActionFilter, I mark a controller action with [ServiceFilter(typeof(AtomicOperationPerUserAttribute))].

Accordingly, I registered the "services" in the Startup.cs:

services.AddScoped<AtomicOperationPerUserAttribute>();
services.AddSingleton<IConcurrencyService, ConcurrencyService>();
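To see why this serializes requests per user, here is a standalone sketch (not part of the answer's code; the `Demo` class and the simulated workload are mine) that fires ten concurrent "requests" for the same user through the same per-user SemaphoreSlim and measures how many ever ran at once:

    using System;
    using System.Collections.Concurrent;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    public static class Demo
    {
        static readonly ConcurrentDictionary<int, SemaphoreSlim> Semaphores =
            new ConcurrentDictionary<int, SemaphoreSlim>();

        static SemaphoreSlim SemaphorePerUser(int userId) =>
            Semaphores.GetOrAdd(userId, _ => new SemaphoreSlim(1, 1));

        // Runs 10 simulated requests for user 42 and reports the highest
        // number of them that were ever executing at the same time.
        public static int RunSameUserRequests()
        {
            int concurrent = 0, maxConcurrent = 0;

            var tasks = Enumerable.Range(0, 10).Select(_ => Task.Run(() =>
            {
                var sem = SemaphorePerUser(42);
                sem.Wait();
                try
                {
                    int now = Interlocked.Increment(ref concurrent);
                    if (now > maxConcurrent) maxConcurrent = now; // safe: only one thread is inside
                    Thread.Sleep(5); // simulate work
                    Interlocked.Decrement(ref concurrent);
                }
                finally { sem.Release(); }
            }));
            Task.WaitAll(tasks.ToArray());
            return maxConcurrent;
        }

        public static void Main() => Console.WriteLine(RunSameUserRequests()); // prints 1
    }

Requests from different users still run in parallel, because each user gets an independent semaphore.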
    

Upvotes: 1

felix-b

Reputation: 8498

I'd vote against using row locks as a mechanism of request synchronization:

  • even though Oracle is known for not escalating row locks, there are transaction-level and other optimizations that may decide to escalate, which can lead to reduced scalability and deadlocks.

  • if two users want to transfer money to each other, there's a chance they'll be deadlocked. (If you don't have such a feature right now, you may have it in the future, so better create an architecture that wouldn't be invalidated that easily).

Now, a system that returns "bad request" just because another request from the same user happens to take a longer time, is of course a fail-safe system, but its reliability (a metric of running without failures) suffers. I'd expect any payment system in the world to be both fail-safe and reliable.

Is there a better solution?

An architecture based on CQRS and shared-nothing approaches:

  • ASP.NET server ("web tier"):
    • directly performs read (GET) operations, as it does now
    • submits write (POST/PUT/DELETE) operations into a queue, and returns HTTP 200 immediately.
  • Application tier: a cluster of (micro)services that fetch and perform the write requests, in a shared-nothing manner:
    • at any moment, requests from any particular user are processed by at most one thread in the whole system (across all processes and machines).
    • the shared-nothing approach ensures that you never have to concurrently process requests from the same user.
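The submit-and-return flow above can be sketched in-process with System.Threading.Channels. This is only an illustration of the shape of the design; in production the Channel would be a durable queue (a Kafka topic, a database table, etc.), and the `PaymentCommand` fields are made up for the example:

    using System;
    using System.Collections.Generic;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    // A write command as the web tier would enqueue it; fields are illustrative.
    public record PaymentCommand(int UserId, decimal Amount);

    public static class WriteQueueDemo
    {
        // Web tier side: submit the command and return to the caller immediately.
        public static async Task SubmitAsync(ChannelWriter<PaymentCommand> writer, PaymentCommand cmd) =>
            await writer.WriteAsync(cmd);

        public static async Task<List<PaymentCommand>> Run()
        {
            var queue = Channel.CreateUnbounded<PaymentCommand>();

            await SubmitAsync(queue.Writer, new PaymentCommand(42, 10m));
            await SubmitAsync(queue.Writer, new PaymentCommand(42, 5m));
            queue.Writer.Complete();

            // Application-tier side: one consumer drains this user's commands in order.
            var processed = new List<PaymentCommand>();
            await foreach (var cmd in queue.Reader.ReadAllAsync())
                processed.Add(cmd);
            return processed;
        }

        public static async Task Main()
        {
            foreach (var cmd in await Run())
                Console.WriteLine($"processed {cmd.Amount} for user {cmd.UserId}");
        }
    }

Because a single consumer owns all commands of a given user, the balance check and the debit can never interleave with another request from that user.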

Implementing shared-nothing

The shared-nothing architecture can be implemented by partitioning (AKA sharding). For example:

  • you have N processing threads running (inside some processes) on a cluster of M machines
  • each machine is assigned a unique role to run a specific range of threads out of these N
  • each request from a user is always dispatched to the same specific thread by calculating: thread_index = HASH(User) % N, or if User ID is an integer: thread_index = USER_ID % N.
  • how the dispatched requests are passed to the processing threads depends on the chosen queue. For example, web tier can submit requests to N different topics, or it can directly push the requests to a distributed actor (see Akka.Net), or you can just use database table as a queue, and make each thread fetch the requests that belong to it.
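The dispatch calculation from the list above is a one-liner; a small sketch (the thread count N and user ID are arbitrary example values):

    using System;

    public static class PartitionDemo
    {
        // Map a user to one of N processing threads. For an integer user ID the
        // modulo alone is enough; for other key types apply a stable hash first.
        public static int ThreadIndexFor(int userId, int n) =>
            (userId % n + n) % n; // stays non-negative even for negative IDs

        public static void Main()
        {
            const int N = 8;
            Console.WriteLine(PartitionDemo.ThreadIndexFor(42, N)); // prints 2
        }
    }

The key property is stability: the same user always maps to the same thread index, so all of that user's requests land on the same single-threaded consumer.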

In addition, you'll need an orchestrator to ensure that each of the M machines is up and running. If a machine goes down, the orchestrator spins up another machine with the same role. For example, if you dockerize your services, you can use Kubernetes with StatefulSet.

Upvotes: 4
