Reputation: 1227
I have a payment system written in ASP.NET Core & Entity Framework & SQL Server. I have to deal with many security situations where I need to prevent two or more actions by the same user from being performed at the same time. For example:
Every time a payment transaction happens, the user's available credit is validated before the payment is created.
Now if a user fires 2 or more requests to create payments, some of the requests will pass the available credit validation even though together they exceed the credit. Since I have many scenarios similar to this one, I thought about a general solution that can solve all of them: a middleware which checks each request and rejects it if another request from the same user is still in progress.
In order to do this correctly and support more than one server, I understand that I need to do this at the database level. I know that I can lock a specific row in Oracle, and I'm thinking about locking the user row at the beginning of each UPDATE/CREATE/DELETE and releasing it at the end. What is the best approach to do this with EF? Is there a better solution?
I'm using the UnitOfWork pattern, where each request has its own scope.
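For illustration, this is roughly what I have in mind (just a sketch; I'm assuming EF Core's FromSqlInterpolated and SQL Server's UPDLOCK/HOLDLOCK hints here, and the Users/Payments names are made up):

// Sketch: pessimistic per-user row lock inside a transaction.
// _dbContext is the EF Core DbContext (requires Microsoft.EntityFrameworkCore).
public async Task CreatePaymentAsync(int userId, decimal amount)
{
    await using var transaction = await _dbContext.Database.BeginTransactionAsync();

    // UPDLOCK + HOLDLOCK keep the user row locked until the transaction
    // completes, so a concurrent request for the same user blocks here.
    var user = await _dbContext.Users
        .FromSqlInterpolated($"SELECT * FROM Users WITH (UPDLOCK, HOLDLOCK) WHERE Id = {userId}")
        .SingleAsync();

    if (user.Credit < amount)
        throw new InvalidOperationException("Insufficient credit.");

    user.Credit -= amount;
    _dbContext.Payments.Add(new Payment { UserId = userId, Amount = amount });

    await _dbContext.SaveChangesAsync();
    await transaction.CommitAsync();
}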
Upvotes: 3
Views: 3295
Reputation: 512
I recently stumbled across the same problem. For some reason some of my users were able to post a form twice, which resulted in duplicated data.
Even though this is an old question, I hope it helps someone.
Like you mentioned, one approach is to use the database for the locking, but like you, I couldn't find a solid implementation. I'm also assuming that you have a monolithic application; @felix-b mentioned a very good solution.
I went the route of making the requests that would normally run concurrently run in sequence instead. This solution may have disadvantages, but I could not find any. Please let me know your thoughts.
So I solved it with a dictionary mapping a UserId to a SemaphoreSlim.
Then I simply marked the controllers with an ActionFilter to throttle the execution of a controller method per user.
using System;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class AtomicOperationPerUserAttribute : ActionFilterAttribute
{
    private readonly ILogger<AtomicOperationPerUserAttribute> _logger;
    private readonly IConcurrencyService _concurrencyService;

    public AtomicOperationPerUserAttribute(ILogger<AtomicOperationPerUserAttribute> logger, IConcurrencyService concurrencyService)
    {
        _logger = logger;
        _concurrencyService = concurrencyService;
    }

    public override void OnActionExecuting(ActionExecutingContext context)
    {
        int userId = YourWayToGetTheUserId; // mine was context.HttpContext.AppSpecificExtensionMethod()
        _logger.LogInformation($"User {userId} claims semaphore with RequestId {context.HttpContext.TraceIdentifier}");
        // Blocks until any other in-flight request of the same user has finished.
        var semaphore = _concurrencyService.SemaphorePerUser(userId);
        semaphore.Wait();
    }

    public override void OnActionExecuted(ActionExecutedContext context)
    {
        // Runs after the action has completed, even if it threw an exception.
        int userId = YourWayToGetTheUserId; // mine was context.HttpContext.AppSpecificExtensionMethod()
        var semaphore = _concurrencyService.SemaphorePerUser(userId);
        _logger.LogInformation($"User {userId} releases semaphore with RequestId {context.HttpContext.TraceIdentifier}");
        semaphore.Release();
    }
}
The "ConcurrentService" is a Singleton registered in the Startup.cs.
using System.Collections.Concurrent;
using System.Threading;

public interface IConcurrencyService
{
    SemaphoreSlim SemaphorePerUser(int userId);
}

public class ConcurrencyService : IConcurrencyService
{
    // One SemaphoreSlim(1, 1) per user, effectively a per-user mutex.
    public static ConcurrentDictionary<int, SemaphoreSlim> Semaphores = new ConcurrentDictionary<int, SemaphoreSlim>();

    public SemaphoreSlim SemaphorePerUser(int userId)
    {
        // The factory overload avoids allocating a fresh semaphore on every lookup.
        return Semaphores.GetOrAdd(userId, _ => new SemaphoreSlim(1, 1));
    }
}
Since in my case I needed dependencies in the ActionFilter, I mark a controller action with [ServiceFilter(typeof(AtomicOperationPerUserAttribute))].
Accordingly, I registered the services in Startup.cs:
services.AddScoped<AtomicOperationPerUserAttribute>();
services.AddSingleton<IConcurrencyService, ConcurrencyService>();
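For completeness, usage then looks roughly like this (the controller, action, and model names are made up):

public class PaymentsController : Controller
{
    // All requests from the same user are serialized by the filter.
    [ServiceFilter(typeof(AtomicOperationPerUserAttribute))]
    [HttpPost]
    public IActionResult CreatePayment(PaymentModel model)
    {
        // ... create the payment ...
        return Ok();
    }
}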
Upvotes: 1
Reputation: 8498
I'd vote against using row locks as a mechanism of request synchronization:
even though Oracle is known for not escalating row locks, there are transaction-level and other optimizations that may decide to escalate, which can lead to reduced scalability and deadlocks.
if two users want to transfer money to each other, there's a chance they'll be deadlocked. (If you don't have such a feature right now, you may have it in the future, so better create an architecture that wouldn't be invalidated that easily).
Now, a system that returns "bad request" just because another request from the same user happens to take longer is of course fail-safe, but its reliability (a measure of running without failures) suffers. I'd expect any payment system in the world to be both fail-safe and reliable.
Is there a better solution?
An architecture based on CQRS and shared-nothing approaches:
The shared-nothing architecture can be implemented by partitioning (AKA sharding). For example: thread_index = HASH(User) % N, or, if the user ID is an integer, thread_index = USER_ID % N.
In addition, you'll need an orchestrator to ensure that each of the M machines is up and running. If a machine goes down, the orchestrator spins up another machine with the same role. For example, if you dockerize your services, you can use Kubernetes with a StatefulSet.
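To make the partitioning idea concrete, here is a minimal in-process sketch (the Channel-based dispatcher and all names are illustrative; in a real deployment each partition would be a queue consumed by one worker, supervised by the orchestrator):

using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public class PartitionedCommandDispatcher
{
    private const int N = 8; // number of partitions
    private readonly Channel<Func<Task>>[] _partitions = new Channel<Func<Task>>[N];

    public PartitionedCommandDispatcher()
    {
        for (int i = 0; i < N; i++)
        {
            _partitions[i] = Channel.CreateUnbounded<Func<Task>>();
            var reader = _partitions[i].Reader;
            // One consumer per partition: commands for the same user always
            // land in the same partition, so they execute in sequence.
            _ = Task.Run(async () =>
            {
                await foreach (var command in reader.ReadAllAsync())
                    await command();
            });
        }
    }

    public ValueTask DispatchAsync(int userId, Func<Task> command)
    {
        int index = userId % N; // thread_index = USER_ID % N
        return _partitions[index].Writer.WriteAsync(command);
    }
}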
Upvotes: 4