Theis Kristensen

Reputation: 415

Concurrent Web API requests and how to handle state in ASP.NET Core

I have an ASP.NET Core 2.1 application (Entity Framework) that consists of multiple Web API endpoints. One of them is a "join" endpoint, where users can join a queue. Another is a "leave" endpoint, where users can leave the queue.

The queue has 10 available spots.

If all 10 spots are filled, we throw a message back saying, "Queue is full."

If exactly 3 users have joined, we return true.

If the number of joined users is NOT 3 then we return false.

200 trigger-happy users are ready to join and leave different queues. They are all calling the "join" and "leave" endpoints concurrently.

This means that we have to handle the incoming requests in a sequential way, to ensure that users are added to and removed from the correct queues in a nice and controlled manner. (right?)

One option is to register the QueueService class with AddSingleton<> in the IServiceCollection and then use a lock() to ensure only one user at a time can enter. However, how do we then handle the dbContext, because it is registered as AddTransient<> or AddScoped<>?
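
The registrations I have in mind look roughly like this (a sketch; provider configuration omitted):

// In Startup.ConfigureServices (sketch):
services.AddDbContext<QueueContext>();   // scoped lifetime by default
services.AddSingleton<QueueService>();   // would capture the scoped QueueContext for the app's lifetime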

Pseudocode for the join part:

public class QueueService
{
    private readonly object _myLock = new object();
    private readonly QueueContext _context;

    public QueueService(QueueContext context)
    {
        _context = context;
    }

    public bool Join(int queueId, int userId)
    {
        lock (_myLock)
        {
            var numberOfUsersInQueue = _context.GetNumberOfUsersInQueue(queueId); // <-- Problem: _context is scoped/transient, but this service is a singleton.
            if (numberOfUsersInQueue >= 10)
            {
                throw new Exception("Queue is full.");
            }
            else
            {
                _context.AddUserToQueue(queueId, userId); // <-- Problem.
            }

            numberOfUsersInQueue = _context.GetNumberOfUsersInQueue(queueId); // <-- Problem.

            if (numberOfUsersInQueue == 3)
            {
                return true;
            }
        }
        return false;
    }
}

Another option is to make the QueueService transient, but then I lose the state of the service: a new instance is provided for every request, making the lock() pointless.

Questions:

[1] Should I handle the state of the queues in memory instead? If yes, how do I align it with the database?

[2] Is there a brilliant pattern I have missed? Or how could I handle this differently?

Upvotes: 4

Views: 10661

Answers (2)

Theis Kristensen

Reputation: 415

I ended up with this simple solution.

Create a static (or singleton) class with a ConcurrentDictionary<int, object> that maps each queueId to a lock object.

When a new queue is created, add the queueId and a new locking object to the dictionary.
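
A minimal sketch of that holder class could look like this (GetOrAdd is one thread-safe way to create the lock objects lazily; the registration step above can simply call it):

using System.Collections.Concurrent;

public static class ConcurrentQueuePlaceHolder
{
    // One lock object per queueId. ConcurrentDictionary makes the lookup
    // thread-safe, and GetOrAdd creates the lock the first time a queue is seen.
    public static readonly ConcurrentDictionary<int, object> Queues =
        new ConcurrentDictionary<int, object>();

    public static object GetLock(int queueId) =>
        Queues.GetOrAdd(queueId, _ => new object());
}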

Register the QueueService class with AddTransient<> and then:

public bool Join(int queueId, int userId)
{
    var someLock = ConcurrentQueuePlaceHolder.Queues[queueId];
    lock (someLock)
    {
        var numberOfUsersInQueue = _context.GetNumberOfUsersInQueue(queueId); // <-- Working
        if (numberOfUsersInQueue >= 10)
        {
            throw new Exception("Queue is full.");
        }
        else
        {
            _context.AddUserToQueue(queueId, userId); // <-- Working
        }

        numberOfUsersInQueue = _context.GetNumberOfUsersInQueue(queueId); // <-- Working

        if (numberOfUsersInQueue == 3)
        {
            return true;
        }
    }
    return false;
}

No more issues with the _context.

This way I can handle the concurrent requests in a nice and controlled manner.

If at some point multiple servers come into play, a message broker or ESB could also be a solution.

Upvotes: 6

usr

Reputation: 171178

This means that we have to handle the incoming requests in a sequential way, to ensure that users are added to and removed from the correct queues in a nice and controlled manner. (right?)

Essentially, yes. The requests can execute concurrently, but they have to synchronize with one another in some way (e.g. a lock or database transactions).

Should I handle the state of the queues in memory instead?

Often, it is best to make web applications stateless and requests independent. Under this model, state is stored in a database. The tremendous advantage is that no synchronization needs to take place in the application, and the application can run on multiple servers. Also, if the application restarts (or crashes), no state is lost.

This seems entirely possible and appropriate here.

Put the state in a relational database and use concurrency control to make concurrent access safe. For example, use the serializable isolation level and retry in case of deadlock. This gives you a very nice programming model: you pretend you are the only user of the database, yet it is perfectly safe. All access must be put into a transaction, and in case of deadlock the entire transaction must be retried.
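
A sketch of that approach, assuming an EF Core context with a hypothetical QueueMembers DbSet; the IsDeadlockOrSerializationFailure check is also hypothetical, since the error that signals a deadlock or serialization failure is provider-specific (for SQL Server, a SqlException with number 1205):

using System;
using System.Data;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public bool Join(int queueId, int userId)
{
    // Under serializable isolation the database may abort one of two
    // conflicting transactions; the losing transaction must be retried.
    for (var attempt = 0; attempt < 3; attempt++)
    {
        try
        {
            using (var tx = _context.Database.BeginTransaction(IsolationLevel.Serializable))
            {
                var count = _context.QueueMembers.Count(m => m.QueueId == queueId);
                if (count >= 10)
                {
                    throw new Exception("Queue is full.");
                }

                _context.QueueMembers.Add(new QueueMember { QueueId = queueId, UserId = userId });
                _context.SaveChanges();
                tx.Commit();

                return count + 1 == 3; // true when this join makes exactly 3 members
            }
        }
        catch (Exception ex) when (IsDeadlockOrSerializationFailure(ex)) // hypothetical provider-specific check
        {
            // Lost a concurrency conflict; loop around and retry the whole transaction.
        }
    }
    throw new Exception("Could not join the queue after several retries.");
}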

If you insist on in-memory state, then split the singleton and transient components. Put the global state into a singleton class and put the operations acting on that state into a transient one. That way, dependency injection of transient components such as database access is very easy and clean. The global class should be tiny (maybe just the data fields).
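
A sketch of that split (the class names are hypothetical): the singleton holds nothing but the shared lock objects, and the transient service receives both it and the scoped context through constructor injection:

using System.Collections.Concurrent;

// Singleton: tiny, holds nothing but shared state.
public class QueueLockRegistry
{
    public ConcurrentDictionary<int, object> Locks { get; } =
        new ConcurrentDictionary<int, object>();
}

// Transient: holds the operations and can freely depend on scoped services.
public class QueueService
{
    private readonly QueueLockRegistry _registry;
    private readonly QueueContext _context;

    public QueueService(QueueLockRegistry registry, QueueContext context)
    {
        _registry = registry;
        _context = context;
    }

    public bool Join(int queueId, int userId)
    {
        var queueLock = _registry.Locks.GetOrAdd(queueId, _ => new object());
        lock (queueLock)
        {
            // ... same body as in the question, now with a correctly scoped _context.
        }
        return false;
    }
}

// In Startup.ConfigureServices:
// services.AddSingleton<QueueLockRegistry>();
// services.AddTransient<QueueService>();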

Upvotes: 6
