Ari

Reputation: 3127

Handling limitations in multithreaded server

In my client-server architecture I have a few API functions whose usage needs to be limited. The server is written in C#/.NET and runs on IIS.

Until now I didn't need to perform any synchronization. The code was written in such a way that even if a client sent the same request multiple times (e.g. a create-something request), one call would succeed and all the others would fail with an error (because of the server code and the DB structure).

What is the best way to enforce such limits? For example, I want no more than one call of the API method foo() per user per minute.

I thought about a SynchronizationTable with a single unique_text column: before computing a foo() call I would write something like foo{userId}{date}{HH:mm} to this table. If the insert succeeds, I know there was no foo call from that user in the current minute.

I think there is a much better way, probably in the server code, without using the DB for this. Of course, there could be thousands of users calling foo.
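For what it's worth, the DB-free variant of that idea can be sketched in memory (a minimal sketch only, assuming a single worker process; the names `FooRateLimiter` and `TryCall` are invented for illustration):

```csharp
using System;
using System.Collections.Concurrent;

// Minimal per-user, per-minute rate limiter kept in process memory.
// Note: this state is lost whenever the IIS worker process recycles,
// and old entries are never evicted in this sketch.
public static class FooRateLimiter
{
    // Maps "userId|yyyyMMddHHmm" -> dummy value; presence means "already called".
    private static readonly ConcurrentDictionary<string, byte> Calls =
        new ConcurrentDictionary<string, byte>();

    // Returns true if the user is allowed to call foo() in this minute.
    public static bool TryCall(string userId, DateTime now)
    {
        string key = userId + "|" + now.ToString("yyyyMMddHHmm");
        // TryAdd is atomic: exactly one concurrent caller wins the slot.
        return Calls.TryAdd(key, 0);
    }
}

public static class Program
{
    public static void Main()
    {
        var t = new DateTime(2013, 1, 1, 12, 0, 0);
        Console.WriteLine(FooRateLimiter.TryCall("user-1", t));                // True
        Console.WriteLine(FooRateLimiter.TryCall("user-1", t.AddSeconds(30))); // False (same minute)
        Console.WriteLine(FooRateLimiter.TryCall("user-1", t.AddMinutes(1)));  // True (next minute)
    }
}
```

The `TryAdd` call plays the role of the unique-column insert: only one concurrent request per (user, minute) key can succeed.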

To clarify what I need: I think it could be some lightweight DictionaryMutex.

For example:

private static DictionaryMutex FooLock = new DictionaryMutex();

FooLock.lock(User.GUID);
try
{
    ...
}
finally
{
    FooLock.unlock(User.GUID);
}

EDIT: A solution in which one user cannot call foo twice at the same time would also be sufficient for me. By "at the same time" I mean that the server starts handling the second call before returning the result of the first.
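The `DictionaryMutex` sketch above can be filled in with a `ConcurrentDictionary` of `SemaphoreSlim`s (again only a sketch, valid within a single worker process; `TryLock`/`Unlock` are invented names, and the per-key semaphores are never removed here):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Per-key lock table: at most one thread at a time may hold the lock for a given key.
public static class DictionaryMutex
{
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    // Tries to enter the per-user critical section without blocking.
    // Returns false if another call for the same key is already in progress.
    public static bool TryLock(string key)
    {
        var sem = Locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        return sem.Wait(0);
    }

    public static void Unlock(string key)
    {
        SemaphoreSlim sem;
        if (Locks.TryGetValue(key, out sem))
            sem.Release();
    }
}

public static class Program
{
    public static void Main()
    {
        if (DictionaryMutex.TryLock("user-guid"))
        {
            try
            {
                Console.WriteLine("handling foo()"); // the actual work goes here
            }
            finally
            {
                DictionaryMutex.Unlock("user-guid");
            }
        }
        else
        {
            Console.WriteLine("rejected: foo() already in progress for this user");
        }
    }
}
```

Rejecting with `Wait(0)` instead of blocking matches the EDIT: the second overlapping call fails fast rather than queuing behind the first.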

Upvotes: 3

Views: 85

Answers (1)

usr

Reputation: 171226

Note that keeping this state in memory in an IIS worker process opens up the possibility of losing all of it at any instant. Worker processes can restart for any number of reasons.

Also, you probably want to have two web servers for high availability. Keeping state inside worker processes makes the application no longer clustering-ready. This is often a no-go.

Web apps really should be stateless, for many reasons. If you can help it, don't manage your own data structures like those suggested in the question and comments.

Depending on how big the call volume is, I'd consider these options:

  1. SQL Server. Your queries are extremely simple and easy to optimize for. Expect thousands of such queries per second per CPU core. This can bear a lot of load. You can use SQL Server Express for free.
  2. A specialized store like Redis. Stack Overflow uses Redis as a persistent, clustering-enabled cache. A good idea.
  3. A distributed cache, like Microsoft Velocity. Or others.

This storage problem is rather easy because it fits a key/value store model well. And the data is nearly worthless, so you don't even need backups.

I think you're overestimating how costly this rate limiting will be. Your web service is probably doing far more costly things than a single UPDATE by primary key against a simple table.

Upvotes: 3
