Justin Homes

Reputation: 3799

SignalR for chat with a big user base

I have an ASP.NET MVC3 application.

If my application had a large number of users – let's say 100,000 – hypothetically, if all users were talking to each other and I used SignalR, would there be 100,000 long polling connections? Would these cause some sort of denial of service?

Hypothetically, if my application had a large number of users – let's say 100,000 – all talking to each other, and I used SignalR, would there be 100,000 long-polling connections? Would these cause some sort of denial of service?

Should I be using plain AJAX over HTTP instead? Or would SignalR be smart enough to release a connection back to the resource pool when no activity is seen for a certain period of time?

When would SignalR be recommended for chat versus AJAX?

Upvotes: 4

Views: 3604

Answers (1)

BenSwayne

Reputation: 16900

You can disable all the DoS attack protections on a Windows server fairly easily. However, that won't necessarily solve your problem: 100,000 connections will require multiple servers for something like this.

Your first limitation is that per IP address you've only got about 65,000 possible TCP ports to service connections (65,535 total, less reserved ports and so on, rounded off). So you either need a massive server with multiple IP addresses (likely unreliable, and a single point of failure in your system/app) or you need multiple servers behind some kind of load balancer.
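
To put rough numbers on that, here's a back-of-envelope sketch in C#. MaxUserPort is a real Windows TCP/IP registry parameter, but whether it is set (and what the OS default is) varies by Windows version, so treat this as illustrative:

    using System;
    using Microsoft.Win32;

    class PortBudget
    {
        static void Main()
        {
            // MaxUserPort caps the ephemeral port range; if the value is absent,
            // the OS default applies (5000 on older servers, ~65534 on newer ones).
            using (var tcpip = Registry.LocalMachine.OpenSubKey(
                @"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"))
            {
                object maxUserPort = tcpip?.GetValue("MaxUserPort");
                Console.WriteLine("MaxUserPort: " + (maxUserPort ?? "(OS default)"));
            }

            // The estimate above: 65,535 ports, minus reserved/well-known ports,
            // rounded off to roughly 65,000 usable ports per IP address.
            const int totalPorts = 65535;
            const int reservedPorts = 1024; // well-known ports 0-1023
            Console.WriteLine("Rough per-IP port budget: " + (totalPorts - reservedPorts));
        }
    }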

Also, with long polling you see a constant turnover of connections as each long-polling connection ends and a new one is started. TCP ports are not reused instantly; the fastest configurable TCP timed wait delay is 30 seconds. So even 65,000 connections is unrealistic; I'd halve that just for port reuse. Then you need to factor in any other HTTP requests arriving at that server for web pages, a REST API, or other static resources, plus whatever processing the CPUs/memory must do to save and format data. So I'd probably reduce that by another half. I'd say 15,000 clients per server is a realistic maximum, which for 100,000 users means a minimum of about 7 servers in a load-balanced cluster.
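
If you want to watch that turnover happen, .NET can snapshot the machine's TCP table; a minimal sketch (the arithmetic in the comments just restates the sizing estimate above):

    using System;
    using System.Linq;
    using System.Net.NetworkInformation;

    class TimeWaitCheck
    {
        static void Main()
        {
            // Count sockets parked in TIME_WAIT; with long polling, every
            // completed poll holds its port here for the timed wait delay.
            var conns = IPGlobalProperties.GetIPGlobalProperties().GetActiveTcpConnections();
            int timeWait = conns.Count(c => c.State == TcpState.TimeWait);
            Console.WriteLine("TIME_WAIT sockets: {0} of {1} total", timeWait, conns.Length);

            // The sizing estimate above: halve ~65,000 for port reuse, halve
            // again for other HTTP traffic and processing overhead.
            int perServer = 65000 / 2 / 2;                       // ~16,000; call it 15,000
            int servers = (100000 + perServer - 1) / perServer;  // ceiling division -> 7
            Console.WriteLine("~{0} clients/server -> {1} servers for 100,000 users",
                perServer, servers);
        }
    }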

Last I checked, SignalR did not work in a multi-server environment like that. Likewise, AJAX or any other "frequent refresh" method will suffer the same physical limits on the number of available TCP ports/sockets. You just can't service 100,000 clients on one server with that high a frequency of HTTP requests.
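
For reference, the SignalR usage in question boils down to a hub broadcast like this minimal sketch (the hub and method names are illustrative, and the Clients.All style shown is from later SignalR releases). It also shows why all-to-all chat is expensive: every Send fans out to every connected client:

    using Microsoft.AspNet.SignalR;

    // With N connected users chatting, each message sent produces N deliveries,
    // so total traffic grows with the square of the user count.
    public class ChatHub : Hub
    {
        public void Send(string name, string message)
        {
            // Invokes the "broadcastMessage" handler on every connected client.
            Clients.All.broadcastMessage(name, message);
        }
    }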

I've done a fair amount of testing with large-scale loads like this using WebSync for ASP.NET across multiple servers on Amazon EC2. I work for FrozenMountain, and one of my jobs this last year was to do some multi-server, load-balanced testing on the Amazon EC2 cloud. Amazon's cloud services offer a nice sticky load balancer and easy duplication of your servers for testing. In "laboratory conditions" (dedicated servers not doing anything else) we could exceed 20k clients on Amazon's "large instance", a quad-core server with 7.5 GB of RAM.

It's also worth pointing out that with the latest Windows Server/IIS/WebSync you get WebSocket support, which helps reduce port turnover and reuse because each client can maintain a single persistent socket connection to the web server. That can potentially increase your client count per server, so you might be able to go from a 7-server cluster down to maybe 4-5 servers, depending on the adoption rate of WebSocket-capable browsers/clients. (Browser-based JavaScript clients won't have a high adoption rate at first, but native clients such as iPhone or Android apps will all have WebSocket support, so there you'll see the full benefit right away.) WebSync will fall back from WebSockets to long polling if the client doesn't support WebSockets.
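
As an illustration of that negotiate-then-fall-back behavior, here's what it looks like from a SignalR .NET client (a later API than existed when this was written; the URL and hub/method names are placeholders, and WebSync's own client performs the equivalent negotiation):

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR.Client;

    class ChatClient
    {
        static async Task Main()
        {
            // Start() negotiates the best transport with the server: WebSockets
            // when both ends support it, otherwise falling back toward long polling.
            var connection = new HubConnection("http://example.com/signalr");
            var chat = connection.CreateHubProxy("ChatHub");
            chat.On<string, string>("broadcastMessage",
                (name, msg) => Console.WriteLine(name + ": " + msg));

            await connection.Start();
            Console.WriteLine("Negotiated transport: " + connection.Transport.Name);
            await chat.Invoke("Send", "alice", "hello");
        }
    }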

Upvotes: 6
