Reputation: 367
I'm writing a Java TCP/HTTP server that needs to handle thousands of connections through a non-blocking I/O selector. So I'm trying to handle all connections inside the same selector thread, but some requests may take a long time to complete. What can I do in that situation? Go back to using threads?
Upvotes: 0
Views: 583
Reputation: 1853
There are three ways of doing this, not counting, of course, the old-school one-thread-per-connection approach, which as you know does not scale:
1. You basically use a concurrent queue (e.g. CoralQueue) to distribute the requests' work (not the requests themselves) to a fixed number of threads that will execute in parallel. Let's say you have 1000 simultaneous connections. Instead of having 1000 threads, you can check how many CPU cores your machine has and choose a much smaller number of threads. The flow would be:
request -> selector -> demux -> worker threads -> mux -> selector -> response
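A minimal sketch of that demux/mux flow, with `java.util.concurrent` standing in for CoralQueue and all class/method names being hypothetical (no sockets, just the hand-off between the selector thread and the worker pool):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// The selector thread submits CPU-bound work to a small pool (the "demux"),
// and workers push results onto a response queue (the "mux") that the
// selector thread drains on its next loop iteration before writing responses.
public class WorkerPoolSketch {
    private final ExecutorService workers =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    private final Queue<String> responses = new ConcurrentLinkedQueue<>();

    // Called from the selector thread: never blocks.
    public void submit(String request) {
        workers.submit(() -> {
            String result = request.toUpperCase(); // stand-in for heavy CPU work
            responses.add(result);                 // mux back to the selector
            // a real server would also call selector.wakeup() here
        });
    }

    // Called from the selector thread on each loop iteration.
    public String pollResponse() {
        return responses.poll();
    }

    public void shutdown() throws InterruptedException {
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        WorkerPoolSketch s = new WorkerPoolSketch();
        s.submit("hello");
        s.shutdown();
        System.out.println(s.pollResponse()); // prints HELLO
    }
}
```

The key property is that both `submit` and `pollResponse` return immediately, so the selector thread keeps servicing the other 999 connections while the workers grind.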
2. As @David Schwartz said, you can make your outbound network calls through the same selector (and thread) that receives requests. They will be asynchronous network calls that never block the selector's main thread. You can see some source code for this solution in this article.
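A self-contained sketch of that idea with plain NIO: an outbound connection is registered with the same selector that serves inbound clients, so the connect completes as an `OP_CONNECT` event instead of blocking. A loopback listener stands in for the remote service, and the class/method names are hypothetical:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SameSelectorOutbound {

    // Returns true once the outbound connect and the inbound accept have both
    // been handled as selector events on a single thread.
    public static boolean demo() {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open();
             SocketChannel outbound = SocketChannel.open()) {

            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            // Outbound call made from the selector thread: non-blocking connect.
            outbound.configureBlocking(false);
            boolean connected = outbound.connect(server.getLocalAddress());
            if (!connected) {
                outbound.register(selector, SelectionKey.OP_CONNECT);
            }

            boolean accepted = false;
            while (!(connected && accepted)) {
                if (selector.select(2000) == 0) return false; // nothing ready
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {
                        SocketChannel peer = server.accept();
                        if (peer != null) {
                            accepted = true;
                            peer.close(); // a real server would keep serving it
                        }
                    } else if (key.isConnectable()) {
                        if (((SocketChannel) key.channel()).finishConnect()) {
                            connected = true;
                        }
                    }
                }
                selector.selectedKeys().clear();
            }
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()
            ? "outbound connected without blocking the selector"
            : "demo failed");
    }
}
```

The same pattern extends to the reads and writes on the outbound channel: register `OP_READ`/`OP_WRITE` interest and react to readiness events, never issuing a blocking call from the selector thread.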
3. You can use a distributed-system architecture, where the blocking operation is performed on a separate node. Your server would just pass asynchronous messages to the nodes responsible for the heavy-duty work, wait, but never block. For more information about how asynchronous message queues work, you can check this article.
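A toy sketch of that message flow, with an in-process queue standing in for a real broker between machines and all names being hypothetical: the "server node" publishes a task and gets back a future, while the "worker node" consumes tasks on its own thread, where blocking is fine.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncNodeSketch {
    record Task(String payload, CompletableFuture<String> reply) {}

    // Stand-in for the message broker between the server and worker nodes.
    private final BlockingQueue<Task> broker = new LinkedBlockingQueue<>();

    // "Worker node": runs on its own thread, free to block on slow work.
    public void startWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Task t = broker.take(); // blocking is fine on this node
                    t.reply().complete("done:" + t.payload());
                }
            } catch (InterruptedException ignored) {
                // shutdown
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // "Server node": called from the selector thread, never blocks.
    public CompletableFuture<String> send(String payload) {
        CompletableFuture<String> reply = new CompletableFuture<>();
        broker.offer(new Task(payload, reply));
        return reply;
    }

    public static void main(String[] args) {
        AsyncNodeSketch node = new AsyncNodeSketch();
        node.startWorker();
        System.out.println(node.send("report").join()); // prints done:report
    }
}
```

In a real deployment the selector thread would attach the future's completion to the connection and write the response on a later loop iteration, rather than calling `join()`.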
The bottom line: if you are doing I/O, you should most certainly go with option #2. If you are doing CPU-bound computations, you should most certainly go with option #1. If you'd rather not worry about that distinction at all, think in terms of a distributed system and go with option #3.
Disclaimer: I'm one of the developers of CoralQueue.
Upvotes: 1