Reputation: 84822
I can't seem to find a good, accessible explanation of the "Leader/Follower" pattern. All the explanations I've found are completely meaningless, like the one in [1].
Can anyone explain to me the mechanics of how this pattern works, and why and how it improves performance over more traditional asynchronous I/O models? Examples and links to diagrams are appreciated too.
Upvotes: 47
Views: 15464
Reputation: 8908
Most people are familiar with the classical pattern of using one thread per request. This can be illustrated as in the following diagram. The server maintains a thread pool, and every incoming request is assigned a thread that will be responsible for processing it.
In the leader/follower pattern, one of these threads is designated as the current leader. This thread is responsible for listening to multiple connections. When a request comes in:

- The leader accepts the event from the shared event source.
- It promotes one of the waiting follower threads to become the new leader.
- The former leader then processes the request itself, in the same thread that detected it.
- When it finishes, it rejoins the pool as a follower (or becomes the leader again if there is none).
The following diagram illustrates how that works at a high level.
Compared to the one-thread-per-request pattern, the leader/follower pattern makes more efficient use of resources, especially in scenarios with a large number of concurrent connections. One-thread-per-request needs a dedicated thread per connection, and threads are costly, which makes that approach infeasible when resources are limited.
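The mechanics above can be sketched as a minimal thread pool. This is an illustrative sketch, not a production implementation: the class and method names are made up, and a `queue.Queue` stands in for the shared handle set the leader waits on.

```python
import queue
import threading

class LeaderFollowerPool:
    """Sketch of a leader/follower pool: exactly one thread at a time (the
    leader) waits on the shared event source; after taking an event it
    promotes a follower, then processes the event in the same thread."""

    def __init__(self, n_threads, event_source, handler):
        self.events = event_source            # shared event source (here: a Queue)
        self.handler = handler
        self.lock = threading.Lock()
        self.leader_free = threading.Condition(self.lock)
        self.has_leader = False
        self.threads = [threading.Thread(target=self._work, daemon=True)
                        for _ in range(n_threads)]
        for t in self.threads:
            t.start()

    def _work(self):
        while True:
            # Followers sleep here until they can become the leader.
            with self.lock:
                while self.has_leader:
                    self.leader_free.wait()
                self.has_leader = True
            # As leader, wait for the next event on the shared source.
            event = self.events.get()
            # Hand off leadership *before* processing, so another thread
            # can accept the next event concurrently.
            with self.lock:
                self.has_leader = False
                self.leader_free.notify()
            if event is None:                 # shutdown sentinel
                self.events.put(None)         # let the next leader see it too
                return
            self.handler(event)               # process in this same thread
```

Note that no event is ever handed from one thread to another: the thread that detects the event is the thread that handles it, which is the core difference from a listener-plus-worker-queue design.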
The following paper contains a more thorough analysis of the pattern and advantages/disadvantages when compared with other techniques: http://www.kircher-schwanninger.de/michael/publications/lf.pdf
Upvotes: 4
Reputation: 1060
As you might have read, the pattern consists of 4 components: ThreadPool, HandleSet, Handle, ConcreteEventHandler (implements the EventHandler interface).
You can think of it as a taxi station at night, where all the drivers are sleeping except for one, the leader. The ThreadPool is a station managing many threads - cabs.
The leader is waiting for an IO event on the HandleSet, like how a driver waits for a client.
When a client arrives (in the form of a Handle identifying the IO event), the leader driver wakes up another driver to be the next leader and serves the request of his own passenger.
While he is taking the client to the given address (calling ConcreteEventHandler and handing over Handle to it) the next leader can concurrently serve another passenger.
When a driver finishes, he takes his taxi back to the station and falls asleep if there is already a leader waiting. Otherwise he becomes the leader.
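The taxi-station analogy maps directly onto thread states. Here is a small sketch of that mapping; the names (`DriverState`, `promote_and_serve`) are invented for illustration, and a plain dict stands in for the ThreadPool:

```python
from enum import Enum

class DriverState(Enum):
    LEADER = "waiting at the front of the station"    # waiting on the HandleSet
    PROCESSING = "driving a passenger"                # running ConcreteEventHandler
    FOLLOWER = "sleeping in the station"              # idle in the ThreadPool

def promote_and_serve(drivers, fare):
    """A fare (the Handle) arrives: the current leader takes it, and one
    sleeping follower is woken up to become the new leader."""
    leader = next(d for d, s in drivers.items() if s is DriverState.LEADER)
    drivers[leader] = DriverState.PROCESSING
    new_leader = next((d for d, s in drivers.items()
                       if s is DriverState.FOLLOWER), None)
    if new_leader is not None:
        drivers[new_leader] = DriverState.LEADER
    return leader, fare  # this driver now handles this fare himself
```

So after each arrival there is again exactly one leader (if any driver is free), and the driver who spotted the fare is the one who drives it.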
The pros for this pattern are:

- Better CPU cache affinity: the thread that waits for the event is the one that processes it.
- No synchronized queue, dynamic allocation, or data copying is needed to hand a request from a listener thread to a worker thread.
- No context switch between detecting an event and starting to handle it.
The cons are:

- Implementation complexity: the leader-promotion protocol is harder to get right than a simple work queue.
- Lack of flexibility: it is hard to discard or reorder events, and the pool is tied to a single event source.
Upvotes: 88
Reputation: 4995
I want to add to Jake's answer by linking another PDF from the same author that details a use case where they chose the Leader/Follower pattern over other alternatives: http://www.dre.vanderbilt.edu/~schmidt/PDF/OM-01.pdf
Upvotes: 3
Reputation: 49
I think the most 'hands on' example is Phusion Passenger and a Rails application. (The name is a bit misleading: Passenger is actually a mod_rails Apache module that manages many Ruby apps that are sleeping/parked.)
So you develop a Rails app or Sinatra app, then deploy it on a web server with Passenger installed.
The leader is actually the Passenger load balancer. The passenger is someone visiting a webpage. The taxi cabs are the sleeping instances of your Rails app in the Passenger pool.
When you set this pool size to 45, you're saying: I want 45 taxis ready to serve the webpages. When someone visits a virtual host, Passenger delegates the request to one of the 45 waiting Rails apps. The apps don't have to communicate with each other because they're all connected to the same database backend (or multiple backends, if you replicate your data).
The cool thing is that even though the separate processes might take a while to process a request, the total system is really efficient/fast because you have 45 of them ready to handle the requests. So even if the first taxi (Rails app) isn't back from its ride (serving a requested page), the second waiting instance in line can be used for the next request. As soon as the first finishes, it also goes back to the queue, and this way you can stay responsive and easily get 4000+ page requests/sec even though a single Rails app can only handle 400 req/sec. Of course there are limitations (memory etc. constrain the pool size, otherwise you could use a pool of 200,000 Rails apps), but in practice this works very well.
Upvotes: 0
Reputation: 2065
http://www.kircher-schwanninger.de/michael/publications/lf.pdf Best I can do for ya.
Upvotes: 0