Reputation: 1041
I am studying this code from GitHub about distributed processing. I would like to thank eliben for this nice post. I have read his explanations, but there are still some dark spots. As far as I understand, the code distributes tasks across multiple machines/clients. My questions are:
1. The most basic of my questions: where does the distribution of the work to different machines actually happen?
2. Why is there an if/else statement in the main function?
3. Let me start this question in a more general way. I thought that we usually start a Process on a specific chunk (an independent part of memory), rather than passing all the chunks at once, like this:
chunksize = int(math.ceil(len(HugeList) / float(nprocs)))
for i in range(nprocs):
    p = Process(
            target=myWorker,  # This is my worker
            args=(HugeList[chunksize * i:chunksize * (i + 1)],
                  HUGEQ))
    processes.append(p)
    p.start()
In this simple case we have nprocs processes, and each process runs an instance of the function myWorker, which works on its specified chunk. My question here is how the GitHub code compares to this approach.
Looking now into the GitHub code, I am trying to understand mp_factorizer. More specifically, in this function we do not have chunks but a huge queue (shared_job_q). This queue consists of sub-lists of size 43 at most. The queue is passed into factorizer_worker; there, via get, we obtain those sub-lists and pass them to the serial worker. I understand that we need this queue to share data between clients.
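In my head, each worker roughly does something like this (my own simplified sketch of what I think factorizer_worker does, not the actual code; serial_worker and shared_result_q are just my names for the serial factorizing step and the result queue):

def my_mental_model_worker(shared_job_q, shared_result_q):
    # Keep pulling sub-lists (each at most 43 numbers) until the queue is empty.
    while not shared_job_q.empty():
        sublist = shared_job_q.get()                  # one sub-list of numbers
        shared_result_q.put(serial_worker(sublist))   # serial_worker = the serial factorizing step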
My questions here are:

- Do we call an instance of the factorizer_worker function for each of the nprocs (=8) processes?
- Which part of the data does each process work on? (Generally, we have 8 processes and 43 chunks.)
- How many threads exist for each process?
- Is the get function called from each process thread?

Thanks for your time.
Upvotes: 0
Views: 432
Reputation: 94951
The distribution to multiple machines only happens if you actually run the script on multiple machines. The first time you run the script (without the --client option), it starts the Manager server on a specific IP/port, which hosts the shared job/result queues. In addition to starting the Manager server, runserver will also act as a worker, by calling mp_factorizer. It is additionally responsible for collecting the results from the result queue and processing them. You could run this script by itself and get a complete result.
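In case it helps, the server side of that pattern looks roughly like the following Python 3 sketch (not a verbatim copy of the linked code; the function name make_server_manager and the queue typeids are my own choices):

from multiprocessing.managers import SyncManager
import queue

def make_server_manager(port, authkey):
    """Start a Manager server that exposes a shared job queue and result queue."""
    job_q = queue.Queue()
    result_q = queue.Queue()

    class JobQueueManager(SyncManager):
        pass

    # Register callables returning the two queues, so remote clients can get proxies to them.
    JobQueueManager.register('get_job_q', callable=lambda: job_q)
    JobQueueManager.register('get_result_q', callable=lambda: result_q)

    manager = JobQueueManager(address=('', port), authkey=authkey)  # authkey is a bytes object
    manager.start()
    return manager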
However, you can also distribute the factorization work to other machines, by running the script on other machines using the --client option. That will call runclient, which will connect to the existing Manager server you started with the initial run of the script. That means that the clients are accessing the same shared queues runserver is using, so they can all pull work from and put results to the same queues.
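The client side is the mirror image: register the same names without callables, then connect to the server's address instead of starting a new server (again a sketch with my own function name):

from multiprocessing.managers import SyncManager

def make_client_manager(ip, port, authkey):
    """Connect to a running Manager server and return proxies to its shared queues."""
    class ServerQueueManager(SyncManager):
        pass

    # Same typeids the server registered; no callables are needed on the client side.
    ServerQueueManager.register('get_job_q')
    ServerQueueManager.register('get_result_q')

    manager = ServerQueueManager(address=(ip, port), authkey=authkey)
    manager.connect()
    return manager

# Afterwards, manager.get_job_q() and manager.get_result_q() return proxies to the
# same queues the server created, which is what lets every machine share the work.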
The above should cover questions 1 and 2.
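For question 2 specifically, the if/else in main is just that dispatch on the --client flag, roughly like this (the argument handling here is my paraphrase, not the script's actual parsing code):

import sys

if __name__ == '__main__':
    if '--client' in sys.argv:
        runclient()   # connect to an existing Manager server and contribute workers
    else:
        runserver()   # start the Manager server, the local workers, and result collection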
I'm not exactly sure what you're asking in question 3. I think you're wondering why we don't pass a chunk of the list to each worker explicitly (like in the example you included), rather than putting all the chunks into a queue. The answer is that the runserver method doesn't know how many workers there are actually going to be. It knows that it's going to start 8 workers locally. However, it doesn't want to split the HugeList into eight chunks and send them to those 8 processes, because it wants to support remote clients connecting to the Manager and doing work, too. So instead, it picks an arbitrary size for each chunk (43), divides the list into as many chunks of that size as it takes to consume the entire HugeList, and puts them all in a Queue. Here's the code in runserver that does that:
chunksize = 43
for i in range(0, len(nums), chunksize):
    #print 'putting chunk %s:%s in job Q' % (i, i + chunksize)
    shared_job_q.put(nums[i:i + chunksize])  # Adds a chunk of up to 43 items to the shared queue.
That way, as many workers as you want can connect to the Manager server, grab a chunk from shared_job_q, process it, and return a result.
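Concretely, each worker process just loops on the shared job queue until it is empty, something like this (a sketch of the factorizer_worker pattern rather than the exact code; factorize stands in for whatever serial factorization function is used):

import queue

def factorizer_worker(job_q, result_q):
    """Pull chunks off the shared job queue until no work is left."""
    while True:
        try:
            chunk = job_q.get_nowait()    # one sub-list of up to 43 numbers
        except queue.Empty:
            return                        # queue drained: this worker exits
        result_q.put({n: factorize(n) for n in chunk})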
Do we call an instance of the factorizer_worker function for each of the nprocs (=8) processes?
Yes
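In outline, mp_factorizer starts nprocs processes, all pointed at the same pair of shared queues, and waits for them to finish; the pattern is roughly this (a sketch, not the exact code):

import multiprocessing

def mp_factorizer(shared_job_q, shared_result_q, nprocs):
    # One Process per worker; each runs factorizer_worker against the shared queues.
    procs = []
    for _ in range(nprocs):
        p = multiprocessing.Process(
            target=factorizer_worker,
            args=(shared_job_q, shared_result_q))
        procs.append(p)
        p.start()
    for p in procs:
        p.join()   # wait for every worker to finish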
Which part of the data does each process work on? (Generally, we have 8 processes and 43 chunks.)
We don't have 43 chunks; we have some number of chunks, each of size (at most) 43. Each worker process just grabs chunks off the queue and processes them. Which chunks a given worker gets is arbitrary and depends on how many workers there are and how fast each one runs.
How many threads exist for each process?
One. If you mean how many worker processes exist for each instance of the script, there are 8 in the server process and 4 in each client process.
Is the get function called from each process thread?
Not sure what you mean by this.
Upvotes: 2