ChrisIkeokwu

Reputation: 121

How to calculate average concurrently in python?

I defined two correct ways of calculating averages in python.

def avg_regular(values):
    total = 0
    for value in values:
        total += value
    return total/len(values)

def avg_concurrent(values):
    mean = 0
    num_of_values = len(values)
    for value in values:
        #calculate a small portion of the average for each num and add to the total
        mean += value/num_of_values  
    return mean

The first function is the regular way of calculating averages, but I wrote the second one because each run of the loop doesn't depend on previous runs. So theoretically the average can be computed in parallel.

However, the "parallel" one (without running in parallel) takes about 30% more time than the regular one.

Are my assumptions correct, and is the approach worth the speed loss? If yes, how can I make the second function actually run in parallel?

If not, where did I go wrong?

Upvotes: 0

Views: 2452

Answers (1)

syntonym

Reputation: 7384

What you implemented is basically the difference between (a1 + a2 + ... + an) / n and a1/n + a2/n + ... + an/n. The result is the same, but the second version performs more operations (namely n-1 extra divisions), which slows the calculation down.

You claimed that in the second version each loop run is independent of the others. But look at what each iteration actually needs: in the first loop, one run needs the total so far and the current value. In the second version, one run needs the mean so far, the current value, and num_of_values. So the second version actually depends on more values, not fewer!
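To make the operation count concrete, here are both versions from the question side by side. The second one performs len(values) divisions instead of one; the results can also differ by a tiny floating-point rounding error, hence the tolerance check rather than strict equality:

```python
def avg_regular(values):
    total = 0
    for value in values:
        total += value
    return total / len(values)         # one division in total

def avg_concurrent(values):
    mean = 0
    num_of_values = len(values)
    for value in values:
        mean += value / num_of_values  # one division per element
    return mean

values = [1, 2, 3, 4, 5]
print(avg_regular(values))                                        # 3.0
print(abs(avg_regular(values) - avg_concurrent(values)) < 1e-12)  # True
```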

But how could we divide the work between cores (which is the goal of multiprocessing)? We could just give one core the first half of the values and the other core the second half, i.e. ((a1 + a2 + ... + a(n//2)) + (a(n//2 + 1) + ... + an)) / n. Yes, the work of dividing by n is not split between the cores, but it's a single instruction so we don't really care. We also need to add the left total and the right total, which we can't split, but again it's only a single operation.

So the code we want to run:

def my_sum(values):
    total = 0
    for value in values:
        total += value
    return total
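As a quick sanity check (with made-up sample data, since long_list isn't defined in the question), splitting the sum in half and recombining gives the same mean as summing everything at once:

```python
def my_sum(values):
    total = 0
    for value in values:
        total += value
    return total

long_list = list(range(1, 101))   # hypothetical sample data: 1..100
n = len(long_list)
half = n // 2

left = my_sum(long_list[:half])   # this half could run on one core
right = my_sum(long_list[half:])  # ...and this half on another

print((left + right) / n)         # 50.5, same as sum(long_list) / n
```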

There's still a problem with Python. Normally one would use threads for this, because each thread can run on its own core. But then your program has to guard against race conditions, and the Python interpreter itself also needs to protect its internal state. CPython decided that isn't worth the cost: its global interpreter lock (GIL) only lets one thread execute Python bytecode at a time. The standard workaround is to use multiple processes via the multiprocessing module.

from multiprocessing import Pool

if __name__ == '__main__':

    with Pool(2) as p:  # two workers, one per half
        results = p.map(my_sum, [long_list[:len(long_list)//2],
                                 long_list[len(long_list)//2:]])

    print(sum(results) / len(long_list))  # add subresults and divide by n

But of course multiple processes do not come for free: you need to fork, copy data to the workers, etc., so you will not get the 2x speedup one might expect. Also, the biggest slowdown is actually using Python itself; it's not really optimized for fast numerical computation. There are various ways around that, but using numpy is probably the simplest. Just use:

import numpy
print(numpy.mean(long_list))

This is probably much faster than the pure-Python version. I don't think numpy uses multiprocessing internally, so one could gain a further boost by combining multiple processes with a fast implementation (numpy or something else written in C), but normally numpy is fast enough.
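A quick check (again with made-up data) that numpy.mean agrees with the hand-rolled average:

```python
import numpy

long_list = list(range(1, 1001))            # hypothetical sample data
py_mean = sum(long_list) / len(long_list)
np_mean = numpy.mean(numpy.array(long_list))

print(py_mean)         # 500.5
print(float(np_mean))  # 500.5
```

Note that numpy.mean on a plain list has to convert it to an array first, so the real speedup shows up when the data already lives in a numpy array.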

Upvotes: 1
