Reputation: 255
On Linux it takes 1.09171080828 secs.
On Windows it takes 2.14042000294 secs.
Code for the benchmark:
import time

def mk_array(num):
    # Build a list of num - 1 integers.
    return [x for x in xrange(1, num)]

def run():
    arr = mk_array(10000000)
    x = 0
    start = time.time()
    # Time only the summation, not the list construction.
    x = reduce(lambda x, y: x + y, arr)
    done = time.time()
    elapsed = done - start
    return elapsed

if __name__ == '__main__':
    # Average over 100 runs.
    times = [run() for x in xrange(0, 100)]
    avg = sum(times) / len(times)
    print (avg)
I am aware that the GIL more or less makes Python scripts single-threaded.
The Windows box is my Hyper-V host, but it should be beefy enough to run a single-threaded script at full bore: 12 cores of 2.93 GHz Intel X5670s, 72 GB of RAM, etc.
The Ubuntu VM has 4 cores and 8 GB of RAM.
Both are running Python 2.7.8 64-bit.
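(For reference, a minimal sketch for confirming the version and build bitness on each box, using only the standard library; struct.calcsize('P') gives the pointer size in bytes, so 8 means a 64-bit build.)

import sys, struct

print (sys.version)
print (struct.calcsize('P') * 8)  # 64 on a 64-bit build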
Why is Windows half as fast?
Edit: I've lopped off two zeros (100,000 elements instead of 10,000,000); Linux finishes in 0.010593495369 seconds and Windows in 0.171899962425 seconds. Thanks everyone, curiosity satisfied.
Upvotes: 4
Views: 106
Reputation: 180411
It is because of the size of a C long on Windows: on 64-bit Windows a long is 32 bits, while on 64-bit Unix it is 64 bits, so you hit arbitrary-precision arithmetic sooner, and arbitrary-precision values are costlier to allocate.
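As a minimal illustration (assuming a 64-bit CPython 2 build on both machines, as in the question), you can see where the int/long boundary sits:

import sys

print (sys.maxint)            # 2**31 - 1 on 64-bit Windows (32-bit C long), 2**63 - 1 on Linux
print (type(sys.maxint))      # <type 'int'>  -- fits in a C long
print (type(sys.maxint + 1))  # <type 'long'> -- arbitrary precision, heap-allocated

The running total in the reduce reaches roughly 5 * 10**13, which is above 2**31 but below 2**63, so on Windows most of the loop runs on Python longs while on Linux it stays on plain ints.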
Related question: Why are python's for loops so non-linear for large inputs
If you benchmark the xrange step alone, you will see a considerable difference.
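A rough timing sketch for that step alone (just the list build from the question's mk_array, timed with the standard timeit module):

import timeit

print (timeit.timeit('[x for x in xrange(1, 10000000)]', number=10))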
The reasoning behind Windows' use of the LLP64 model seems to be compatibility with 32-bit code:
Another alternative is the LLP64 model, which maintains compatibility with 32-bit code by leaving both int and long as 32-bit. "LL" refers to the "long long integer" type, which is at least 64 bits on all platforms, including 32-bit environments.
Taken from wiki/64-bit_computing
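If you want to check the underlying C type widths from Python itself, a quick sketch with the standard ctypes module:

import ctypes

print (ctypes.sizeof(ctypes.c_long))      # 4 on 64-bit Windows (LLP64), 8 on 64-bit Linux (LP64)
print (ctypes.sizeof(ctypes.c_longlong))  # 8 on both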
Upvotes: 3