ming.kernel

Reputation: 3665

Why is processing a sorted array not faster than an unsorted array in Python?

In the post Why is processing a sorted array faster than processing an unsorted array?, it says that branch prediction is the reason for the performance boost with sorted arrays.

But when I tried the example in Python, I saw no difference between sorted and random arrays (I tried both bytearray and array, and used line_profiler to profile the computation).

Am I missing something?

Here is my code:

from array import array
import random
array_size = 1024
loop_cnt = 1000
# I also tried 'array', and it's almost the same
a = bytearray()
for i in xrange(array_size):
    a.append(random.randint(0, 255))
#sorted                                                                         
a = sorted(a)
@profile
def computation():
    sum = 0
    for i in xrange(loop_cnt):
        for j in xrange(array_size):
            if a[j] >= 128:
                sum += a[j]

computation()
print 'done'

Upvotes: 14

Views: 2091

Answers (5)

Evenure

Reputation: 197

The reason why the performance improves drastically when the data is sorted is that the branch prediction penalty is removed, as explained beautifully in Mysticial's answer.

Upvotes: -3

user1591276

Reputation: 193

I ported the original code to Python and ran it with PyPy. I can confirm that sorted arrays are processed faster than unsorted arrays, and that the branchless method also eliminates the branch, with a running time similar to the sorted array's. I believe this is because PyPy is a JIT compiler, so branch prediction comes into play.

[edit]

Here's the code I used:

import random
import time

def runme(data):
  sum = 0
  start = time.time()

  for i in xrange(100000):
    for c in data:
      if c >= 128:
        sum += c

  end = time.time()
  print end - start
  print sum

def runme_branchless(data):
  sum = 0
  start = time.time()

  for i in xrange(100000):
    for c in data:
      t = (c - 128) >> 31
      sum += ~t & c

  end = time.time()
  print end - start
  print sum

data = list()

for i in xrange(32768):
  data.append(random.randint(0, 255))  # 0..255 inclusive, matching the original C++ rand() % 256

sorted_data = sorted(data)
runme(sorted_data)
runme(data)
runme_branchless(sorted_data)
runme_branchless(data)

Upvotes: 5

Matteo Italia

Reputation: 126827

I may be wrong, but I see a fundamental difference between the linked question and your example: Python interprets bytecode, while C++ compiles to native code.

In the C++ code, that if translates directly to a cmp/jl sequence, which the CPU branch predictor can treat as a single "prediction spot" specific to that loop.

In Python, that comparison involves several function calls, so there is (1) more overhead, and (2) the code that performs the comparison is presumably a routine inside the interpreter that is shared by every other integer comparison in the program, so it's a "prediction spot" not specific to the current block, which makes it much harder for the branch predictor to guess correctly.
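The dispatch overhead can be made concrete with the dis module (a small illustration of my own, in Python 3 syntax, not part of the original benchmark): even the two-line loop body from the question expands into a dozen or so bytecode instructions, each dispatched through the interpreter's shared machinery.

```python
import dis

# Disassemble the loop body from the question to show how many
# bytecode instructions a single `if a[j] >= 128:` expands into.
code = compile("if a[j] >= 128:\n    total += a[j]", "<demo>", "exec")
dis.dis(code)
```

Each of those instructions (the subscript, the COMPARE_OP, the conditional jump) runs through generic interpreter code shared by every comparison in the program; the exact opcode names vary between CPython versions, but the point stands.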


Edit: also, as outlined in this paper, there are far more indirect branches inside an interpreter, so any such optimization in your Python code would probably be buried by the branch mispredictions in the interpreter itself.

Upvotes: 19

user622367

Reputation:

sorted() returns a new sorted list rather than sorting in place. You're actually measuring the same array twice.
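The distinction is easy to check (Python 3 syntax):

```python
a = [3, 1, 2]
b = sorted(a)   # sorted() returns a new list; the original is untouched
assert a == [3, 1, 2] and b == [1, 2, 3]

a.sort()        # list.sort() sorts in place and returns None
assert a == [1, 2, 3]
```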

Upvotes: 4

Mark Ransom

Reputation: 308206

Two reasons:

  • Your array size is much too small to show the effect.
  • Python has far more per-iteration overhead than C, so the effect will be less noticeable overall.
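A minimal harness to test both points (Python 3 syntax; a sketch of my own, not part of the answer): scale the array and iteration count up and time the branchy loop on shuffled versus sorted data.

```python
import random
import timeit

def branchy_sum(data):
    # Same conditional sum as in the question.
    total = 0
    for c in data:
        if c >= 128:
            total += c
    return total

data = [random.randint(0, 255) for _ in range(32768)]
sorted_data = sorted(data)

# On CPython, expect the two timings to be nearly identical:
# interpreter dispatch overhead swamps any branch-prediction effect.
print(timeit.timeit(lambda: branchy_sum(data), number=10))
print(timeit.timeit(lambda: branchy_sum(sorted_data), number=10))
```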

Upvotes: 5
