Reputation: 51261
a = [(1,2),(3,1),(4,4),(5,5),(5,5),(5,5),(5,5),(5,5),(5,5),(5,5),(5,5),(5,5),(5,5),(5,5),(5,5),(5,5),(5,5),(5,5)]
# Quite a lot of tuples in the list, around six digits (~1,000,000)
# I want to split it into rows and columns.
rows = 5
cols = 5
The desired data structure is
[rows, cols, (data)]
where rows and cols are the row and column indices for each tuple in the big list.
I use a loop to do this, but it takes too long when processing a large number of tuples.
processed_data = []
index = 0
for h in range(0, rows - 1):
    for w in range(0, cols - 1):
        li = [h, w, a[index]]
        processed_data.append(li)
        index += 1
This operation takes too long. Is there a way to optimize it? Thanks very much!
Upvotes: 0
Views: 420
Reputation: 71014
It's not at all clear to me what you want, but here's a shot at the same loop in a more optimized manner:
import itertools as it
index = it.count(0)
processed_data = [[h, w, a[next(index)]]
                  for h in xrange(0, rows - 1)
                  for w in xrange(0, cols - 1)]
or, since you've already imported itertools,
index = it.count(0)
indices = it.product(xrange(0, rows-1), xrange(0, cols-1))
processed_data = [[h, w, a[next(index)]] for h, w in indices]
The reason these are faster is that they use list comprehensions instead of for loops. List comprehensions have their own opcode, LIST_APPEND, which routes directly to the append method of the list being constructed. In a normal for loop, the virtual machine has to go through the whole process of looking up the append method on the list object, which is fairly pricey.
Also, itertools is implemented in C, so if it's not faster for the same algorithm, then there's a bug in itertools.
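To see the difference, you can inspect the bytecode with the dis module. This is just an illustrative sketch (the exact opcodes and output vary between Python versions), but the comprehension version uses LIST_APPEND while the plain loop repeatedly looks up .append:
import dis

def with_loop(seq):
    out = []
    for x in seq:
        out.append(x)        # attribute lookup + method call on every iteration
    return out

def with_comprehension(seq):
    return [x for x in seq]  # compiled down to the LIST_APPEND opcode

dis.dis(with_loop)           # shows the repeated lookup of "append"
dis.dis(with_comprehension)  # shows LIST_APPEND instead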
Upvotes: 2
Reputation: 798686
Fine, if you really want the indices that badly...
[divmod(i, cols) + (x,) for i, x in itertools.izip(itertools.count(), a)]
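For example, on a tiny input (using enumerate here, which yields the same index/value pairs as izip(count(), a)):
a = [(1, 2), (3, 1), (4, 4), (5, 5)]
cols = 2
result = [divmod(i, cols) + (x,) for i, x in enumerate(a)]
# result == [(0, 0, (1, 2)), (0, 1, (3, 1)), (1, 0, (4, 4)), (1, 1, (5, 5))]
Note that this produces tuples (row, col, data) rather than the lists built in the question.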
Upvotes: 2
Reputation: 798686
Sounds like you want to split it into evenly-sized chunks.
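A minimal sketch of that idea, assuming the goal is simply consecutive groups of cols tuples per row (the names here are illustrative):
a = [(1, 2), (3, 1), (4, 4), (5, 5), (5, 5), (5, 5)]
cols = 2
# Slice the flat list into consecutive chunks of `cols` tuples each.
chunks = [a[i:i + cols] for i in range(0, len(a), cols)]
# chunks == [[(1, 2), (3, 1)], [(4, 4), (5, 5)], [(5, 5), (5, 5)]]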
Upvotes: 0