Reputation: 78234
I have a list that looks like this:
list=[
('2013-01-04', u'crid2557171372', 1),
('2013-01-04', u'crid9904536154', 719677),
('2013-01-04', u'crid7990924609', 577352),
('2013-01-04', u'crid7990924609', 399058),
('2013-01-04', u'crid9904536154', 385260),
('2013-01-04', u'crid2557171372', 78873)
]
The issue is that the second column has duplicate ids with different counts. I need a list that rolls up the counts so it looks like this. Is there a group-by clause in Python?
list=[
('2013-01-04', u'crid9904536154', 1104937),
('2013-01-04', u'crid7990924609', 976410),
('2013-01-04', u'crid2557171372', 78874)
]
Upvotes: 1
Views: 144
Reputation: 80346
A minimalist way to do it:
from pandas import *
a = [('2013-01-04', u'crid2557171372', 1),
('2013-01-04', u'crid9904536154', 719677),
('2013-01-04', u'crid7990924609', 577352),
('2013-01-04', u'crid7990924609', 399058),
('2013-01-04', u'crid9904536154', 385260),
('2013-01-04', u'crid2557171372', 78873)]
DataFrame(a).groupby([0,1]).sum().reset_index()
out:
            0               1        2
0  2013-01-04  crid2557171372    78874
1  2013-01-04  crid7990924609   976410
2  2013-01-04  crid9904536154  1104937
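If you want the result back as a plain list of tuples rather than a DataFrame, something along these lines should work (a sketch, assuming a pandas version whose itertuples accepts name=None):
# group on columns 0 (date) and 1 (crid), sum column 2, then flatten back to tuples
df = DataFrame(a).groupby([0, 1]).sum().reset_index()
rolled_up = list(df.itertuples(index=False, name=None))
# rolled_up is a list of (date, crid, total) tuples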
Upvotes: 0
Reputation: 174614
The "long" way to it:
>>> from collections import defaultdict
>>> d = defaultdict(int)
>>> r = defaultdict(list)
>>> for i in l:
...     d[i[1]] += i[2]
...     r[i[0]].append(d)
...
>>> results = []
>>> for i, v in r.items():
...     for k in v[0]:
...         results.append((i, k, v[0][k]))
...
>>> results
[('2013-01-04', u'crid9904536154', 1104937),
('2013-01-04', u'crid2557171372', 78874),
('2013-01-04', u'crid7990924609', 976410)]
Upvotes: 0
Reputation: 104682
I don't think there's any built-in tool that will do exactly what you want out of the box. However, it's pretty easy to roll your own using a defaultdict from the collections module:
from collections import defaultdict
counts = defaultdict(int)
for date, crid, count in lst:
    counts[(date, crid)] += count
new_lst = [(date, crid, count) for (date, crid), count in counts.items()]
This requires only linear running time, so if your data set is large, it may be better than a groupby implementation, which requires an O(n log n) sort first.
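For reference, a quick sketch of what this produces on the data from the question (lst is just the question's list under a name that doesn't shadow the built-in):
# the question's data, bound to the name the snippet above expects
lst = [('2013-01-04', u'crid2557171372', 1),
       ('2013-01-04', u'crid9904536154', 719677),
       ('2013-01-04', u'crid7990924609', 577352),
       ('2013-01-04', u'crid7990924609', 399058),
       ('2013-01-04', u'crid9904536154', 385260),
       ('2013-01-04', u'crid2557171372', 78873)]
# after running the snippet above, new_lst holds the rolled-up totals, e.g.
# [('2013-01-04', u'crid2557171372', 78874),
#  ('2013-01-04', u'crid9904536154', 1104937),
#  ('2013-01-04', u'crid7990924609', 976410)]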
Upvotes: 1
Reputation: 212825
Let's name your list a and not list (list is a very useful built-in in Python and we don't want to shadow it):
import itertools as it
a = [('2013-01-04', u'crid2557171372', 1),
('2013-01-04', u'crid9904536154', 719677),
('2013-01-04', u'crid7990924609', 577352),
('2013-01-04', u'crid7990924609', 399058),
('2013-01-04', u'crid9904536154', 385260),
('2013-01-04', u'crid2557171372', 78873)]
b = []
for k, v in it.groupby(sorted(a, key=lambda x: x[:2]), key=lambda x: x[:2]):
    b.append(k + (sum(x[2] for x in v),))
b is now:
[('2013-01-04', u'crid2557171372', 78874),
('2013-01-04', u'crid7990924609', 976410),
('2013-01-04', u'crid9904536154', 1104937)]
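Note that itertools.groupby only groups consecutive items with equal keys, which is why the input goes through sorted() first. If you prefer, the same loop can be written as a list comprehension (purely a stylistic variant of the code above):
# same grouping and summing as the loop above, as a comprehension
b = [key + (sum(row[2] for row in group),)
     for key, group in it.groupby(sorted(a, key=lambda x: x[:2]),
                                  key=lambda x: x[:2])]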
Upvotes: 6