Reputation: 89
I have a 3 GB CSV file. How can I count duplicate URLs and their percentages?
Is it the sum that I should use to calculate the percentage?
import pandas as pd

df = pd.read_csv("/Users/url_all/auguesturl.csv", names=['URL','Count'])
dd = (df.groupby(df.columns.tolist(), as_index=False)
        .size()
        .sort_values(ascending=False)
        .apply(lambda r: r / r.sum(), axis=1))
# percentage
# ss = df.groupby('Count').size() / df.groupby('Count').size().sum()
print(dd)
My output is:
setup.iclo.com:443 8354271
da.ds-int.net:443 7278819
I would like the output to look like this:
URL Count Percentage
googleads.g.doubl.net:443 1,314,530 2.1
s.yimg.com:443 986,275 1.6
Upvotes: 1
Views: 2481
Reputation: 862761
I think you need to aggregate the sum of the Count column,
and for the new column divide by the total:
df = pd.DataFrame({
'URL': ['a','a','a','a','b','b','b','c','d'],
'Count': list(range(9))
})
print (df)
URL Count
0 a 0
1 a 1
2 a 2
3 a 3
4 b 4
5 b 5
6 b 6
7 c 7
8 d 8
df1 = df.groupby('URL', as_index=False)['Count'].sum()
df1['Percentage'] = df1['Count'] / df1['Count'].sum()
df1 = df1.sort_values('Count', ascending=False)
print (df1)
URL Count Percentage
1 b 15 0.416667
3 d 8 0.222222
2 c 7 0.194444
0 a 6 0.166667
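To get the display format shown in the question (thousands separators in Count, percentage rounded to one decimal), a minimal sketch on a hypothetical two-row frame standing in for the aggregated result (the URL values are taken from the desired output above; the percentages here are relative to just these two rows, not the full file):

```python
import pandas as pd

# Hypothetical stand-in for the aggregated df1 from the answer above
df1 = pd.DataFrame({'URL': ['googleads.g.doubl.net:443', 's.yimg.com:443'],
                    'Count': [1314530, 986275]})
df1['Percentage'] = df1['Count'] / df1['Count'].sum()

# Render Count with comma separators and Percentage as a rounded percent
out = df1.assign(
    Count=df1['Count'].map('{:,}'.format),
    Percentage=(df1['Percentage'] * 100).round(1),
)
print(out)
```

Note that this formatting turns Count into strings, so do it only for display, after all the arithmetic is done.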
But if you need to count the URL values themselves, use value_counts
or GroupBy.size:
df1 = df['URL'].value_counts().reset_index()
df1.columns = ['URL','Count']
df1['Percentage'] = df1['Count'] / df1['Count'].sum()
Alternative:
df1 = df.groupby('URL').size().reset_index(name='Count')
df1['Percentage'] = df1['Count'] / df1['Count'].sum()
df1 = df1.sort_values('Count', ascending=False)
print (df1)
URL Count Percentage
0 a 4 0.444444
1 b 3 0.333333
2 d 1 0.111111
3 c 1 0.111111
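As a shortcut for the division step, value_counts also accepts normalize=True, which returns the relative frequencies directly (the same sample URL column as above):

```python
import pandas as pd

df = pd.DataFrame({'URL': ['a','a','a','a','b','b','b','c','d']})

# normalize=True yields each value's share of the total,
# so no manual division by the sum is needed
pct = df['URL'].value_counts(normalize=True)
print(pct)
```

Multiply by 100 if you want percentages rather than fractions.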
Upvotes: 2