Iris

Reputation: 23

Speed up loop over each row for big dataset in Python

I want to process a big dataset by assigning values to a new column according to other column values (two or three more columns). I have the Python code below.

My dataset contains 17 million records. The script takes more than 40 hours to run. I am new to Python and have only a little experience with big data.

Could someone help me speed up the script runtime?

Here is a sample of the dataset:

 PId    hZ  tId tPurp   ps  oZ  dZ  oT  dT
0   1   50  1040    32  762 748 10.5    12.5
0   1   50  1040    16  748 81  12.5    12.5
0   1   50  1040    2048    81  1   12.5    12.5
0   1   50  1040    1040    1   762 9.5 9.5
1   1   10  320 320 1   35  17.5    17.5
1   1   10  320 2048    35  1   19.5    19.5
2   1   50  1152    1152    297 102 11.5    12
2   1   50  1152    2048    102 1   12  12
2   1   50  1152    32  1   297 11.5    11.5
3   1   1   2   64  737 184 14  18
3   1   1   2   128 184 713 14  14
3   1   1   2   2048    184 1   18  18
3   1   1   2   2   1   737 9   9
4   1   1   2   2   1   856 9   9
4   1   1   2   2048    296 1   18  18
4   1   1   2   16  856 296 17  18
8   1   50  1056    16  97  7   15  15.5
8   1   50  1056    32  7   816 15.5    1
8   1   50  1056    2048    816 1   1   1
8   1   50  1056    1056    1   97  12  12

and below is the Python code:

import pandas as pd
import numpy as np

df_test = pd.read_csv("C:/users/test.csv")
df_test.sort_values(by=['PId', 'tId', 'oT', 'dT'], inplace=True)

# second-to-last row of each (PId, tId) group
ls2t = df_test.groupby(['PId', 'tId']).nth(-2)
ls2t.reset_index(level=(0, 1), inplace=True)

# keep only the group keys and ps, renamed to ls2ps
ls2tps = ls2t[['PId', 'tId', 'ps']]
ls2tps = ls2tps.rename(columns={'ps': 'ls2ps'})

df_lst = pd.merge(df_test,
                  ls2tps,
                  on=['PId', 'tId'],
                  how='left')

for index, row in df_lst.iterrows():
    if df_lst.loc[index, 'oZ'] == df_lst.loc[index, 'hZ'] and df_lst.loc[index, 'ps'] == 2:
        df_lst.loc[index, 'd'] = 'A'
    elif df_lst.loc[index, 'oZ'] == df_lst.loc[index, 'hZ'] and df_lst.loc[index, 'ps'] != 2:
        df_lst.loc[index, 'd'] = 'B'
    elif df_lst.loc[index, 'ps'] == 2048 and (df_lst.loc[index, 'ls2ps'] == 2 or df_lst.loc[index, 'ls2ps'] == 514):
        df_lst.loc[index, 'd'] = 'A'
    elif df_lst.loc[index, 'ps'] == 2048 and (df_lst.loc[index, 'ls2ps'] != 2 and df_lst.loc[index, 'ls2ps'] != 514):
        df_lst.loc[index, 'd'] = 'B'
    else:
        df_lst.loc[index, 'd'] = 'C'

od_aggpurp = df_lst.groupby(['oZ','dZ','d']).size().reset_index(name='counts')

od_aggpurp.to_csv('C:/users/test_result.csv')

Upvotes: 0

Views: 747

Answers (1)

giuliov

Reputation: 93

Instead of that loop, you should try this:

df_lst.loc[(df_lst['oZ'] == df_lst['hZ']) & (df_lst['ps'] == 2), 'd'] = 'A'  
df_lst.loc[(df_lst['oZ'] == df_lst['hZ']) & (df_lst['ps'] != 2), 'd'] = 'B'
df_lst.loc[(df_lst['ps'] == 2048) & ((df_lst['ls2ps'] == 2) | (df_lst['ls2ps'] == 514)), 'd'] = 'A'
df_lst.loc[(df_lst['ps'] == 2048) & ((df_lst['ls2ps'] != 2) & (df_lst['ls2ps'] != 514)), 'd'] = 'B'
df_lst.loc[(df_lst['d'] != 'A') & (df_lst['d'] != 'B'), 'd'] = 'C'

Here you select from df_lst (using .loc) only the rows that match each condition, and modify the d column for just those rows.

Note that in pandas, the element-wise boolean operators on Series are & (and), | (or) and ~ (not).
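As a small demonstration of these operators on a made-up Series (not the asker's data):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])
# & and ~ work element-wise on boolean Series; the parentheses are needed
# because & binds more tightly than the comparison operators.
mask = (s > 1) & ~(s == 3)
print(mask.tolist())  # [False, True, False, True]
```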

If you prefer, this should perform even better, since each mask is computed only once:

oZ_hZ = df_lst['oZ'] == df_lst['hZ']
ps_2 = df_lst['ps'] == 2

df_lst.loc[(oZ_hZ) & (ps_2), 'd'] = 'A'  
df_lst.loc[(oZ_hZ) & (~ps_2), 'd'] = 'B'

ps_2048 = df_lst['ps'] == 2048
ls2ps_2 = df_lst['ls2ps'] == 2
ls2ps_514 = df_lst['ls2ps'] == 514

df_lst.loc[(ps_2048) & ((ls2ps_2) | (ls2ps_514)), 'd'] = 'A'
df_lst.loc[(ps_2048) & ((~ls2ps_2) & (~ls2ps_514)), 'd'] = 'B'

df_lst.loc[(df_lst['d'] != 'A') & (df_lst['d'] != 'B'), 'd'] = 'C'
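A further alternative, not part of the original answer: the whole if/elif chain can be written as a single np.select call. Unlike a sequence of overwriting .loc assignments, np.select applies the first matching condition, which exactly mirrors the loop's elif order. The frame below is a toy example with invented values, not the asker's data:

```python
import numpy as np
import pandas as pd

# Toy frame with the same columns; the values are invented for illustration.
df_lst = pd.DataFrame({
    'hZ':    [1, 1, 1, 1],
    'oZ':    [1, 5, 7, 9],
    'ps':    [2, 2048, 2048, 16],
    'ls2ps': [32, 2, 16, 32],
})

# Conditions are checked top to bottom, mirroring the if/elif order of the loop.
conditions = [
    (df_lst['oZ'] == df_lst['hZ']) & (df_lst['ps'] == 2),
    (df_lst['oZ'] == df_lst['hZ']) & (df_lst['ps'] != 2),
    (df_lst['ps'] == 2048) & (df_lst['ls2ps'].isin([2, 514])),
    (df_lst['ps'] == 2048) & (~df_lst['ls2ps'].isin([2, 514])),
]
choices = ['A', 'B', 'A', 'B']

# Rows matching no condition fall through to the default, like the final else.
df_lst['d'] = np.select(conditions, choices, default='C')
print(df_lst['d'].tolist())  # ['A', 'A', 'B', 'C']
```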

Upvotes: 1
