I have two pandas DataFrames, X_ol and y_ol, with shapes of 29000 x 29 and 29000 x 21 respectively, and I'm running a nested for loop through this data to generate more data (as you'll see below). What I'm trying to achieve with this for loop is something like this:
DataFrame X_ol DataFrame y_ol
id Date c1 c2 c3 c1 c2 c3
1 2000 0 1 1 0 1 1
2 2001 1 0 1 1 0 1
3 2002 1 1 0 1 1 0
4 2003 1 1 1 1 1 1
# (New DataFrame X) # (Second New DataFrame, y)
id Date c1 c2 c3 c1 c2 c3
1 2000 0 0 1 0 1 0
1 2000 0 1 0 0 0 1
2 2001 0 0 1 1 0 0
2 2001 1 0 0 0 0 1
3 2002 0 1 0 1 0 0
3 2002 1 0 0 0 1 0
4 2003 0 1 1 1 0 0
4 2003 1 0 1 0 1 0
4 2003 1 1 0 0 0 1
So it goes through the y_ol DataFrame row by row, and for each cell with a value of 1 it creates a new row in DataFrame X with that cell switched off, and a corresponding new row in DataFrame y with that cell switched on and all other values on that row switched off. I wrote this code that does it correctly, but it takes a long time: 12+ minutes to produce the two DataFrames of 60,000 rows. Are there built-in pandas functions/methods that would make this more efficient, or another approach entirely that removes the for loop?
import pandas as pd
from copy import deepcopy

for i in range(len(y_ol)):
    # indices of the columns that are 1 in this row of y_ol
    ab = y_ol.iloc[i].where(y_ol.iloc[i] == 1)
    abInd = ab[ab == 1.0].index
    for j in abInd:
        y_tmp = deepcopy(y_ol.iloc[i:i+1, :])
        # temporarily switch the cell off, copy the row into X, then restore it
        y_ol[j][i] = 0
        conc = pd.concat([X_ol.iloc[i:i+1, :], y_ol.iloc[i:i+1, :]], axis=1)
        X = X.append(conc)
        y_tmp.iloc[:, :] = 0
        y_tmp[j] = 1
        y = y.append(y_tmp)
        y_ol[j][i] = 1
Thanks in advance
Upvotes: 1
I would process the dataframes column by column, selecting the rows where each column of y_ol contains 1, and concat the temporary dataframes obtained for each column.
Assuming
x_ol = pd.DataFrame({'id': [1, 2, 3, 4], 'Date': [2000, 2001, 2002, 2003],
                     'c1': [0, 1, 1, 1], 'c2': [1, 0, 1, 1], 'c3': [1, 1, 0, 1]})
y_ol = pd.DataFrame({'c1': [0, 1, 1, 1], 'c2': [1, 0, 1, 1], 'c3': [1, 1, 0, 1]})
I would build the new dataframes that way:
cols = ['c1', 'c2', 'c3']
x_new = pd.concat((x_ol[y_ol[c] == 1].assign(**{c: 0}) for c in cols)).sort_values('id')
y_new = pd.concat((y_ol[y_ol[c] == 1].assign(**{x: 1 if x == c else 0 for x in cols})
for c in cols)).sort_index()
It gives as expected
print(x_new)
id Date c1 c2 c3
0 1 2000 0 0 1
0 1 2000 0 1 0
1 2 2001 0 0 1
1 2 2001 1 0 0
2 3 2002 0 1 0
2 3 2002 1 0 0
3 4 2003 0 1 1
3 4 2003 1 0 1
3 4 2003 1 1 0
and
print(y_new)
c1 c2 c3
0 0 1 0
0 0 0 1
1 1 0 0
1 0 0 1
2 1 0 0
2 0 1 0
3 1 0 0
3 0 1 0
3 0 0 1
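For reference, here is the approach above as one self-contained script (using the small sample frames from the answer; on the real 29000-row data the column list would come from `y_ol.columns` instead of being hard-coded):

```python
import pandas as pd

x_ol = pd.DataFrame({'id': [1, 2, 3, 4], 'Date': [2000, 2001, 2002, 2003],
                     'c1': [0, 1, 1, 1], 'c2': [1, 0, 1, 1], 'c3': [1, 1, 0, 1]})
y_ol = pd.DataFrame({'c1': [0, 1, 1, 1], 'c2': [1, 0, 1, 1], 'c3': [1, 1, 0, 1]})

cols = ['c1', 'c2', 'c3']

# For each column c: keep only the rows where y_ol[c] == 1,
# switch that cell off in the x copy ...
x_new = pd.concat((x_ol[y_ol[c] == 1].assign(**{c: 0}) for c in cols)).sort_values('id')

# ... and build the matching one-hot row (only c on) for the y copy.
y_new = pd.concat((y_ol[y_ol[c] == 1].assign(**{x: 1 if x == c else 0 for x in cols})
                   for c in cols)).sort_index()
```

Since `sort_values`/`sort_index` are stable, rows coming from c1, c2, c3 stay in column order within each original row, which reproduces the expected output above.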
Upvotes: 1
To create the new y_ol, you can use stack
after replacing the 0s with NaN using where, so that stack drops them. Then reset_index the level 1 of the index, which holds the name of the column in y_ol that originally contained the 1.
df_ = y_ol.where(y_ol.eq(1)).stack().reset_index(level=1)
print (df_)
level_1 0
0 c2 1.0
0 c3 1.0
1 c1 1.0
1 c3 1.0
2 c1 1.0
2 c2 1.0
3 c1 1.0
3 c2 1.0
3 c3 1.0
Use this column named level_1 and numpy broadcasting to compare it to the column names of y_ol to get True/False. Change the type to int
and build the new y_ol dataframe as wanted.
y_ol_new = pd.DataFrame((df_['level_1'].to_numpy()[:, None]
== y_ol.columns.to_numpy()).astype(int),
columns=y_ol.columns)
print (y_ol_new)
c1 c2 c3
0 0 1 0
1 0 0 1
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
6 1 0 0
7 0 1 0
8 0 0 1
Now for X_ol, you can reindex
it with the index of df_ to duplicate the rows. Then you just need to subtract y_ol_new to switch off the selected cells.
X_ol_new = X_ol.reindex(df_.index).reset_index(drop=True)
X_ol_new[y_ol_new.columns] -= y_ol_new
print (X_ol_new)
id Date c1 c2 c3
0 1 2000 0 0 1
1 1 2000 0 1 0
2 2 2001 0 0 1
3 2 2001 1 0 0
4 3 2002 0 1 0
5 3 2002 1 0 0
6 4 2003 0 1 1
7 4 2003 1 0 1
8 4 2003 1 1 0
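Putting the pieces of this answer together into one runnable script (an explicit `dropna()` is added after `stack()`, since newer pandas versions no longer drop NaN there by default; with the original default behavior it is a no-op):

```python
import pandas as pd

X_ol = pd.DataFrame({'id': [1, 2, 3, 4], 'Date': [2000, 2001, 2002, 2003],
                     'c1': [0, 1, 1, 1], 'c2': [1, 0, 1, 1], 'c3': [1, 1, 0, 1]})
y_ol = pd.DataFrame({'c1': [0, 1, 1, 1], 'c2': [1, 0, 1, 1], 'c3': [1, 1, 0, 1]})

# One row per 1 in y_ol; the level_1 column holds the originating column name.
df_ = y_ol.where(y_ol.eq(1)).stack().dropna().reset_index(level=1)

# One-hot encode level_1 against the column names via broadcasting.
y_ol_new = pd.DataFrame((df_['level_1'].to_numpy()[:, None]
                         == y_ol.columns.to_numpy()).astype(int),
                        columns=y_ol.columns)

# Duplicate each X_ol row once per 1, then switch off the selected cell.
X_ol_new = X_ol.reindex(df_.index).reset_index(drop=True)
X_ol_new[y_ol_new.columns] -= y_ol_new
```

Both steps are vectorized, so this avoids the row-by-row Python loop entirely.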
Upvotes: 1