Reputation: 21
I am working with a large pandas DataFrame with about 100 million rows and 2 columns. I want to iterate over the dataframe and efficiently set a third column depending on the values of col1 and col2. This is what I am currently doing:
df['col3'] = 0
for idx, row in df.iterrows():
    val1 = row['col1']
    val2 = row['col2']
    df1 = df.loc[(df.col1 == val2) & (df.col2 == val1)]
    if len(df1) > 0:
        df.loc[(df.col1 == val2) & (df.col2 == val1), 'col3'] = 1
Example:
df = pd.DataFrame({'col1':[0,1,2,3,4,11], 'col2':[10,11,12,4,3,0]})
>>> df
   col1  col2
0     0    10
1     1    11
2     2    12
3     3     4
4     4     3
5    11     0
I want to add 'col3' such that rows 3 and 4 of the third column are 1. Think of it as a reverse_edge column, which is 1 when for each (val1, val2) in (col1, col2) there is a (val2, val1) in (col1, col2):
col1 col2 col3
0 0 10 0
1 1 11 0
2 2 12 0
3 3 4 1
4 4 3 1
5 11 0 0
What is the most efficient way to do this computation? It is currently taking me hours to traverse the entire dataframe.
EDIT: Think of each value in col1 and corresponding value in col2 as an edge in a graph (val1 -> val2). I want to know if a reverse edge exists or not (val2 -> val1).
Upvotes: 1
Views: 594
Reputation: 862761
Use:
df1 = pd.DataFrame(np.sort(df[['col1', 'col2']], axis=1), index=df.index)
df['col3'] = df1.duplicated(keep=False).astype(int)
print (df)
   col1  col2  col3
0     0    10     0
1     1    11     0
2     2    12     0
3     3     4     1
4     4     3     1
5    11     0     0
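To see why this works: sorting each row's pair makes an edge and its reverse identical, so duplicated(keep=False) marks both of them (note this would also flag exact duplicate edges, not only reversed ones). A minimal sketch of the intermediate:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [0, 1, 2, 3, 4, 11], 'col2': [10, 11, 12, 4, 3, 0]})

# Sort each (col1, col2) pair so an edge and its reverse collapse to the
# same ordered pair: (3, 4) and (4, 3) both become (3, 4).
df1 = pd.DataFrame(np.sort(df[['col1', 'col2']], axis=1), index=df.index)
print(df1)

# Rows holding the same sorted pair are duplicates; keep=False marks all of them.
df['col3'] = df1.duplicated(keep=False).astype(int)
print(df['col3'].tolist())  # [0, 0, 0, 1, 1, 0]
```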
Another solution with merge: compare the subsets as 2d arrays and use np.all to check whether all values per row are True:
df2 = df.merge(df, how='left', left_on='col2', right_on='col1')
df['col3'] = ((df2[['col1_x','col2_x']].values ==
               df2[['col2_y','col1_y']].values).all(axis=1).astype(int))

#pandas 0.24+
#https://stackoverflow.com/a/54508052
#df['col3'] = ((df2[['col1_x','col2_x']].to_numpy() ==
#               df2[['col2_y','col1_y']].to_numpy()).all(axis=1).astype(int))
print (df)
col1 col2 col3
0 0 10 0
1 1 11 0
2 2 12 0
3 3 4 1
4 4 3 1
5 11 0 0
print ((df2[['col1_x','col2_x']].values == df2[['col2_y','col1_y']].values))
[[False False]
[False True]
[False False]
[ True True]
[ True True]
[False True]]
Upvotes: 0
Reputation: 18201
Along the same lines as @Jondiedoop's answer, you can save a bit of suffix wrangling and stick to an inner join by merging on both columns at once:
df['col3'] = df.index.isin(df.merge(df, left_on=['col1', 'col2'], right_on=['col2', 'col1'], left_index=True).index).astype(int)
For example:
In [40]: df
Out[40]:
col1 col2
0 0 10
1 1 11
2 2 12
3 3 4
4 4 3
5 11 0
6 0 10
In [41]: df['col3'] = df.index.isin(df.merge(df, left_on=['col1', 'col2'], right_on=['col2', 'col1'], left_index=True).index).astype(int)
In [42]: df
Out[42]:
col1 col2 col3
0 0 10 0
1 1 11 0
2 2 12 0
3 3 4 1
4 4 3 1
5 11 0 0
6 0 10 0
An equivalent approach would be:
df['col3'] = 0
df.loc[df.merge(df, left_on=['col1', 'col2'], right_on=['col2', 'col1'], left_index=True).index, 'col3'] = 1
Upvotes: 1
Reputation: 3353
My solution would be to merge the frame to itself (merging column 2 to column 1) and then check whether the other two columns are identical; that would mean the reverse edge also exists:
df2 = df.merge(df, how='left', left_on='col2', right_on='col1')
df['rev_exists'] = (df2['col1_x'] == df2['col2_y']).astype(int)
df
# col1 col2 rev_exists
#0 0 10 0
#1 1 11 0
#2 2 12 0
#3 3 4 1
#4 4 3 1
#5 11 0 0
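As a cross-check, the same flags can be computed with a plain set-of-tuples lookup, a minimal sketch (variable names are mine; the column name matches the answer above):

```python
import pandas as pd

df = pd.DataFrame({'col1': [0, 1, 2, 3, 4, 11], 'col2': [10, 11, 12, 4, 3, 0]})

# Put every (col1, col2) edge in a hash set, then test each row's
# reversed pair for membership -- one pass over the rows, no merge needed.
edges = set(zip(df['col1'], df['col2']))
df['rev_exists'] = [int((b, a) in edges) for a, b in zip(df['col1'], df['col2'])]
print(df['rev_exists'].tolist())  # [0, 0, 0, 1, 1, 0]
```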
Upvotes: 1