Reputation: 513
I'd like to know if there's a way to find the location (row and column index) of the highest value in a dataframe. So if, for example, my dataframe looks like this:
     A   B   C   D   E
0  100   9   1  12   6
1   80  10  67  15  91
2   20  67   1  56  23
3   12  51   5  10  58
4   73  28  72  25   1
How do I get a result that looks like this: [0, 'A']
using Pandas?
Upvotes: 19
Views: 14694
Reputation: 1094
A simple, fast one-liner:
loc = [df.max(axis=1).idxmax(), df.max().idxmax()]
(For large data frames, .stack() can be quite slow.)
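On the sample frame from the question, a quick sketch of this one-liner:

```python
import pandas as pd

# Sample frame from the question
df = pd.DataFrame({'A': [100, 80, 20, 12, 73],
                   'B': [9, 10, 67, 51, 28],
                   'C': [1, 67, 1, 5, 72],
                   'D': [12, 15, 56, 10, 25],
                   'E': [6, 91, 23, 58, 1]})

# row label holding the global max, then the column label holding it
loc = [df.max(axis=1).idxmax(), df.max().idxmax()]
print(loc)
```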
Upvotes: 1
Reputation: 1
print('Max value:', df.stack().max())
print('Parameters :', df.stack().idxmax())
This is the best way imho.
Upvotes: 0
Reputation: 85432
NumPy's np.argmax can be helpful:
>>> df.stack().index[np.argmax(df.values)]
(0, 'A')
df.values is a two-dimensional NumPy array:
>>> df.values
array([[100,   9,   1,  12,   6],
       [ 80,  10,  67,  15,  91],
       [ 20,  67,   1,  56,  23],
       [ 12,  51,   5,  10,  58],
       [ 73,  28,  72,  25,   1]])
argmax gives you the index of the maximum value in the "flattened" array:
>>> np.argmax(df.values)
0
Now, you can use this index to find the row-column location on the stacked dataframe:
>>> df.stack().index[0]
(0, 'A')
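As a sanity check, the two steps can be combined on the sample frame (a sketch; it relies on stack() flattening in the same row-major order as NumPy):

```python
import numpy as np
import pandas as pd

# Sample frame from the question
df = pd.DataFrame({'A': [100, 80, 20, 12, 73],
                   'B': [9, 10, 67, 51, 28],
                   'C': [1, 67, 1, 5, 72],
                   'D': [12, 15, 56, 10, 25],
                   'E': [6, 91, 23, 58, 1]})

flat_pos = np.argmax(df.values)        # position in the row-major flattened array
row, col = df.stack().index[flat_pos]  # stack() flattens row by row, matching that order
print(row, col)
```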
If you need it fast, do as few steps as possible.
Working only on the NumPy array, using np.argmax to find the indices, seems best:
v = df.values
i, j = [x[0] for x in np.unravel_index([np.argmax(v)], v.shape)]
[df.index[i], df.columns[j]]
Result:
[0, 'A']
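If only a single location is needed, the list-wrapping can arguably be dropped: passed a scalar, np.unravel_index returns plain integer positions directly (a sketch on the sample frame):

```python
import numpy as np
import pandas as pd

# Sample frame from the question
df = pd.DataFrame({'A': [100, 80, 20, 12, 73],
                   'B': [9, 10, 67, 51, 28],
                   'C': [1, 67, 1, 5, 72],
                   'D': [12, 15, 56, 10, 25],
                   'E': [6, 91, 23, 58, 1]})

v = df.values
i, j = np.unravel_index(np.argmax(v), v.shape)  # scalar row/column positions
loc = [df.index[i], df.columns[j]]
print(loc)
```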
Timings for a large data frame:
df = pd.DataFrame(data=np.arange(int(1e6)).reshape(-1,5), columns=list('ABCDE'))
Sorted slowest to fastest:
%timeit df.mask(~(df==df.max().max())).stack().index.tolist()
33.4 ms ± 982 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit list(df.stack().idxmax())
17.1 ms ± 139 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit df.stack().index[np.argmax(df.values)]
14.8 ms ± 392 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%%timeit
i,j = np.where(df.values == df.values.max())
list((df.index[i].values.tolist()[0],df.columns[j].values.tolist()[0]))
4.45 ms ± 84.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%%timeit
v = df.values
i, j = [x[0] for x in np.unravel_index([np.argmax(v)], v.shape)]
[df.index[i], df.columns[j]]
499 µs ± 12 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
d = {'name': ['Mask', 'Stack-idxmax', 'Stack-argmax', 'Where', 'Argmax-unravel_index'],
     'time': [33.4, 17.1, 14.8, 4.45, 499],
     'unit': ['ms', 'ms', 'ms', 'ms', 'µs']}
timings = pd.DataFrame(d)
timings['seconds'] = timings.time * timings.unit.map({'ms': 1e-3, 'µs': 1e-6})
timings['factor slower'] = timings.seconds / timings.seconds.min()
timings.sort_values('factor slower')
Output:
                   name    time unit   seconds  factor slower
4  Argmax-unravel_index  499.00   µs  0.000499       1.000000
3                 Where    4.45   ms  0.004450       8.917836
2          Stack-argmax   14.80   ms  0.014800      29.659319
1          Stack-idxmax   17.10   ms  0.017100      34.268537
0                  Mask   33.40   ms  0.033400      66.933868
So the "Argmax-unravel_index" version is roughly one to two orders of magnitude faster for large data frames, i.e. exactly where speed often matters most.
Upvotes: 26
Reputation: 153460
In my opinion, stack() becomes inefficient for larger datasets; let's use np.where to return the index positions:
i,j = np.where(df.values == df.values.max())
list((df.index[i].values.tolist()[0],df.columns[j].values.tolist()[0]))
Output:
[0, 'A']
df = pd.DataFrame(data=np.arange(10000).reshape(-1,5), columns=list('ABCDE'))
%%timeit
i,j = np.where(df.values == df.values.max())
list((df.index[i].values.tolist()[0],df.columns[j].values.tolist()[0]))
1000 loops, best of 3: 364 µs per loop
%timeit df.mask(~(df==df.max().max())).stack().index.tolist()
100 loops, best of 3: 7.68 ms per loop
%timeit df.stack().index[np.argmax(df.values)]
10 loops, best of 3: 50.5 ms per loop
%timeit list(df.stack().idxmax())
1000 loops, best of 3: 1.58 ms per loop
Even larger dataframe:
df = pd.DataFrame(data=np.arange(100000).reshape(-1,5), columns=list('ABCDE'))
Respectively:
1000 loops, best of 3: 1.62 ms per loop
10 loops, best of 3: 18.2 ms per loop
100 loops, best of 3: 5.69 ms per loop
100 loops, best of 3: 6.64 ms per loop
Upvotes: 2
Reputation: 862451
Use stack to get a Series with a MultiIndex, then idxmax for the index of the max value:
print (df.stack().idxmax())
(0, 'A')
print (list(df.stack().idxmax()))
[0, 'A']
Detail:
print (df.stack())
0  A    100
   B      9
   C      1
   D     12
   E      6
1  A     80
   B     10
   C     67
   D     15
   E     91
2  A     20
   B     67
   C      1
   D     56
   E     23
3  A     12
   B     51
   C      5
   D     10
   E     58
4  A     73
   B     28
   C     72
   D     25
   E      1
dtype: int64
Upvotes: 13
Reputation: 323226
mask + max
df.mask(~(df==df.max().max())).stack().index.tolist()
Out[17]: [(0, 'A')]
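One property worth noting: unlike the idxmax-based answers, this mask approach returns every position tied for the maximum, not just the first. A small sketch with a deliberate tie:

```python
import pandas as pd

# Two cells deliberately tied for the maximum value
df = pd.DataFrame({'A': [100, 1], 'B': [2, 100]})

# Non-max cells become NaN and are dropped by stack(),
# leaving the (row, column) index of every tied maximum
locs = df.mask(~(df == df.max().max())).stack().index.tolist()
print(locs)
```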
Upvotes: 2
Reputation: 5660
This should work:
def max_df(df):
    m = None
    p = None
    # df.idxmax() gives, for each column, the row label of that column's max;
    # idx is the column position, item is that row label
    for idx, item in enumerate(df.idxmax()):
        c = df.columns[idx]
        val = df[c][item]
        if m is None or val > m:
            m = val
            p = item, c
    return p
This uses the idxmax function, then compares all of the values returned by it.
Example usage:
>>> df
     A  B
0  100  9
1   90  8
>>> max_df(df)
(0, 'A')
Here's a one-liner (for fun):
def max_df2(df):
    return max((df[df.columns[idx]][item], item, df.columns[idx]) for idx, item in enumerate(df.idxmax()))[1:]
Upvotes: 1