Jonas Palačionis

Reputation: 4842

Get the variables that produce the highest / lowest pearson correlation while looping the variables

I am trying to achieve the following:

I have a dataframe which has many columns that contain metrics and a few dimensions such as country, device, name. Each of these 3 dimensions has a few unique values, which I use to filter the data before calling pd.corr().

To demonstrate, I'll use the titanic dataset.

import seaborn as sns
df_test = sns.load_dataset('titanic')

for who in df_test['who'].unique():
    for where in df_test['embark_town'].unique():
        print(df_test[(df_test['who']==who)&(df_test['embark_town']==where)].corr())

Which produces df_test['who'].nunique() * df_test['embark_town'].nunique() = 9 different pd.corr() correlation matrices.

An example below:

         survived    pclass       age     sibsp     parch      fare  
survived    1.000000 -0.198092  0.062199 -0.046691 -0.071417  0.108706   
pclass     -0.198092  1.000000 -0.438377  0.008843 -0.015523 -0.485546   
age         0.062199 -0.438377  1.000000 -0.049317  0.077529  0.199062   
sibsp      -0.046691  0.008843 -0.049317  1.000000  0.464033  0.358680   
parch      -0.071417 -0.015523  0.077529  0.464033  1.000000  0.415207   
fare        0.108706 -0.485546  0.199062  0.358680  0.415207  1.000000   
adult_male       NaN       NaN       NaN       NaN       NaN       NaN   
alone       0.030464  0.133638 -0.022396 -0.629845 -0.506964 -0.411392 

I am trying to get data which would answer this question:

For which setup do I have the highest / lowest correlation between each pair of variables? The output could be a list, dict, or df like so:

output = {'highest_corr_survived_p_class':['who = man', 'embark_town = Southampton', 0.65],
         'lowest_corr_survived_p_class':['who = man', 'embark_town = Cherbourg',-0.32],
         'highest_corr_survived_age':['who = female', 'embark_town = Cherbourg',0.75],
         'lowest_corr_survived_age':['who = man', 'embark_town = Cherbourg',-0.3]
         ...
         'lowest_corr_alone_fare':['who = man', 'embark_town = Cherbourg',-0.7]}

Where I am stuck is finding a good way to create this data and place it in a df.

What I've tried:

output = {}

for who in df_test['who'].dropna().unique():
    for where in df_test['embark_town'].dropna().unique():
        output[f'{who}_{where}_corr'] =  df_test[(df_test['who']==who)&(df_test['embark_town']==where)].corr().loc['survived','pclass']

Which produces this output:

{'man_Southampton_corr': -0.19809207465001574,
 'man_Cherbourg_corr': -0.2102998217386208,
 'man_Queenstown_corr': 0.06717166132798494,
 'woman_Southampton_corr': -0.5525868192717193,
 'woman_Cherbourg_corr': -0.5549942419871897,
 'woman_Queenstown_corr': -0.16896381511084563,
 'child_Southampton_corr': -0.5086941796202842,
 'child_Cherbourg_corr': -0.2390457218668788,
 'child_Queenstown_corr': nan}

And this approach does not identify which combination gives the max or min correlation, which is my goal.

I am unsure how I would cover all of the possible column pairs using loc[], or whether there is a better / easier way of placing everything into a df.
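As a sanity check for the single survived/pclass pair, the extremes of the dict above can at least be read off with plain max/min once the NaN entry is filtered out (a sketch using the values printed above):

```python
import math

# Per-segment survived/pclass correlations from the loop above;
# child_Queenstown produced NaN, so filter NaN before taking extremes.
output = {'man_Southampton_corr': -0.19809207465001574,
          'man_Cherbourg_corr': -0.2102998217386208,
          'man_Queenstown_corr': 0.06717166132798494,
          'woman_Southampton_corr': -0.5525868192717193,
          'woman_Cherbourg_corr': -0.5549942419871897,
          'woman_Queenstown_corr': -0.16896381511084563,
          'child_Southampton_corr': -0.5086941796202842,
          'child_Cherbourg_corr': -0.2390457218668788,
          'child_Queenstown_corr': float('nan')}

valid = {k: v for k, v in output.items() if not math.isnan(v)}
highest = max(valid, key=valid.get)   # 'man_Queenstown_corr'
lowest = min(valid, key=valid.get)    # 'woman_Cherbourg_corr'
print(highest, lowest)
```

This only handles one column pair at a time, though; the answer below does it for all pairs at once.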

Upvotes: 1

Views: 221

Answers (1)

jezrael

Reputation: 862551

You can use DataFrameGroupBy.corr with DataFrame.stack, remove the 1 and -1 rows (the self-correlations), get the maximal and minimal values per group with DataFrameGroupBy.idxmax and DataFrameGroupBy.idxmin (plus Series.loc for the selection), join the two results together with concat, and finally use a dictionary comprehension for the final dict:

import pandas as pd
import seaborn as sns

df_test = sns.load_dataset('titanic')
# print (df_test)

s = df_test.groupby(['who','embark_town']).corr().stack()
s = s[~s.isin([1, -1])]
s = (pd.concat([s.loc[s.groupby(level=[2,3]).idxmax()], 
                s.loc[s.groupby(level=[2,3]).idxmin()]], keys=('highest','lowest'))
       .sort_index(level=[3,4], sort_remaining=False))
print (s)
         who    embark_town                  
highest  child  Queenstown   age       alone     0.877346
lowest   woman  Queenstown   age       alone    -0.767493
highest  woman  Queenstown   age       fare      0.520461
lowest   child  Queenstown   age       fare     -0.877346
highest  woman  Queenstown   age       parch     0.633627
...
lowest   woman  Queenstown   survived  parch    -0.433029
highest  man    Queenstown   survived  pclass    0.067172
lowest   woman  Cherbourg    survived  pclass   -0.554994
highest  man    Queenstown   survived  sibsp     0.232685
lowest   child  Southampton  survived  sibsp    -0.692578
Length: 84, dtype: float64

output = {f'{k[0]}_corr_{k[3]}_{k[4]}':
          [f'who = {k[1]}', f'embark_town = {k[2]}',v] for k, v in s.items()}

print(output)
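The same groupby().corr().stack() + idxmax pattern can be checked on a tiny synthetic frame (hypothetical data, not the titanic set). Note that filtering out 1 / -1 also drops genuinely perfect correlations, as group b's x/y pair shows here:

```python
import pandas as pd

# Hypothetical two-group frame: group a has a near-perfect x/y
# correlation, group b an exact -1 correlation.
df = pd.DataFrame({
    'grp': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],
    'x':   [1, 2, 3, 4, 1, 2, 3, 4],
    'y':   [1, 2, 3, 5, 4, 3, 2, 1],
})

s = df.groupby('grp')[['x', 'y']].corr().stack()
# Drops the diagonal 1.0 entries, and also group b's exact -1 x/y pair.
s = s[~s.isin([1, -1])]

# Index levels after stacking: (grp, first column, second column);
# per column pair, pick the group with the maximal correlation.
best = s.loc[s.groupby(level=[1, 2]).idxmax()]
print(best)
```

Only group a survives the filter, so `best` holds its x/y correlation (about 0.9827) for both orderings of the pair.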

EDIT: For the TOP3 and BOTTOM3 it is possible to sort the values and use GroupBy.head and GroupBy.tail:

import pandas as pd
import seaborn as sns

df_test = sns.load_dataset('titanic')
# print (df_test)

s = df_test.groupby(['who','embark_town']).corr().stack()
s = s[~s.isin([1, -1])].sort_values()

s = (pd.concat([s.groupby(level=[2,3]).head(3), 
                s.groupby(level=[2,3]).tail(3)], keys=('lowest','highest'))
        .sort_index(level=[3,4], sort_remaining=False)
        )
print (s)
         who    embark_town                 
lowest   woman  Queenstown   age       alone   -0.767493
                Cherbourg    age       alone   -0.073881
         man    Queenstown   age       alone   -0.069001
highest  child  Southampton  age       alone    0.169244
                Cherbourg    age       alone    0.361780
...
lowest   woman  Southampton  survived  sibsp   -0.252524
         man    Southampton  survived  sibsp   -0.046691
highest  man    Cherbourg    survived  sibsp    0.125276
         woman  Queenstown   survived  sibsp    0.143025
         man    Queenstown   survived  sibsp    0.232685
Length: 252, dtype: float64
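If you also want the TOP3/BOTTOM3 result as a dict, the plain f-string keys from the first part would collide (three rows per column pair), so one option is to collect the values into lists. A sketch with an assumed helper `to_dict` (not from the answer above), demonstrated on a small hand-built Series with hypothetical values:

```python
import pandas as pd
from collections import defaultdict

# Assumed helper: collect a (side, who, town, col1, col2)-indexed Series,
# like the TOP3/BOTTOM3 result above, into a dict of lists.
def to_dict(s):
    out = defaultdict(list)
    for (side, who, town, c1, c2), v in s.items():
        out[f'{side}_corr_{c1}_{c2}'].append(
            [f'who = {who}', f'embark_town = {town}', v])
    return dict(out)

# Tiny hand-built example (hypothetical values, not the real titanic numbers):
idx = pd.MultiIndex.from_tuples([
    ('highest', 'man', 'Queenstown', 'survived', 'pclass'),
    ('highest', 'woman', 'Cherbourg', 'survived', 'pclass'),
])
demo = pd.Series([0.07, 0.03], index=idx)

result = to_dict(demo)
print(result)
```

Each key then maps to up to three `[who, embark_town, value]` entries instead of a single one.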

Upvotes: 1
