Bjc51192

Reputation: 327

Populating a data frame from a separate table using loc

import pandas as pd

data1 = {'Policy Number': ['FSH1235456', 'FSH7643643', 'CHH123124', 'CHH123145252', 'CHH124124'],
         'State': ['FL', 'TX', 'GA', 'TX', 'TX'],
         'TERR': [1, 2, 3, 4, 5]}

data2 = {'TERR': [1, 2, 3, 4, 5],
         'CHH': [0, .15, .65, .35, .20],
         'FSH': [0, .15, .25, .35, .20]}

output = {'Policy Number': ['FSH1235456', 'FSH7643643', 'CHH123124', 'CHH123145252', 'CHH124124'],
          'State': ['FL', 'TX', 'GA', 'TX', 'TX'],
          'TERR': [1, 2, 3, 4, 5],
          'Test': [0, .15, 0, 0, 0]}

df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
df3 = pd.DataFrame(output)

Test data above.

I am trying to create a new column in df1, call it df1['Test'], which contains the values of df2['FSH'] based on the following criteria:

  1. The state is 'TX'
  2. The Policy Number contains 'FSH'
  3. The value of df1['TERR'] equals the value of df2['TERR']

See df3 for the correct output.

What I tried was the following:

if df1.State.any()=="TX":
    if df1["Policy Number"].str.contains("FSH").any():
        for i in df["TERR"]:
            df1['% TERR']=df2.loc[[i],["FSH"]]

However, my output is riddled with NaN values, along with only one correct value.

I tried checking to ensure the correct i values were being fed into df2 via

print(df2.loc[[i], ["FSH"]])

and it is printing correctly.

Any thoughts?

Upvotes: 0

Views: 164

Answers (4)

Vaishali

Reputation: 38415

You can use numpy.where by passing the conditions:

import numpy as np

cond1 = (df1['State'] == 'TX')
cond2 = (df1['Policy Number'].str.contains('FSH'))
cond3 = (df1['TERR'] == df2['TERR'])
df1['Test'] = np.where(cond1 & cond2 & cond3, df2['FSH'], 0)

    Policy Number   State   TERR    Test
0   FSH1235456      FL      1       0.00
1   FSH7643643      TX      2       0.15
2   CHH123124       GA      3       0.00
3   CHH123145252    TX      4       0.00
4   CHH124124       TX      5       0.00
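Note that cond3 and df2['FSH'] only line up with df1 here because both frames share the same default 0–4 index and list the TERR values in the same order. If that isn't guaranteed, a lookup keyed on TERR is safer; a minimal sketch, assuming the TERR values in df2 are unique:

# Look up FSH by TERR value instead of relying on row order,
# then apply the same State and policy-prefix conditions.
fsh_by_terr = df1['TERR'].map(df2.set_index('TERR')['FSH'])
df1['Test'] = np.where(cond1 & cond2, fsh_by_terr, 0)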

Upvotes: 1

jadelord

Reputation: 1775

Here is your solution:

# ... initialize df1 and df2 here
df3 = df1.join(df2.FSH)  # Join df2's FSH column onto df1 (aligned on the index)
df3 = df3.rename({"FSH": "TEST"}, axis=1)  # Rename the column

def set_tx_fsh(row):
    if row.State == "TX" and "FSH" in row["Policy Number"]:
        return row.TEST
    else:
        return 0

df3.TEST = df3.apply(set_tx_fsh, axis=1)  # Set values in "TEST" column based on your condition
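Note that join aligns on the index, so this assumes df1 and df2 are both indexed 0–4 with the matching TERR values in the same rows. If the row order can differ, keying the join on TERR is safer; a minimal sketch of that variant (the apply step stays the same):

# Join FSH keyed on the TERR column rather than the index
# (an assumption that TERR is the intended join key, as in the question's criteria).
df3 = df1.merge(df2[['TERR', 'FSH']], on='TERR', how='left').rename({"FSH": "TEST"}, axis=1)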

Upvotes: 0

robertwest

Reputation: 942

Are you just trying to get the data from df2 into df1? If so, you could reshape df2 using melt and then do a merge.

df1['policy_prefix'] = df1['Policy Number'].str[:3]
df2 = df2.melt(id_vars='TERR', value_vars=['CHH', 'FSH'], 
               value_name='Test', 
               var_name='policy_prefix')
df1 = df1.merge(df2, on=['policy_prefix', 'TERR'])

If you only want this to apply to rows where the state is 'TX', then you could set the other values to null after the merge:

import numpy as np
df1.loc[df1.State != 'TX', 'Test'] = np.nan
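The df3 in the question shows 0 rather than NaN for rows that fail the conditions, and it only keeps FSH values for 'FSH' policies. A minimal sketch of a final step that matches that output, assuming the merge above has already produced the 'Test' and 'policy_prefix' columns:

# Zero out rows that are not TX or not an 'FSH' policy, then drop the helper column.
df1.loc[(df1.State != 'TX') | (df1.policy_prefix != 'FSH'), 'Test'] = 0
df1 = df1.drop(columns='policy_prefix')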

Upvotes: 0

It_is_Chris

Reputation: 14113

I don't know if this is the best or fastest solution, but one option is to merge your two dataframes, filter based on your conditions, then update:

new = df1.merge(df2, on='TERR')
mask = new[((new['State']=='TX') & (new['Policy Number'].str.contains('FSH')))]

df1['Test'] = 0
df1['Test'].update(mask['FSH'])

    Policy Number   State   TERR    Test
0   FSH1235456        FL    1       0.00
1   FSH7643643        TX    2       0.15
2   CHH123124         GA    3       0.00
3   CHH123145252      TX    4       0.00
4   CHH124124         TX    5       0.00
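This works because Series.update aligns on the index: the merge on 'TERR' returns the rows in df1's original order with a fresh 0–4 index, so the rows kept in mask still sit at the positions of the matching rows in df1, and only those entries of df1['Test'] are overwritten.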

Upvotes: 1
