MPizzotti

Reputation: 81

How to group-by until different value in Pandas?

After I get all the data I need inside df_base (which I won't include, for simplicity), I want to return df_product_final with the columns Product, Speed and SpeedAvg.

For the first two columns it isn't a problem, because I just copy them from df_base into df_product_final.

For SpeedAvg I need to insert into df_product_final the average speed for each product, computed over the rows until a new product shows up in the Product column.

My code:

    df_product_final['Product'] = df_product_total['Product']
    df_product_final['Speed'] = df_base['production'] / df_base['time_production']
    df_product_final=df_product_final.fillna(0)     
    df_product_final['SpeedAvg'] = df_product_final["Speed"].groupby(df_product_final['Product']).mean()     

    df_product_final['newindex'] = df_base['date_key']+df_base['hour']+df_base['minute']
    df_product_final['newindex'] = pd.to_datetime(df_product_final['newindex'], utc=1, format = "%Y%m%d%H%M%S")
    df_product_final.set_index('newindex',inplace=True)
    df_product_final=df_product_final.fillna(0)

df_product_final:

                           Product       Speed    SpeedAvg
newindex
2020-10-15 22:00:00+00:00        0    0.000000  52.944285
2020-10-15 23:00:00+00:00        0    0.000000   0.000000
2020-10-16 00:00:00+00:00        0    0.000000   0.000000
2020-10-16 01:00:00+00:00        0    0.000000   0.000000
2020-10-16 02:00:00+00:00        0    0.000000   0.000000
...
2020-10-16 20:00:00+00:00        0    154.000000   0.000000
2020-10-16 21:00:00+00:00        0    150.000000   0.000000

I would like to get this result instead:

                           Product       Speed    SpeedAvg
newindex
2020-10-15 22:00:00+00:00        0    0.000000  52.944285
2020-10-15 23:00:00+00:00        0    0.000000  52.944285
2020-10-16 00:00:00+00:00        0    0.000000  52.944285
2020-10-16 01:00:00+00:00        0    0.000000  52.944285
...

2020-10-16 20:00:00+00:00        0    154.000000   52.944285
2020-10-16 21:00:00+00:00        0    0.000000   52.944285

To make things even more complicated, the same product can show up again, separated by more than an hour. In that case the SpeedAvg must depend on the new values, not on the previous run of that product.

example:

                           Product       Speed   SpeedAvg
newindex                                                 
2020-10-15 22:00:00+00:00        0    0.000000  52.944285
2020-10-15 23:00:00+00:00        0    0.000000  52.944285
2020-10-16 00:00:00+00:00        0    0.000000  52.944285
2020-10-16 01:00:00+00:00        0    0.000000  52.944285
2020-10-16 02:00:00+00:00        1    10.000000  10.000000
2020-10-16 03:00:00+00:00        1    10.000000  10.000000
2020-10-16 04:00:00+00:00        1    10.000000  10.000000
2020-10-16 05:00:00+00:00        1    10.000000  10.000000
2020-10-16 06:00:00+00:00        1    10.000000  10.000000
2020-10-16 07:00:00+00:00        0    0.000000   31.500000
2020-10-16 08:00:00+00:00        0    0.000000   31.500000
2020-10-16 16:00:00+00:00        0  183.000000   31.500000
2020-10-16 17:00:00+00:00        0   69.000000   31.500000
2020-10-16 18:00:00+00:00        0    0.000000   31.500000
2020-10-16 19:00:00+00:00        0    0.000000   31.500000
2020-10-16 20:00:00+00:00        0    0.000000   31.500000
2020-10-16 21:00:00+00:00        0    0.000000   31.500000
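For reference, this "until a new product shows up" grouping is a well-known pandas idiom: compare Product with its shifted copy to detect run boundaries, take a cumulative sum to number the runs, then use transform('mean') to broadcast each run's mean back onto its rows. A minimal sketch on toy data (not the real df_base):

```python
import pandas as pd

# Toy data: product 0 appears in two separate runs, like the example above
df = pd.DataFrame({
    'Product': [0, 0, 1, 1, 0, 0],
    'Speed':   [0.0, 10.0, 10.0, 10.0, 183.0, 69.0],
})

# True wherever Product differs from the previous row (run boundary);
# cumsum turns the boundaries into run ids: 1, 1, 2, 2, 3, 3
run_id = df['Product'].ne(df['Product'].shift()).cumsum()

# transform('mean') computes the mean per run and broadcasts it to every row
df['SpeedAvg'] = df.groupby(run_id)['Speed'].transform('mean')
```

Note that the two runs of product 0 get different averages, which is exactly the "same product, separated in time" behaviour asked for.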

I'm sorry in advance if I wasn't very clear; I'll provide any additional information needed to solve this problem.

Upvotes: 2

Views: 143

Answers (2)

MPizzotti

Reputation: 81

I think I found an easier solution to my problem.

Starting from df_base, I build a dictionary mapping a sequential index to each unique product key:

    product_keys = df_base['product_key'].drop_duplicates().reset_index(inplace=False, drop=True).to_dict()

the resulting dictionary will look something like:

 {0: 2,
  1: 1,
  2: 31
 }

After this step, using df.apply() I can iterate over every row of the dataframe, replacing each row's product key with the corresponding key of the dictionary just made:

 df_product_final['Product'] = df_base['product_key']
 # apply returns a new DataFrame, so the result must be assigned back
 df_product_final = df_product_final.apply(
     self.keys_from_value,
     dict=product_keys,
     axis='columns',
     raw=False,
     result_type='broadcast',
 )

self.keys_from_value:

 def keys_from_value(self, row, dict):
     if row is None:
         return row
     else:
         row['Product'] = list(dict.keys())[list(dict.values()).index(row['Product'])]
         return row
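As an aside, the reverse lookup inside keys_from_value can be done without apply at all, by inverting the dictionary once and using Series.map (a sketch, assuming the product keys are unique):

```python
import pandas as pd

product_keys = {0: 2, 1: 1, 2: 31}                 # index -> product_key, as above
inverse = {v: k for k, v in product_keys.items()}  # product_key -> index

# map replaces each product key with its dictionary key in one vectorized pass
products = pd.Series([2, 1, 31, 2])
mapped = products.map(inverse)
```

This avoids the linear `.index()` scan per row.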

The last step is calculating and inserting the correct SpeedAvg into the dataframe. It's quite easy: the first loop builds the group_id column based on the rows just modified; the second loop inserts the SpeedAvg corresponding to each group_id:

 gid = 0
 for i, row in df_base.iterrows():
     if row['diff'] != 0:
         gid += 1
     df_base.at[i, 'group_id'] = gid

 # avg is a Pandas Series of the SpeedAvg for each group_id
 avg = df_product_final["Speed"].groupby(df_base['group_id']).mean()

 # group_id was added to df_base, so copy it across before the lookup below
 df_product_final['group_id'] = df_base['group_id'].values

 for i, row in df_product_final.iterrows():
     for row_avg in avg.index.values.tolist():
         if row.at['group_id'] == row_avg:
             df_product_final.at[i, 'SpeedAvg'] = avg[row_avg]
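If it helps, both loops can be avoided: a nonzero diff marks the start of a group, so a boolean cumsum yields group_id directly, and transform('mean') broadcasts each group's mean back to its rows. A sketch with made-up values, assuming df_base and the Speed column are row-aligned:

```python
import pandas as pd

df_base = pd.DataFrame({'diff': [1, 0, 0, 1, 0]})
speed = pd.Series([0.0, 10.0, 20.0, 6.0, 18.0])

# Each nonzero diff starts a new group; cumsum numbers the groups: 1, 1, 1, 2, 2
df_base['group_id'] = (df_base['diff'] != 0).cumsum()

# Per-group mean, broadcast back to every row of the group
speed_avg = speed.groupby(df_base['group_id']).transform('mean')
```

The nested lookup loop then disappears entirely, since transform already puts each group's average on every row of that group.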

this is my resulting dataframe (df_product_final) after these steps:

                           Product       Speed    SpeedAvg
newindex                                                  
2020-10-20 09:00:00+00:00        0    0.000000    0.000000
2020-10-20 09:00:00+00:00        1    0.000000  104.528338
2020-10-20 10:00:00+00:00        1    0.000000  104.528338
2020-10-20 11:00:00+00:00        1    0.000000  104.528338
2020-10-20 12:00:00+00:00        1   68.375000  104.528338
2020-10-20 13:00:00+00:00        1  188.074074  104.528338
2020-10-20 14:00:00+00:00        1  172.192982  104.528338
2020-10-20 15:00:00+00:00        1  162.553571  104.528338
2020-10-20 16:00:00+00:00        1  178.867925  104.528338
2020-10-20 17:00:00+00:00        1  181.844828  104.528338
2020-10-20 18:00:00+00:00        1   93.375000  104.528338
2020-10-19 20:00:00+00:00        0    0.000000    0.000000
2020-10-19 21:00:00+00:00        0    0.000000    0.000000
2020-10-19 22:00:00+00:00        0    0.000000    0.000000
2020-10-19 23:00:00+00:00        0    0.000000    0.000000
2020-10-20 00:00:00+00:00        0    0.000000    0.000000
2020-10-20 01:00:00+00:00        0    0.000000    0.000000
2020-10-20 02:00:00+00:00        0    0.000000    0.000000
2020-10-20 03:00:00+00:00        0    0.000000    0.000000
2020-10-20 04:00:00+00:00        0    0.000000    0.000000
2020-10-20 05:00:00+00:00        0    0.000000    0.000000
2020-10-20 06:00:00+00:00        0    0.000000    0.000000
2020-10-20 07:00:00+00:00        0    0.000000    0.000000
2020-10-20 08:00:00+00:00        0    0.000000    0.000000
2020-10-20 09:00:00+00:00        2    0.000000   95.025762
2020-10-20 10:00:00+00:00        2    0.000000   95.025762
2020-10-20 11:00:00+00:00        2    0.000000   95.025762
2020-10-20 12:00:00+00:00        2   68.375000   95.025762
2020-10-20 13:00:00+00:00        2  188.074074   95.025762
2020-10-20 14:00:00+00:00        2  172.192982   95.025762
2020-10-20 15:00:00+00:00        2  162.553571   95.025762
2020-10-20 16:00:00+00:00        2  178.867925   95.025762
2020-10-20 17:00:00+00:00        2  181.844828   95.025762
2020-10-20 18:00:00+00:00        2   93.375000   95.025762
2020-10-20 19:00:00+00:00        2    0.000000   95.025762

Upvotes: 0

Andrew Pye

Reputation: 632

Found another solution that does use group by. Lmk if this works for you.

def _mean(df):
    df['SpeedAvg'] = df['Speed'].mean()
    return df

runs = df_product_final['Product'].ne(df_product_final['Product'].shift()).cumsum()
df_product_final = df_product_final.groupby(runs, group_keys=False).apply(_mean)

adapted from an answer to this post
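To illustrate, here is that approach on a small made-up frame (data invented, column names as in the question):

```python
import pandas as pd

df_product_final = pd.DataFrame({
    'Product': [0, 0, 1, 0],
    'Speed':   [2.0, 4.0, 10.0, 6.0],
})

def _mean(df):
    df = df.copy()  # avoid mutating the group view
    df['SpeedAvg'] = df['Speed'].mean()
    return df

# Runs of identical consecutive Product values share one group number
runs = df_product_final['Product'].ne(df_product_final['Product'].shift()).cumsum()
df_product_final = df_product_final.groupby(runs, group_keys=False).apply(_mean)
```

Each consecutive run gets its own average, so the two runs of product 0 end up with different SpeedAvg values.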

Upvotes: 1
