ilikecats

Reputation: 319

How to add missing rows with 0 counts to a Pandas DataFrame?

I have a Pandas DataFrame that looks like this:

[image: my frame — the same data is given as text below]

Here is the problem with the dataset: if there was a 0 count, that row was never created in the CSV file given to me. So, for example, week 6 only has 2 entries (the counts for only 2 hours). I want week 6 to instead have 168 entries (since 1 week has 168 hours), where 166 of those entries have a count of 0. So there should be rows like:

[year=2018, week=6, day of week=1, hour of day=1, count=0, unit_id=blah, unit_label=blah]

[year=2018, week=6, day of week=1, hour of day=2, count=0, unit_id=blah, unit_label=blah]

...

[year=2018, week=6, day of week=1, hour of day=23, count=1, unit_id=blah, unit_label=blah]

...

and so on. From looking around, I am guessing I need to use "reindex" somehow, but I can't just use date ranges directly given that I want those very specific columns. Any advice is appreciated.

Data as text:

{'count': {0: 5, 1: 1, 2: 1, 3: 8, 4: 1},'day_of_week': {0: 4, 1: 5, 2: 4, 3: 3, 4: 3},'hour_of_day': {0: 23, 1: 0, 2: 18, 3: 19, 4: 21},'unit_id': {0: 'bc9b8ac4-3c57-4fe1-9085-0e3d0b6233d6',1: 'bc9b8ac4-3c57-4fe1-9085-0e3d0b6233d6',2: '7a1efb1d-d4c1-47e1-9320-ff5707eae91e',3: '7a1efb1d-d4c1-47e1-9320-ff5707eae91e',4: '7a1efb1d-d4c1-47e1-9320-ff5707eae91e'},'unit_label': {0: '_debug TestPA',1: '_debug TestPA',2: '_TEMPORARILY_DISABLED_Jenn`s Favorite Destinations',3: '_TEMPORARILY_DISABLED_Jenn`s Favorite Destinations',4: '_TEMPORARILY_DISABLED_Jenn`s Favorite Destinations'},'week': {0: 29, 1: 29, 2: 46, 3: 51, 4: 51},'year': {0: 2017, 1: 2017, 2: 2015, 3: 2015, 4: 2015}}
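
For convenience, a minimal sketch (assuming pandas is imported as pd) that rebuilds the frame from that dict:

import pandas as pd

# rebuild the example DataFrame from the dict pasted above
data = {'count': {0: 5, 1: 1, 2: 1, 3: 8, 4: 1},
        'day_of_week': {0: 4, 1: 5, 2: 4, 3: 3, 4: 3},
        'hour_of_day': {0: 23, 1: 0, 2: 18, 3: 19, 4: 21},
        'unit_id': {0: 'bc9b8ac4-3c57-4fe1-9085-0e3d0b6233d6',
                    1: 'bc9b8ac4-3c57-4fe1-9085-0e3d0b6233d6',
                    2: '7a1efb1d-d4c1-47e1-9320-ff5707eae91e',
                    3: '7a1efb1d-d4c1-47e1-9320-ff5707eae91e',
                    4: '7a1efb1d-d4c1-47e1-9320-ff5707eae91e'},
        'unit_label': {0: '_debug TestPA',
                       1: '_debug TestPA',
                       2: '_TEMPORARILY_DISABLED_Jenn`s Favorite Destinations',
                       3: '_TEMPORARILY_DISABLED_Jenn`s Favorite Destinations',
                       4: '_TEMPORARILY_DISABLED_Jenn`s Favorite Destinations'},
        'week': {0: 29, 1: 29, 2: 46, 3: 51, 4: 51},
        'year': {0: 2017, 1: 2017, 2: 2015, 3: 2015, 4: 2015}}
df = pd.DataFrame(data)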

Upvotes: 0

Views: 303

Answers (1)

sacuL

Reputation: 51425

I believe this should work for you. It creates a dataframe with one row for each hour from your minimum date to your maximum date (so quite large!), so you'll have an entry for every hour, with count set to 0 wherever a row was missing.

# Start by creating a datetime column in your dataframe:
df['datetime'] = pd.to_datetime(df[['year', 'week', 'day_of_week', 'hour_of_day']]
               .apply(lambda x: '-'.join(x.astype('str')),
                      axis=1), format='%Y-%W-%w-%H')

# Use reindex to create one row for every hour between the min and max datetime
new_df = (df.set_index('datetime')
          .reindex(pd.date_range(min(df.datetime), max(df.datetime), freq='H')))
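
As a quick sanity check, the reindexed frame should now contain one row for every hour in that span (a rough sketch):

# the new index is a complete hourly range from the earliest to the latest datetime
print(new_df.index.min(), new_df.index.max(), len(new_df))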

# Go through and fill all your date and time columns as necessary
new_df['week'] = new_df.index.week - 1
new_df['day_of_week'] = new_df.index.dayofweek + 1
new_df['year'] = new_df.index.year
new_df['hour_of_day'] = new_df.index.hour

# next, fill NaN in count with 0, and forward fill in unit id and unit label
new_df['count'].fillna(0, inplace=True)
new_df[['unit_id', 'unit_label']] = new_df[['unit_id', 'unit_label']].fillna(method='ffill')

You can then get rid of the datetime index, if you wish:

new_df.reset_index(drop=True, inplace=True)

>>> new_df.head()
   count  day_of_week  hour_of_day                               unit_id  \
0    1.0            4           18  7a1efb1d-d4c1-47e1-9320-ff5707eae91e   
1    0.0            4           19  7a1efb1d-d4c1-47e1-9320-ff5707eae91e   
2    0.0            4           20  7a1efb1d-d4c1-47e1-9320-ff5707eae91e   
3    0.0            4           21  7a1efb1d-d4c1-47e1-9320-ff5707eae91e   
4    0.0            4           22  7a1efb1d-d4c1-47e1-9320-ff5707eae91e   

                                          unit_label  week  year  
0  _TEMPORARILY_DISABLED_Jenn`s Favorite Destinat...    46  2015  
1  _TEMPORARILY_DISABLED_Jenn`s Favorite Destinat...    46  2015  
2  _TEMPORARILY_DISABLED_Jenn`s Favorite Destinat...    46  2015  
3  _TEMPORARILY_DISABLED_Jenn`s Favorite Destinat...    46  2015  
4  _TEMPORARILY_DISABLED_Jenn`s Favorite Destinat...    46  2015  
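
To double-check that nothing was dropped, a rough sketch of a final verification (reusing the datetime column created in the first step):

# the filled frame should have exactly one row per hour between the first and last datetime
expected_hours = pd.date_range(df['datetime'].min(), df['datetime'].max(), freq='H')
assert len(new_df) == len(expected_hours)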

Upvotes: 1
