jpbalarini

Reputation: 1172

Generate a changelog from a pandas dataframe

I'm running a crawl every day for a number of pages, and I want to track the changes between each crawl. Basically each day I get what I need from the pages and write that to a historical table. Then for each URL, I generate a pandas data frame from that historical data. I was able to achieve the following:

                          from            to
crawl_id                                    
20190609 price              50           100
20190613 price             100           140
         vdp_url  www.url1.com  www.url2.com
20190614 vdp_url  www.url2.com  www.url1.com
20190616 vdp_url  www.url1.com  www.url3.com

And I need to generate something like this:

[{"date": "20190609", "from": 50, "to": 100, "field": "price"},
 {"date": "20190613", "from": 100, "to": 140, "field": "price"},
 {"date": "20190613", "from": "www.url1.com", "to": "www.url2.com", "field": "vdp_url"},
 {"date": "20190614", "from": "www.url2.com", "to": "www.url1.com", "field": "vdp_url"},
 {"date": "20190616", "from": "www.url1.com", "to": "www.url3.com", "field": "vdp_url"}]

This is the code I'm using to generate the above dataframe:

import pandas as pd

histories_df = [{'crawl_id': '20190606', 'vdp_url': 'www.url1.com', 'price': None},
                {'crawl_id': '20190607', 'vdp_url': 'www.url1.com', 'price': None},
                {'crawl_id': '20190609', 'vdp_url': 'www.url1.com', 'price': 50},
                {'crawl_id': '20190613', 'vdp_url': 'www.url1.com', 'price': 100},
                {'crawl_id': '20190614', 'vdp_url': 'www.url2.com', 'price': 140},
                {'crawl_id': '20190615', 'vdp_url': 'www.url1.com', 'price': None},
                {'crawl_id': '20190616', 'vdp_url': 'www.url1.com', 'price': 140},
                {'crawl_id': '20190617', 'vdp_url': 'www.url3.com', 'price': 140}]
histories_df = pd.DataFrame(histories_df)
trimmed_histories = histories_df.set_index('crawl_id')
# shift(-1) pairs each row with the *next* crawl, not the previous one
histories_df_next = trimmed_histories.shift(-1)
# keep only the cells whose value differs from the next crawl's value
diff_bool = trimmed_histories.where(trimmed_histories.values != histories_df_next.values).notna().stack()
difference = pd.concat([trimmed_histories.stack()[diff_bool], histories_df_next.stack()[diff_bool]], axis=1).dropna()
difference.columns = ['from', 'to']

I've been trying to achieve this for a couple of hours using plain for loops, iterrows, indexing, groupby, and whatever else I could find, but with no luck.

Thanks!

Upvotes: 0

Views: 164

Answers (1)

BENY

Reputation: 323226

Use to_dict with the 'records' orientation (the old 'r' shorthand is no longer accepted in recent pandas versions):

difference.rename_axis(['date', 'field']).reset_index().to_dict('records')
Out[128]: 
[{'date': '20190609', 'field': 'price', 'from': 50.0, 'to': 100.0},
 {'date': '20190613', 'field': 'price', 'from': 100.0, 'to': 140.0},
 {'date': '20190613',
  'field': 'vdp_url',
  'from': 'www.url1.com',
  'to': 'www.url2.com'},
 {'date': '20190614',
  'field': 'vdp_url',
  'from': 'www.url2.com',
  'to': 'www.url1.com'},
 {'date': '20190616',
  'field': 'vdp_url',
  'from': 'www.url1.com',
  'to': 'www.url3.com'}]
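If the stack-based boolean indexing ever complains about unalignable indexes on a newer pandas (trimmed_histories.stack() drops the NaN prices, so its index can diverge from diff_bool's), here is a plain-loop sketch of the same shift-and-compare idea that builds the record list directly. It assumes the tracked columns are exactly price and vdp_url, and that changes where either side is missing should be skipped:

```python
import pandas as pd

histories = [{'crawl_id': '20190606', 'vdp_url': 'www.url1.com', 'price': None},
             {'crawl_id': '20190607', 'vdp_url': 'www.url1.com', 'price': None},
             {'crawl_id': '20190609', 'vdp_url': 'www.url1.com', 'price': 50},
             {'crawl_id': '20190613', 'vdp_url': 'www.url1.com', 'price': 100},
             {'crawl_id': '20190614', 'vdp_url': 'www.url2.com', 'price': 140},
             {'crawl_id': '20190615', 'vdp_url': 'www.url1.com', 'price': None},
             {'crawl_id': '20190616', 'vdp_url': 'www.url1.com', 'price': 140},
             {'crawl_id': '20190617', 'vdp_url': 'www.url3.com', 'price': 140}]
trimmed = pd.DataFrame(histories).set_index('crawl_id')

records = []
for col in ['price', 'vdp_url']:          # columns assumed to be the tracked fields
    current = trimmed[col]
    following = current.shift(-1)          # value at the *next* crawl
    # a change is recorded only when both sides exist and actually differ
    changed = current.notna() & following.notna() & (current != following)
    for date, frm, to in zip(current.index[changed],
                             current[changed], following[changed]):
        records.append({'date': date, 'from': frm, 'to': to, 'field': col})
```

The per-column loop naturally groups the output by field (all price changes first, then all vdp_url changes), matching the order of the expected list.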

Upvotes: 2
