Reputation: 14075
I have two Series (larger than this toy example), e.g.:
s1 = pd.read_json('{"count":{"1614470400000":4,"1617148800000":0,"1619740800000":0,"1622419200000":4,"1625011200000":4,"1627689600000":5,"1630368000000":0,"1632960000000":8,"1635638400000":2}}')['count']
s2 = pd.read_json('{"count":{"1625011200000":1}}')['count']
What would be an efficient way to add them? I can imagine creating a join and then adding while ignoring NaNs, but I suspect that would be slower than some operation that merge-sorts the two indexes.
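For reference, the join-based approach I have in mind is roughly this sketch (continuing from s1 and s2 above; the column names left/right are just placeholders):
joined = pd.concat([s1.rename('left'), s2.rename('right')], axis=1)  # outer join on the index
summed = joined['left'].fillna(0) + joined['right'].fillna(0)        # add, treating missing values as 0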
Here is the desired result for the toy example:
Upvotes: 1
Views: 665
Reputation: 323236
Try Series.add with the parameter fill_value=0:
out = s1.add(s2, fill_value=0)  # a label missing from one Series is treated as 0 on that side
count
2019-06-30 1.0
2021-02-28 4.0
2021-03-31 0.0
2021-04-30 0.0
2021-05-31 4.0
2021-06-30 4.0
2021-07-31 5.0
2021-08-31 0.0
2021-09-30 8.0
2021-10-31 2.0
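For comparison, plain addition without fill_value leaves NaN wherever a label exists in only one of the Series (shown only to illustrate what the parameter does):
s1.add(s2)                 # same as s1 + s2: non-overlapping labels become NaN
s1.add(s2, fill_value=0)   # the missing side is filled with 0, so every label is kept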
Upvotes: 1
Reputation: 5648
The below works, but I really don't know how you are adding June 30 2019 and June 30 2021 together to get 5. I had to change the 2019 date to 2021 to make it work.
s3 = pd.concat([s1, s2])   # stack both Series into one, keeping duplicate index labels
s3.groupby(level=0).sum()  # sum the values that share the same index label
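As a quick sanity check (a sketch; via_add and via_groupby are just illustrative names), both answers should produce the same union-indexed result for this data:
via_add = s1.add(s2, fill_value=0)
via_groupby = pd.concat([s1, s2]).groupby(level=0).sum()
print((via_add == via_groupby).all())  # expected True for these inputs (dtypes may differ)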
Upvotes: 1