Reputation: 2573
I have two pandas DataFrames: one containing the anonymized hashes of customer numbers (here, for simplicity, the SHA-1 hashes of the numbers 0-19):
import pandas as pd
import numpy as np
from hashlib import sha1
# sha1() needs bytes, so hash each integer's raw byte representation
df_customers = pd.DataFrame([sha1(i.tobytes()).hexdigest() for i in np.arange(20)])
df_customers.columns = ["customer"]
Now I have a second table (DataFrame) with 200 records of customers picking from a selection of 10 different kinds of fruit:
fruit = ["apple", "banana", "peach", "plum", "orange", "cumquat", "raspberry", "lemon", "rhubarb", "pineapple"]
df_eating = pd.DataFrame(
    np.c_[np.array([sha1(i.tobytes()).hexdigest() for i in np.random.randint(0, 20, 200)]),
          np.array([fruit[i] for i in np.random.randint(0, len(fruit), 200)])],
    columns=("customer_id", "fruit")
)
Now I would like to add a column to the customer DataFrame that indicates the variety of fruit eaten, that is, the number of different fruits each customer has eaten. For that I did:
variety = df_eating.groupby("customer_id")["fruit"].apply( lambda x: len(np.unique(x)))
This gives me a Series. Now I feel there should be a straightforward way to add this back to df_customers, respecting the customer_id, but here I'm quite stuck:
df_customers["variety"] = variety
does not respect the customer id and gives NaN for every value, and functions like pd.merge(), which have an option to merge "on" something, did not do what I wanted either.
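To show what I mean by "not respecting the customer id": I assume the problem is that the Series is indexed by the hash strings while df_customers still has its plain 0-19 integer index, so the assignment aligns on mismatched indexes. A minimal sketch of what I am seeing:
# variety is indexed by the hash strings from groupby("customer_id"),
# while df_customers still has its default 0..19 RangeIndex
print(variety.index[:3])       # hash strings
print(df_customers.index[:3])  # 0, 1, 2

# plain column assignment aligns on these mismatched indexes,
# so every value comes out as NaN
df_customers["variety"] = variety
print(df_customers["variety"].isnull().all())  # True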
Upvotes: 2
Views: 629
Reputation: 393963
If I understand what you want, then you can call map and pass the Series:
In [36]:
df_customers['variety'] = df_customers['customer'].map(variety)
df_customers
Out[36]:
customer variety
0 9069ca78e7450a285173431b3e52c5c25299e473 7
1 3c585604e87f855973731fea83e21fab9392d2fc 9
2 0aaf76f425c6e0f43a36197de768e67d9e035abb 6
3 8e146c3c4e33449f95a49679795f74f7ae19ecc1 6
4 d6459ab29c7b9a9fbf0c7c15fa35faa30fbf8cc6 7
5 ddaf0ed54dfc227ce677b5c2b44e3edee7c7db77 5
6 8098e7dfb09adba3bf783794ba0db81985a814d7 6
7 2f086fc767a0dac59a38c67f409b4f74a1eab39f 8
8 a454ca483b4a66b83826d061be2859dd79ff0d6c 7
9 9db063f3b5e0adfd0d29a03db0a1c207b3740a94 6
10 eb408ddc4fa484e6befdf5954e56a2198c7a9fab 8
11 94312fc592ee3f323b3f9d8612737c507ec7f6c3 5
12 f3a56292ca640b843071c9a143404cea014f4d5c 9
13 b1197c208248d0f7ffb3e322d5ec187441dc1b26 7
14 f143c36fc53bfde11a8d122249aced46c43cc2e2 7
15 aefa2f5632d36978838bff3aabcef5ee01395729 5
16 5497b0911b3f5772723def3b360a2e654327c19b 6
17 498bcbf6cbffcc8dd2623f388d81f44cfad1014d 5
18 96760d655a51e69d67d32a5f18c23c9bfe0576cf 5
19 fe5aa6438ae9b661b033b91e9c679ad2898cbfd4 6
With regard to optimising your code, you can replace this line:
variety = df_eating.groupby("customer_id")["fruit"].apply( lambda x: len(np.unique(x)))
with the equivalent:
variety = df_eating.groupby("customer_id")["fruit"].nunique()
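If you would rather go through merge, as attempted in the question, a rough sketch along these lines should also work; it turns the grouped Series back into a DataFrame so the hash becomes an ordinary column to join on (the names variety_df and df_with_variety are just for illustration):
# turn the grouped Series into a two-column frame: customer_id, variety
variety_df = variety.reset_index(name="variety")

# left-join onto the customer column, matching the two hash columns,
# and drop the duplicated hash column afterwards
df_with_variety = df_customers[["customer"]].merge(
    variety_df, left_on="customer", right_on="customer_id", how="left"
).drop("customer_id", axis=1)
map is the simpler route when you only need one looked-up column; merge becomes preferable once you want to bring several aggregated columns over at once.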
Upvotes: 4