Reputation: 3623
I have a pandas dataframe which looks like this:
Input dataframe:
A B C
0 m h c
1 l c m
2 h m l
3 c l h
4 m c m
I want to replace each occurrence of the values l, m, h, and c with a floating-point number in a given range. The range for each string is as follows:
Range:
l: 0.0 - 0.25
m: 0.25 - 0.5
h: 0.5 - 0.75
c: 0.75 - 1.0
Each occurrence should get a value in its given range, but the values should not repeat. After the transformation, the output dataframe should look like this:
Output dataframe:
A B C
0 0.31 0.51 0.76
1 0.12 0.56 0.28
2 0.61 0.35 0.21
3 0.8 0.16 0.71
4 0.46 0.72 0.37
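For reference, a minimal snippet to reconstruct this sample input (assuming the usual imports):

import numpy as np
import pandas as pd

# sample input from the question above
df = pd.DataFrame({'A': ['m', 'l', 'h', 'c', 'm'],
                   'B': ['h', 'c', 'm', 'l', 'c'],
                   'C': ['c', 'm', 'l', 'h', 'm']})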
I have tried one approach using transform, but it's not fully working, as values are repeated within each column:
def _foo(col):
    w = {'l': np.random.uniform(0.0, 0.25),
         'm': np.random.uniform(0.25, 0.5),
         'h': np.random.uniform(0.5, 0.75),
         'c': np.random.uniform(0.75, 1.0)}
    return col.replace(w)

df = df.transform(_foo)
If I use the apply method, the same problem happens: values are repeated along the rows. It also performs poorly, since the actual dataframe has 50-60 thousand rows, so apply runs that many times:
def _bar(row):
    w = {'l': np.random.uniform(0.0, 0.25),
         'm': np.random.uniform(0.25, 0.5),
         'h': np.random.uniform(0.5, 0.75),
         'c': np.random.uniform(0.75, 1.0)}
    return row.replace(w)

df = df.apply(_bar, axis=1)
Any suggestions on how to do this efficiently in pandas?
Upvotes: 2
Views: 80
Reputation: 88226
Here's a vectorized approach aimed at performance:
def map_by_val(df, l):
    # dictionary mapping dataframe values to row indices
    d = {j: i for i, j in enumerate(l)}
    # replace the letters with integer indices using the dictionary
    a = df.replace(d).to_numpy()
    # since the ranges form a contiguous sequence, create one
    # linspace over [0, 1] and split it into 4 rows of 10 bins each
    rep = np.linspace(0.0, 1.0, 40).reshape(4, -1)
    # a random bin (column) index for every element
    ix = np.random.randint(0, rep.shape[1], a.shape)
    # advanced indexing: pick the random bin from each element's row
    out = rep[a.ravel(), ix.ravel()].reshape(a.shape).round(2)
    return pd.DataFrame(out)

l = ['l', 'm', 'h', 'c']
map_by_val(df, l)
0 1 2
0 0.49 0.74 0.87
1 0.23 0.90 0.49
2 0.67 0.49 0.18
3 0.79 0.21 0.56
4 0.46 0.87 0.36
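Note that the result comes back with default integer column labels. If you want to keep the original labels and index, a small tweak to the last line of map_by_val (my addition, not part of the original function):

    return pd.DataFrame(out, index=df.index, columns=df.columns)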
Benchmark
The object dtype unfortunately limits the performance of the vectorized method, since DataFrame.replace is called first to map the values via a dictionary. This answer and the stack+groupby answer perform quite similarly:
import perfplot
import matplotlib.pyplot as plt

l = ['l', 'm', 'h', 'c']
ranges = {'l': (0, 0.25),
          'm': (0.25, 0.5),
          'h': (0.5, 0.75),
          'c': (0.75, 1)}

def get_rand(x):
    lower, upper = ranges[x.iloc[0]]
    return np.random.uniform(lower, upper, len(x))

def stack_groupby(df):
    s = df.stack()
    return s.groupby(s).transform(get_rand).unstack()

plt.figure(figsize=(12, 6))
perfplot.show(
    setup=lambda n: pd.concat([df] * n, axis=0).reset_index(drop=True),
    kernels=[
        lambda s: s.applymap(lambda x: np.random.uniform(*ranges[x], 1)[0]),
        lambda s: map_by_val(s, l),
        lambda s: stack_groupby(s)
    ],
    labels=['applymap', 'map_by_val', 'stack_groupby'],
    n_range=[2**k for k in range(0, 17)],
    xlabel='N',
    equality_check=None
)
Upvotes: 3
Reputation: 323226
You may try applymap (this uses the ranges dictionary defined in the other answers):

out = df.applymap(lambda x: np.random.uniform(*ranges[x], 1)[0])
A B C
0 0.399545 0.592302 0.862708
1 0.135859 0.873516 0.381962
2 0.665365 0.410010 0.127253
3 0.936032 0.241266 0.686508
4 0.273130 0.839988 0.391465
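Note that DataFrame.applymap is deprecated as of pandas 2.1 in favor of the element-wise DataFrame.map; on recent versions the equivalent call would be:

out = df.map(lambda x: np.random.uniform(*ranges[x], 1)[0])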
Upvotes: 1
Reputation: 150735
Let's try stack + groupby; transform draws a fresh array of uniform values for each letter group, so values (almost surely) don't repeat:

s = df.stack()
ranges = {'l': (0, 0.25),
          'm': (0.25, 0.5),
          'h': (0.5, 0.75),
          'c': (0.75, 1)}

def get_rand(x):
    # every group shares one letter; draw len(x) independent uniforms from its range
    lower, upper = ranges[x.iloc[0]]
    return np.random.uniform(lower, upper, len(x))

s.groupby(s).transform(get_rand).unstack()
Output:
A B C
0 0.351150 0.673156 0.829484
1 0.095481 0.836520 0.258559
2 0.599817 0.282766 0.048788
3 0.851617 0.010585 0.501335
4 0.422449 0.997759 0.287950
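A quick sanity check (a minimal sketch, assuming the df, s, ranges, and get_rand defined above) that every generated value lands in the range of the letter it replaced, and that no value repeats:

out = s.groupby(s).transform(get_rand).unstack()
for letter, (lo, hi) in ranges.items():
    mask = (df == letter).to_numpy()
    vals = out.to_numpy()[mask]
    # np.random.uniform draws from the half-open interval [lo, hi)
    assert ((vals >= lo) & (vals < hi)).all()
# independent float draws collide with probability ~0
assert out.stack().is_unique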
Upvotes: 1