Reputation: 3671
I am testing the performance of a machine learning algorithm, specifically how it handles missing data and how much performance degrades when variables are missing.
For example, when 20% of variable x is missing, the accuracy of the model goes down by a certain percentage. To do this, I would like to simulate the missing data by replacing 20% of the rows in a dataframe column with NaN.
Is there an existing way to do this?
starting df:
d = {'var1': [1, 2, 3, 4], 'var2': [5, 6, 7, 8]}
df = pd.DataFrame(data=d)
df
var1 var2
0 1 5
1 2 6
2 3 7
3 4 8
end result: drop 50% of column 'var1' at random
df
var1 var2
0 NaN 5
1 2 6
2 NaN 7
3 4 8
Upvotes: 2
Views: 3461
Reputation: 33793
Reassign using the sample method, and pandas will introduce NaN values due to index auto-alignment:
df['var1'] = df['var1'].sample(frac=0.5)
Interactively:
In [1]: import pandas as pd
...: d = {'var1': [1, 2, 3, 4], 'var2': [5, 6, 7, 8]}
...: df = pd.DataFrame(data=d)
...: df
...:
Out[1]:
var1 var2
0 1 5
1 2 6
2 3 7
3 4 8
In [2]: df['var1'] = df['var1'].sample(frac=0.5)
In [3]: df
Out[3]:
var1 var2
0 1.0 5
1 NaN 6
2 3.0 7
3 NaN 8
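If you need the experiment to be repeatable, sample accepts a random_state argument; a minimal sketch using the starting frame from the question:

```python
import pandas as pd

d = {'var1': [1, 2, 3, 4], 'var2': [5, 6, 7, 8]}
df = pd.DataFrame(data=d)

# keep a random 50% of var1; the dropped positions become NaN on
# reassignment because pandas aligns on the index
df['var1'] = df['var1'].sample(frac=0.5, random_state=0)

print(df)
print(df['var1'].isna().sum())  # 2 of the 4 values are now missing
```

Whatever the seed, exactly half of var1 ends up NaN; the seed only controls which rows are kept.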
Upvotes: 10
Reputation: 3290
(Note: I created this before you posted your mcve. I can edit it to include your starting code.)
Here is a solution:
import pandas as pd
import numpy as np
df = pd.DataFrame({'x': np.random.random(20)})
length = len(df)
num = int(0.2 * length)  # number of values to replace

# choose distinct row positions; np.random.randint can repeat
# indices (and with high=length-1 it also excludes the last row),
# so use np.random.choice with replace=False instead
idx_replace = np.random.choice(length, num, replace=False)
df.loc[idx_replace, 'x'] = np.nan
print(df)
Output:
x
0 0.426642
1 NaN
2 NaN
3 0.869367
4 0.719778
5 NaN
6 0.944411
7 0.424733
8 0.246545
9 0.344444
10 0.810131
11 0.735028
12 NaN
13 0.707681
14 0.963711
15 0.420725
16 0.787127
17 0.618693
18 0.606222
19 0.022355
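The same idea generalizes to any column and fraction; here is a sketch of a hypothetical helper (the name drop_fraction is my own, not from the question), using np.random.choice so the replaced positions are guaranteed distinct:

```python
import numpy as np
import pandas as pd

def drop_fraction(df, col, frac, seed=None):
    """Set a random `frac` of the rows in column `col` to NaN, in place."""
    rng = np.random.default_rng(seed)
    num = int(frac * len(df))
    # replace=False guarantees `num` distinct row positions
    idx = rng.choice(len(df), size=num, replace=False)
    df.iloc[idx, df.columns.get_loc(col)] = np.nan
    return df

df = pd.DataFrame({'x': np.random.random(20)})
drop_fraction(df, 'x', 0.2, seed=42)
print(df['x'].isna().sum())  # exactly 4 of 20 values are missing
```

Because the positions are sampled without replacement, exactly int(frac * len(df)) values are removed every run, which matters when you are measuring accuracy as a function of the missing fraction.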
Upvotes: 3
Reputation: 21
https://chartio.com/resources/tutorials/how-to-check-if-any-value-is-nan-in-a-pandas-dataframe/
Skip down to the 'Count Missing Values in DataFrame' section; to count all missing values in the frame:
df.isnull().sum().sum()
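For this use case a per-column count (or fraction) is often more useful than the grand total; a small sketch on a frame with two values of var1 already set to NaN:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'var1': [1.0, np.nan, 3.0, np.nan],
                   'var2': [5, 6, 7, 8]})

print(df.isnull().sum())           # missing values per column
print(df.isnull().sum().sum())     # total missing values in the frame
print(df['var1'].isnull().mean())  # fraction missing in var1, here 0.5
```

This makes it easy to verify that the simulation actually removed the intended 20% (or 50%) before measuring model accuracy.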
Upvotes: 0