Reputation: 389
I am trying to generate a much larger sample from my existing sample data. For example, iris
has N = 150
and I want to scale it up to 4500 (1500 per species). An example is described in the post here: https://seslezak.github.io/IrisData/. I don't want to resample
or bootstrap;
I'm interested in generating new values using, for example, rnorm.
Here is what I have tried so far:
muSepal.Length = mean(iris$Sepal.Length)
sdSepal.Length = sd(iris$Sepal.Length)
muSepal.Width= mean(iris$Sepal.Width)
sdSepal.Width = sd(iris$Sepal.Width)
N = 5000
simulated_data = data.frame(Sepal.Length = rnorm(N, muSepal.Length, sdSepal.Length),
                            Sepal.Width = rnorm(N, muSepal.Width, sdSepal.Width))
Here I have pulled the values from the sample distribution, but I am struggling to understand how I can efficiently build the entire "new" dataset this way.
Upvotes: 1
Views: 1639
Reputation: 3923
As @d.b. pointed out a few hours ago, you face the choice of sampling your existing data or assuming it fits some theoretical distribution like rnorm
. It is pretty clear the author of the article you are emulating chose the latter. The summary of the new dataset clearly shows values that are not in the original iris
, and NA
s for some setosa Petal.Width, because in a large sample we're bound to go below 0 for a measurement.
Here's a quick and dirty set of code that you should be able to adapt to your own data.
set.seed(2020)
library(dplyr)
testing <- iris %>%
  group_by(Species) %>%
  summarise_at(vars(Sepal.Length:Petal.Width), list(mean = mean,
                                                    sd = sd)) %>%
  rowwise() %>%
  group_by(Species) %>%
  summarise(Sepal.Length = rnorm(1500,
                                 mean = Sepal.Length_mean,
                                 sd = Sepal.Length_sd),
            Sepal.Width = rnorm(1500,
                                mean = Sepal.Width_mean,
                                sd = Sepal.Width_sd),
            Petal.Length = rnorm(1500,
                                 mean = Petal.Length_mean,
                                 sd = Petal.Length_sd),
            Petal.Width = rnorm(1500,
                                mean = Petal.Width_mean,
                                sd = Petal.Width_sd)) %>%
  ungroup() %>% # drop the grouping before filtering
  filter_at(vars(Sepal.Length:Petal.Width), ~ . > .1) # eliminate ridiculously small or negative values
summary(testing)
#> Species Sepal.Length Sepal.Width Petal.Length
#> setosa :1368 Min. :3.784 Min. :1.719 Min. :0.8857
#> versicolor:1500 1st Qu.:5.168 1st Qu.:2.746 1st Qu.:1.6116
#> virginica :1500 Median :5.834 Median :3.014 Median :4.2998
#> Mean :5.855 Mean :3.047 Mean :3.8148
#> 3rd Qu.:6.443 3rd Qu.:3.322 3rd Qu.:5.2312
#> Max. :8.304 Max. :4.547 Max. :7.5825
#> Petal.Width
#> Min. :0.1001
#> 1st Qu.:0.3373
#> Median :1.3439
#> Mean :1.2332
#> 3rd Qu.:1.8460
#> Max. :3.0523
Someone more fluent than I am can likely do a better job, through pivot_longer or a custom function, of avoiding the four repetitive calls to rnorm
. It's up to you to look for unreasonable values and to justify why rnorm
is a good fit to your data.
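One way to avoid the four repetitive rnorm calls is a custom function applied per group with dplyr::group_modify; this is only a sketch, and simulate_group is my own name, not something from the original post:

```r
library(dplyr)

set.seed(2020)

# simulate n draws per numeric column, using that column's own mean and sd
simulate_group <- function(df, n = 1500) {
  as.data.frame(lapply(df, function(x) rnorm(n, mean(x), sd(x))))
}

testing2 <- iris %>%
  group_by(Species) %>%
  group_modify(~ simulate_group(.x)) %>%
  ungroup()

dim(testing2) # 4500 rows: Species plus the four measurement columns
```

Unlike the filtered version, this keeps all 4500 rows, so implausibly small or negative draws would still need screening.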
Adding a more complicated solution using MASS::mvrnorm
to account for the correlations that Remi mentions in his answer. Sorry, too lazy to think through better code; just brute-force repetition here.
library(dplyr)
# Get the covariance matrix by species
sigma.setosa <- iris %>%
  filter(Species == "setosa") %>%
  select(-Species) %>%
  cov()
sigma.versicolor <- iris %>%
  filter(Species == "versicolor") %>%
  select(-Species) %>%
  cov()
sigma.virginica <- iris %>%
  filter(Species == "virginica") %>%
  select(-Species) %>%
  cov()
# generate samples based on those covariance matricies
set.seed(2020)
setosa.rows <- MASS::mvrnorm(n = 1500,
                             mu = c(mean(iris$Sepal.Length), mean(iris$Sepal.Width),
                                    mean(iris$Petal.Length), mean(iris$Petal.Width)),
                             Sigma = sigma.setosa,
                             empirical = TRUE)
versicolor.rows <- MASS::mvrnorm(n = 1500,
                                 mu = c(mean(iris$Sepal.Length), mean(iris$Sepal.Width),
                                        mean(iris$Petal.Length), mean(iris$Petal.Width)),
                                 Sigma = sigma.versicolor,
                                 empirical = TRUE)
virginica.rows <- MASS::mvrnorm(n = 1500,
                                mu = c(mean(iris$Sepal.Length), mean(iris$Sepal.Width),
                                       mean(iris$Petal.Length), mean(iris$Petal.Width)),
                                Sigma = sigma.virginica,
                                empirical = TRUE)
# convert to dataframes
setosa.df <- data.frame(setosa.rows, Species = "setosa")
versicolor.df <- data.frame(versicolor.rows, Species = "versicolor")
virginica.df <- data.frame(virginica.rows, Species = "virginica")
# bind them and return Species to a factor
newiris <- rbind(setosa.df, versicolor.df, virginica.df)
newiris$Species <- factor(newiris$Species)
summary(newiris)
#> Sepal.Length Sepal.Width Petal.Length Petal.Width
#> Min. :4.669 Min. :1.759 Min. :3.183 Min. :0.820
#> 1st Qu.:5.598 1st Qu.:2.805 1st Qu.:3.637 1st Qu.:1.130
#> Median :5.848 Median :3.064 Median :3.761 Median :1.199
#> Mean :5.843 Mean :3.057 Mean :3.758 Mean :1.199
#> 3rd Qu.:6.083 3rd Qu.:3.306 3rd Qu.:3.878 3rd Qu.:1.267
#> Max. :6.969 Max. :4.288 Max. :4.342 Max. :1.578
#> Species
#> setosa :1500
#> versicolor:1500
#> virginica :1500
#>
#>
#>
summary(iris)
#> Sepal.Length Sepal.Width Petal.Length Petal.Width
#> Min. :4.300 Min. :2.000 Min. :1.000 Min. :0.100
#> 1st Qu.:5.100 1st Qu.:2.800 1st Qu.:1.600 1st Qu.:0.300
#> Median :5.800 Median :3.000 Median :4.350 Median :1.300
#> Mean :5.843 Mean :3.057 Mean :3.758 Mean :1.199
#> 3rd Qu.:6.400 3rd Qu.:3.300 3rd Qu.:5.100 3rd Qu.:1.800
#> Max. :7.900 Max. :4.400 Max. :6.900 Max. :2.500
#> Species
#> setosa :50
#> versicolor:50
#> virginica :50
#>
#>
#>
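The three near-identical mvrnorm blocks can be folded into one helper. This is only a sketch (simulate_species is my own name, not from the post); it also feeds each species its own mean vector via colMeans, whereas the brute-force version reuses the overall iris means for all three species:

```r
set.seed(2020)

# one species at a time: with empirical = TRUE, mvrnorm reproduces that
# species' mean vector and covariance matrix exactly
simulate_species <- function(sp, n = 1500) {
  sub <- iris[iris$Species == sp, 1:4]
  draws <- MASS::mvrnorm(n = n, mu = colMeans(sub), Sigma = cov(sub),
                         empirical = TRUE)
  data.frame(draws, Species = sp)
}

newiris2 <- do.call(rbind, lapply(levels(iris$Species), simulate_species))
newiris2$Species <- factor(newiris2$Species)
```

With this version, the simulated setosa stay centered on the real setosa measurements instead of the pooled means.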
Upvotes: 2
Reputation: 1714
Your question is quite clear, but I don't know if what I will write in this post is true or not.
The easiest way to do it is to bootstrap your sample, using random repetition of your observations, like this:
SimIris <- iris[sample(1:150, 5000, replace = T),]
But the way you present your problem made me wonder how we can generate new observations without repetition.
The idea is to use the classical statistical framework: consider a response variable Y
and a design matrix X
of independent variables. You need to find a function f
such that:
Y = f(X) + eps
Once you have it, you only need to simulate an X
that is not too bizarre. In your case, however, you need to take care of the dependency between variables, which complicates the story a bit. We will make the assumption, which is wrong, that the variables are independent. (The field of probability theory that deals with such dependency is that of copulas.)
So the plan is:
1. Find a good approximation of f;
2. Simulate X
using basic probability theory: we suppose that each variable is independent and Gaussian. If you compute the correlations and histograms, you will see that this is wrong.
library(randomForest)
data("iris")
# your model
rf <- randomForest(Species ~ ., data = iris)
# you simulate X
simulate_wrong <- function(X, n) {
  rnorm(n, mean = mean(X), sd = sd(X))
}
irisSim <- apply(iris[, -ncol(iris)], 2, simulate_wrong, n = 5000)
# your Y
SpeciesSim <- predict(rf, newdata = irisSim)
# Sanity check: we absolutely need to take care of the dependency inside the X variables
table(SpeciesSim)
setosa versicolor virginica
1319 2333 1348
table(iris$Species)
setosa versicolor virginica
50 50 50
We simulate a data set with far too many versicolor
; we need to take care of the correlation structure of X
. Maybe for an edit later.
For information, the correlation table:
Sepal.Length Sepal.Width Petal.Length Petal.Width
Sepal.Length 1.00 -0.12 0.87 0.82
Sepal.Width -0.12 1.00 -0.43 -0.37
Petal.Length 0.87 -0.43 1.00 0.96
Petal.Width 0.82 -0.37 0.96 1.00
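As a sketch of that later edit (one possible approach, not necessarily the best), MASS::mvrnorm can draw an X that preserves the correlation structure above before predicting Y with the forest:

```r
library(randomForest)

set.seed(2020)

rf <- randomForest(Species ~ ., data = iris)

# simulate X from a multivariate normal fitted on the full numeric data,
# so the correlation table above is (approximately) preserved
X <- iris[, 1:4]
irisSimCor <- as.data.frame(MASS::mvrnorm(5000, mu = colMeans(X), Sigma = cov(X)))

table(predict(rf, newdata = irisSimCor))
```

The class balance should come out far closer to even than with the independent-columns simulation, though a single multivariate normal still ignores the per-species structure.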
Good luck.
Upvotes: 2