Reputation: 260
I have two samples from two different sites. The parameter I am interested in is discrete (frequencies). I ran simulations for both sites, so I know the probabilities of a random distribution for each site. From these simulations I also know that the deviation of my parameter from its mean is not normally distributed, so I went for a non-parametric test. I used the one-sample Kolmogorov-Smirnov test to check whether the samples might derive from these random distributions (example data, not real):
sample1 <- rep(1:5, c(25, 12, 12, 0, 1))            # observed frequencies at site 1
rand.prob1 <- c(0.51, 0.28, 0.111, 0.08, 0.019)     # simulated null probabilities for site 1
StepProb1 <- stepfun(0:4, c(0, cumsum(rand.prob1)), right = TRUE)  # null CDF as a step function
dgof::ks.test(sample1, StepProb1)                   # one-sample KS test against the discrete null
sample2 <- rep(1:5, c(19, 13, 10, 5, 3))            # observed frequencies at site 2
rand.prob2 <- c(0.61, 0.18, 0.14, 0.05, 0.02)       # simulated null probabilities for site 2
StepProb2 <- stepfun(0:4, c(0, cumsum(rand.prob2)), right = TRUE)  # null CDF as a step function
dgof::ks.test(sample2, StepProb2)                   # one-sample KS test against the discrete null
As a next step I want to check whether the samples from both sites might derive from the same distribution. Both implementations of the KS test (packages stats and dgof) issue a warning because my samples contain ties:
stats::ks.test(sample1, sample2)
dgof::ks.test(sample1, sample2)
If I understand Dufour and Farhat (2001) correctly, there is a way to calculate exact p-values through tie-breaking via Monte Carlo simulations. And if I understand the dgof documentation correctly, its implementation of Monte Carlo simulation only works for the one-sample test.
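For the one-sample case above that would look something like this (if I read the documentation right, simulate.p.value and B control the Monte Carlo part in dgof::ks.test):
dgof::ks.test(sample1, StepProb1, simulate.p.value = TRUE, B = 10000)  # Monte Carlo p-value, one-sample only
dgof::ks.test(sample2, StepProb2, simulate.p.value = TRUE, B = 10000)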
So my question: Does anybody know how to calculate exact p-values in R for a two-sample Kolmogorov-Smirnov test applied to a discrete variable when ties exist?
Or alternatively (though not specifically related to R): if nobody knows how to do this with a tolerable workload, I would go for the uncorrected p-values and, as a consequence, discuss the results with care. Since those p-values are below 0.0001, I'm actually not overly concerned about it. But what do I know... Do you think this is acceptable, or am I making a grave mistake in this case?
Thanks in advance; I appreciate that you read this far.
Upvotes: 6
Views: 3558
Reputation: 1259
As mentioned in the comment, the function ks.boot from the Matching package implements a bootstrap Kolmogorov-Smirnov test, i.e. a Monte Carlo simulation whose number of resamplings is set with the nboots parameter. I think that will give you what you need.
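A minimal sketch with the example data from the question (ks.boot.pvalue holds the bootstrapped p-value, ks the underlying ks.test result):
library(Matching)
sample1 <- rep(1:5, c(25, 12, 12, 0, 1))
sample2 <- rep(1:5, c(19, 13, 10, 5, 3))
set.seed(1)                                 # make the resampling reproducible
res <- ks.boot(sample1, sample2, nboots = 10000)
res$ks.boot.pvalue                          # bootstrap p-value
res$ks                                      # plain two-sample ks.test output (D statistic)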
Upvotes: 5
Reputation: 21507
I don't know whether you can apply KS here at all.
Kolmogorov-Smirnov is a non-parametric test and only works for continuous x and y data. I guess your sample1 and sample2 are not continuous "enough". Quoting ?stats::ks.test:
If y is numeric, a two-sample test of the null hypothesis that x and y were drawn from the same continuous distribution is performed.
Solution: try a Chi-Square Goodness-of-Fit Test in R; you can do this with ?chisq.test.
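For example, with the counts from the question (simulate.p.value gives a Monte Carlo p-value, which also helps with the small expected counts in the last categories):
obs1 <- c(25, 12, 12, 0, 1)                 # observed counts at site 1
obs2 <- c(19, 13, 10, 5, 3)                 # observed counts at site 2
rand.prob1 <- c(0.51, 0.28, 0.111, 0.08, 0.019)
# goodness of fit of site 1 against its simulated null distribution
chisq.test(obs1, p = rand.prob1, simulate.p.value = TRUE, B = 10000)
# homogeneity of the two sites: test on the 2 x 5 table of counts
chisq.test(rbind(obs1, obs2), simulate.p.value = TRUE, B = 10000)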
Upvotes: 3