layces

Reputation: 161

Getting significance level, alpha, from KS test results?

I am trying to find the significance level (alpha) of my Kolmogorov-Smirnov test results, so that I can eventually get a confidence level, and I feel like I'm going crazy because I haven't found this explained anywhere in a way I understand.

I have sample data and I want to see whether it comes from one of four probability distributions: Cauchy, Gaussian, Student's t, or Laplace. (I am not doing a two-sample test.)

Here is sample code for Cauchy:

### Cauchy Distribution Function
data = [-1.058, 1.326, -4.045, 1.466, -3.069, 0.1747, 0.6305, 5.194, 0.1024, 1.376, -5.989, 1.024, 2.252, -1.451, -5.041, 1.542, -3.224, 1.389, -2.339, 4.073, -1.336, 1.081, -2.573, 3.788, 2.26, -0.6905, 0.9064, -0.7214, -0.3471, -1.152, 1.904, 2.082, -2.471, 0.6434, -1.709, -1.125, -1.607, -1.059, -1.238, 6.042, 0.08664, 2.69, 1.013, -0.7654, 2.552, 0.7851, 0.5365, 4.351, 0.9444, -2.056, 0.9638, -2.64, 1.165, -1.103, -1.624, -1.082, 3.615, 1.709, 2.945, -5.029, -3.57, 0.6126, -2.88, 0.4868, 0.4222, -0.2062, -1.337, -0.326, -2.784, 6.724, -0.1316, 4.681, 6.839, -1.987, -5.372, 1.522, -2.347, 0.4531, -1.154, -3.631, 0.426, -4.271, 1.687, -1.612, -1.438, 0.8777, 0.06759, 0.6114, -1.296, 0.07865, -1.104, -1.454, -1.62, -1.755, 0.7868, -3.312, 1.054, -2.183, -7.066, -0.04661, 1.612, 1.441, -1.768, -0.2443, -0.7033, -1.16, 0.2529, 0.2441, -1.962, 0.568, 1.568, 8.385, 0.7192, -1.084, 0.9035, 3.376, -0.7172, -0.1221, 3.267, 0.4064, -0.4894, -2.001, 1.63, -2.891, 0.6244, 2.381, -1.037, -1.705, -0.5223, -0.2912, 1.77, -3.792, 0.1716, 4.121, -0.9119, -0.1166, 5.694, -5.904, 0.5485, -2.788, 2.582, -1.553, 1.95, 3.886, 1.066, -0.475, 0.5701, -0.9367, -2.728, 4.588, -5.544, 1.373, 1.807, 2.919, 0.8946, 0.6329, -1.34, -0.6154, 4.005, 0.204, -1.201, -4.912, -4.766, 0.0554, 3.484, -2.819, -5.131, 2.108, -1.037, 1.603, 2.027, 0.3066, -0.3446, -1.833, -2.54, 2.828, 4.763, 0.9926, 2.504, -1.258, 0.4298, 2.536, -1.214, -3.932, 1.536, 0.03379, -3.839, 4.788, 0.04021, -0.2701, -2.139, 0.1339, 1.795, -2.12, 5.558, 0.8838, 1.895, 0.1073, 2.011, -1.267, -1.08, -1.12, -1.916, 1.524, -1.883, 5.348, 0.115, -1.059, -0.4772, 1.02, -0.4057, 1.822, 4.011, -3.246, -7.868, 2.445, 2.271, 0.5377, 0.2612, 0.7397, -1.059, 1.177, 2.706, -4.805, -0.7552, -4.43, -0.4607, 1.536, -4.653, -0.5952, 0.8115, -0.4434, 1.042, 1.179, -0.1524, 0.2753, -1.986, -2.377, -1.21, 2.543, -2.632, -2.037, 4.011, 1.98, -2.589, -4.9, 1.671, -0.2153, -6.109, 2.497]
import scipy.stats as ss

def C(data):
    """Sweep the Cauchy scale parameter (gamma) and keep the best K-S fit."""
    stuff = []
    # vary gamma; scipy's cauchy takes (loc, scale), so pass loc=0 explicitly --
    # a single argument in args would be interpreted as loc, not scale
    for scale in range(1, 101):
        ks_statistic, pvalue = ss.kstest(data, "cauchy", args=(0, scale))
        stuff.append((ks_statistic, pvalue, scale))
    # the smallest K-S statistic corresponds to the smallest maximum distance
    # between the empirical CDF and the candidate Cauchy CDF
    bestks = min(c[0] for c in stuff)
    bestrow = [row for row in stuff if row[0] == bestks]
    return bestrow

I am trying to fit this distribution to my data and return the scale parameter (gamma) that gives the best fit to a Cauchy distribution, along with the corresponding KS statistic and p-value. I thought this would be done by finding the minimum KS statistic, i.e. the curve with the smallest maximum distance between the empirical CDF and the candidate distribution's CDF. I realize, though, that I still need to find "alpha" so that I can state the probability that the sample data comes from a Cauchy distribution with the scale/gamma value I found.
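For reference, calling the function looks something like this (illustrative only; `ss` is `scipy.stats` as in the code above, and `data` is the sample defined there):

best = C(data)                            # list of (ks_statistic, pvalue, scale) rows
ks_statistic, pvalue, scale = best[0]
print("best scale:", scale, "K-S:", ks_statistic, "p-value:", pvalue)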

I have consulted many sources that try to explain how to find "alpha", but I have no clue how to do this in my code.

Thank you for any help and insight!

Upvotes: 1

Views: 846

Answers (1)

Bill Bell

Reputation: 21663

I think this question is actually outside the scope of SO because it involves statistics. You would probably be better off asking on, say, Cross Validated. However, let me offer one or two remarks.

The K-S test is used for testing whether a given set of data has arisen from a given, fully specified distribution function. (Even for this purpose it might not be optimal.) It's not intended, as far as I know, as a measure of fit amongst alternatives.
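To make "fully specified" concrete in scipy terms, here is a rough sketch: estimate (or fix from theory) the location and scale for each candidate, and only then hand those parameters to kstest. This is just an illustration of the mechanics, not an endorsement of the procedure, and note that estimating the parameters from the same sample makes the usual K-S p-values only approximate.

import scipy.stats as ss

# Fit each candidate's parameters from the data, then test the sample against
# that fully specified distribution. "data" is the sample from the question.
for name in ["cauchy", "norm", "t", "laplace"]:
    dist = getattr(ss, name)
    params = dist.fit(data)          # (loc, scale), or (df, loc, scale) for t
    ks_statistic, pvalue = ss.kstest(data, name, args=params)
    print(name, params, ks_statistic, pvalue)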

To make inferences about probabilities one must have a viable probability model for the data in the first place. In this case, what is the space of alternatives and how are probabilities assigned to them under the null and alternative hypotheses?

Now, to get to that unhelpful comment that I offered earlier. Thanks for being so tactful about it! This is what I was trying to express.

You try scales from 1 to 100 in unit steps; I wanted to point out that scales less than one produce curious results. Now I see several close fits, especially once the p-values are considered; there's nothing to distinguish them from the fit at scale=2. Here's a plot.

[plot: various fits]

Each triple gives (scale, K-S, p).
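A plot along these lines can be produced with something like the following sketch (matplotlib and numpy assumed; the scales used here are only examples, including one below 1, and this is not the exact code behind the figure above):

import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt

xs = np.linspace(min(data), max(data), 500)
sorted_data = np.sort(data)
ecdf = np.arange(1, len(data) + 1) / len(data)

plt.step(sorted_data, ecdf, where="post", label="empirical CDF")
for scale in [0.5, 1, 2, 5]:          # example scales, including one < 1
    ks, p = ss.kstest(data, "cauchy", args=(0, scale))
    plt.plot(xs, ss.cauchy.cdf(xs, 0, scale),
             label="(%g, %.3f, %.3f)" % (scale, ks, p))
plt.legend(title="(scale, K-S, p)")
plt.show()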

The main thing might be, what do you want from your data?

Upvotes: 1
