Sahil M

Reputation: 1847

Fitting data points to a cumulative distribution

I am trying to fit a gamma distribution to my data points, and I can do that using the code below.

import scipy.stats as ss
import numpy as np
dataPoints = np.arange(0,1000,0.2)
fit_alpha, fit_loc, fit_beta = ss.gamma.fit(dataPoints, floc=0)

I want to reconstruct a larger distribution using many such small gamma distributions (the larger distribution is irrelevant for the question, only justifying why I am trying to fit a cdf as opposed to a pdf).

To achieve that, I want to fit a cumulative distribution function, rather than a pdf, to my smaller distribution data. More precisely, I want to fit the data to only a part of the cumulative distribution.

For example, I want to fit the data only up to the point where the cumulative distribution function (with a certain scale and shape) reaches 0.6.
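Something along these lines is what I have in mind (a rough sketch using `scipy.optimize.curve_fit` on the empirical cdf rather than `fit()`, with made-up sample data; this may well not be the right approach):

```python
import numpy as np
import scipy.stats as ss
from scipy.optimize import curve_fit

# Made-up sample data and its empirical cdf
data = np.sort(ss.gamma.rvs(a=2.0, scale=3.0, size=500, random_state=0))
ecdf = np.arange(1, len(data) + 1) / len(data)

# Keep only the part of the empirical cdf below 0.6
mask = ecdf <= 0.6
x, y = data[mask], ecdf[mask]

# Fit shape and scale of gamma.cdf (loc fixed at 0) to that partial cdf
popt, _ = curve_fit(lambda x, a, scale: ss.gamma.cdf(x, a, loc=0, scale=scale),
                    x, y, p0=[1.0, 1.0])
fit_alpha, fit_beta = popt
```

Here the fit only sees the lower 60% of the empirical cdf, which is the kind of partial fit I am after.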

Any thoughts on using fit() for this purpose?

Upvotes: 24

Views: 7498

Answers (1)

Azmy Rajab

Reputation: 454

I understand that you are trying to reconstruct your cdf piecewise with several small gamma distributions, each with a different scale and shape parameter capturing the 'local' regions of your distribution.

This probably makes sense if your empirical distribution is multi-modal or difficult to summarize with a single 'global' parametric distribution.

I don't know if you have specific reasons for fitting several gamma distributions in particular, but if your goal is to fit a distribution that is relatively smooth and captures your empirical cdf well, perhaps you could take a look at Kernel Density Estimation. It is essentially a non-parametric way to fit a distribution to your data.


http://scikit-learn.org/stable/modules/density.html

http://en.wikipedia.org/wiki/Kernel_density_estimation

For example, you can try out a Gaussian kernel and change the bandwidth parameter to control how smooth your fit is. A bandwidth that is too small leads to an unsmooth ('overfitted') result [high variance, low bias]. A bandwidth that is too large results in a very smooth fit but with high bias.

from sklearn.neighbors import KernelDensity
# KernelDensity expects a 2-D array of shape (n_samples, n_features),
# so a 1-D sample needs reshaping first
kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(dataPoints[:, None])

A good way to select a bandwidth parameter that balances the bias-variance tradeoff is to use cross-validation. The high-level idea is that you partition your data, run the analysis on the training set, and 'validate' on the test set; this prevents overfitting to the data.
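A minimal sketch of that idea (the grid of candidate bandwidths and the toy bimodal data are arbitrary choices of mine): `GridSearchCV` scores each candidate bandwidth by the log-likelihood of the held-out fold under the fitted KDE.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

# Toy bimodal 1-D sample, reshaped to (n_samples, n_features)
rng = np.random.RandomState(0)
X = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])[:, None]

# Cross-validate the bandwidth: each candidate is scored by the
# total log-likelihood of the held-out fold under the fitted KDE
grid = GridSearchCV(KernelDensity(kernel='gaussian'),
                    {'bandwidth': np.logspace(-1, 1, 20)},
                    cv=5)
grid.fit(X)
best_bandwidth = grid.best_params_['bandwidth']
```

The winning bandwidth can then be plugged straight into a final `KernelDensity` fit on all the data.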

Fortunately, sklearn also ships a nice example of choosing the best bandwidth of a Gaussian kernel using cross-validation, which you can borrow some code from:

http://scikit-learn.org/stable/auto_examples/neighbors/plot_digits_kde_sampling.html

Hope this helps!

Upvotes: 4
