Reputation: 490
I want to generate a random float in the range [0, 1) from a one-tailed distribution that looks like this
The above is the chi-squared distribution. I can only find resources on drawing from a uniform distribution in a range, however.
Upvotes: 3
Views: 991
Reputation: 301
If you want to draw a sample of size N = 5 from a ChiSquare distribution, you can try the OpenTURNS library:
import openturns as ot
# define your distribution. Here, nu = 3. (nu is a float > 0)
distribution = ot.ChiSquare(3)
# draw a sample of size N from `distribution`
N=5
sample = distribution.getSample(N)
A complete list of distributions is available here
sample has an OpenTURNS format, but you can manipulate it as a NumPy array:
import numpy as np

s = np.array(sample)
print(s)
>>>array([[1.65299759],
[6.78405097],
[0.88528975],
[0.87900211],
[0.25031129]])
You can also easily plot the distribution PDF just by calling distribution.drawPDF().
Customizations:
from openturns.viewer import View
graph = distribution.drawPDF()
title = str(distribution)[:100].split('\n')[0]
graph.setTitle(title)
View(graph, add_legend=False)
Upvotes: 1
Reputation: 50668
You could use a Beta distribution, e.g.
import numpy as np
np.random.seed(2018)
np.random.beta(2, 5, 10)
#array([ 0.18094173, 0.26192478, 0.14055507, 0.07172968, 0.11830031,
# 0.1027738 , 0.20499125, 0.23220654, 0.0251325 , 0.26324832])
Here we draw numbers from a Beta(2, 5) distribution.
The Beta distribution is a very versatile and fundamental distribution in statistics; without going into any details, by changing the parameters alpha and beta you can make the distribution left-skewed, right-skewed, uniform, symmetric, etc. The distribution is defined on the interval [0, 1], which is consistent with what you're after.
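As a quick illustration of that flexibility (a small sketch, not from the original answer; the parameter choices below are just examples), the sample skewness makes the shape differences visible:
import numpy as np
from scipy.stats import skew

np.random.seed(2018)
# Illustrative (alpha, beta) choices and the shapes they give on [0, 1]:
# (1, 1) -> uniform, (2, 2) -> symmetric, (2, 5) -> right-skewed, (5, 2) -> left-skewed
for a, b in [(1, 1), (2, 2), (2, 5), (5, 2)]:
    draws = np.random.beta(a, b, 100000)
    print("Beta(%d, %d): sample skewness %+.2f" % (a, b, skew(draws)))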
While the Kumaraswamy distribution certainly has more benign algebraic properties than the Beta distribution, I would argue that the latter is the more fundamental distribution; for example, in Bayesian inference, the Beta distribution often enters as the conjugate prior when dealing with binomial(-like) processes.
Secondly, the mean and variance of the Beta distribution can be expressed quite simply in terms of the parameters alpha, beta; for example, the mean is simply given by alpha / (alpha + beta).
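To make that concrete (a minimal check, not part of the original answer), the Beta(2, 5) sample drawn above should have a mean near 2 / (2 + 5) ≈ 0.286:
import numpy as np

np.random.seed(2018)
draws = np.random.beta(2, 5, 100000)
# Theoretical mean is alpha / (alpha + beta) = 2 / 7
print(draws.mean())  # close to 0.2857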
Lastly, from a computational and statistical inference point of view, fitting a Beta distribution to data is usually done in a few lines of code in Python (or R), where most Python libraries like numpy and scipy already include methods to deal with the Beta distribution.
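As a sketch of such a fit (using scipy's standard scipy.stats.beta.fit interface; the data here is just the simulated Beta(2, 5) sample from above):
import numpy as np
from scipy import stats

np.random.seed(2018)
data = np.random.beta(2, 5, 100000)
# Fix location to 0 and scale to 1 so only alpha and beta are estimated.
alpha_hat, beta_hat, loc, scale = stats.beta.fit(data, floc=0, fscale=1)
print(alpha_hat, beta_hat)  # should come out close to 2 and 5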
Upvotes: 2
Reputation: 20080
I would lean toward a distribution which is naturally bounded on the [0...1] interval (or any other [a...b] interval, which could be rescaled later), like in @MauritsEvers' answer. The reason is that you know the distribution and can derive (or read up on) some interesting facts about it. If you take chi2 and truncate it, it is unclear how to argue about the properties of what you've got.
Personally, I prefer the Kumaraswamy distribution over the Beta distribution; the expressions for the mean, mode, variance etc. are a lot simpler.
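For example (a hedged sketch using the standard closed-form expressions, not code from the kumaraswamy package), the raw moments of Kumaraswamy(a, b) are m_n = b * B(1 + n/a, b), which gives the mean, variance and mode in a few lines:
from scipy.special import beta as beta_fn  # Euler beta function

a, b = 2.0, 5.0
# Raw moments: m_n = b * B(1 + n/a, b)
m1 = b * beta_fn(1 + 1 / a, b)
m2 = b * beta_fn(1 + 2 / a, b)
mean = m1
variance = m2 - m1 ** 2
mode = ((a - 1) / (a * b - 1)) ** (1 / a)  # valid for a >= 1, b >= 1, (a, b) != (1, 1)
print(mean, variance, mode)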
Just install it
pip install kumaraswamy
and sample
from kumaraswamy import kumaraswamy
d = kumaraswamy(a=2.0, b=5.0)
q = d.rvs(10)
print(q)
will produce 10 numbers following the magenta curve in the Wikipedia article.
If you don't want Beta or Kumaraswamy, there is e.g. the Logit-normal distribution and quite a few others.
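For completeness, drawing Logit-normal samples is just a logistic transform of normal draws (a minimal sketch; mu and sigma below are illustrative):
import numpy as np
from scipy.special import expit  # logistic function 1 / (1 + exp(-x))

np.random.seed(2018)
mu, sigma = 0.0, 1.0  # illustrative parameters
# The logistic transform of a normal sample lands strictly inside (0, 1).
q = expit(np.random.normal(mu, sigma, 10))
print(q)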
Upvotes: 2
Reputation: 2095
Look at the numpy.random.chisquare method in the numpy library.
numpy.random.chisquare(df, size=None)
>>> np.random.chisquare(2,4)
array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272])
Upvotes: 1