amartya18x

Reputation: 35

Choose a number from a given pmf in theano

Say I have an array p = [0.27, 0.23, 0.1, 0.15, 0.2, 0.05]. Let p be the probability mass function of a random variable X. Now, I am writing Theano code in which p is generated at each iteration, and I also have n weight matrices. (Here, n = 6.)

Now, in each iteration I want to select one of these weight matrices for further propagation. Could someone help with how to write this piece of code? I am not sure how to write it so that backpropagation still works (i.e. so that the gradients are computed properly).

Note that all the W_i, as well as p itself, are model parameters.
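For intuition, the selection step on its own can be sketched outside Theano. This is a minimal NumPy sketch; the matrix shape (3, 3) and the names Ws and W_selected are made up for illustration and are not from the question:

```python
import numpy as np

# Sample an index from the pmf p and pick the corresponding weight matrix.
p = np.array([0.27, 0.23, 0.1, 0.15, 0.2, 0.05])
n = len(p)

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((3, 3)) for _ in range(n)]  # stand-ins for W1..W6

i = rng.choice(n, p=p)   # index drawn according to the pmf
W_selected = Ws[i]       # matrix used for this iteration
```

The hard part, as the question notes, is not the sampling itself but making the gradient flow through the selection.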

Edit

    W1,W2,W3,W4,W5,W6,x,eps = T.dmatrices("W1","W2","W3","W4","W5","W6","x","eps")
    b1,b2,b3,b4,b5,b6,pi = T.dcols("b1","b2","b3","b4","b5","b6","pi")

    h_encoder = T.tanh(T.dot(W1,x) + b1)

    rng = T.shared_randomstreams.RandomStreams(seed=124)
    i = rng.choice(size=(1,), a=self.num_model, p=T.nnet.softmax(pi))

    mu_encoder = T.dot(W2[i[0]*self.dimZ:(1+i[0])*self.dimZ].nonzero(),h_encoder) + b2[i[0]*self.dimZ:(1+i[0])*self.dimZ].nonzero()
    log_sigma_encoder = (0.5*(T.dot(W3[i[0]*self.dimZ:(1+i[0])*self.dimZ].nonzero(),h_encoder))) + b3[i[0]*self.dimZ:(1+i[0])*self.dimZ].nonzero()

    z = mu_encoder + T.exp(log_sigma_encoder)*eps

and my grad variables are gradvariables = [W1,W2,W3,W4,W5,b1,b2,b3,b4,b5,pi]. Ignore the other variables, as they are defined elsewhere. Now, I get the following error:

    Traceback (most recent call last):
      File "trainmnist_mixture.py", line 55, in <module>
        encoder.createGradientFunctions()
      File "/home/amartya/Variational-Autoencoder/Theano/VariationalAutoencoder_mixture.py", line 118, in createGradientFunctions
        derivatives = T.grad(logp,gradvariables)
      File "/usr/lib/python2.7/site-packages/Theano-0.6.0-py2.7.egg/theano/gradient.py", line 543, in grad
        grad_dict, wrt, cost_name)
      File "/usr/lib/python2.7/site-packages/Theano-0.6.0-py2.7.egg/theano/gradient.py", line 1273, in _populate_grad_dict
        rval = [access_grad_cache(elem) for elem in wrt]
      File "/usr/lib/python2.7/site-packages/Theano-0.6.0-py2.7.egg/theano/gradient.py", line 1233, in access_grad_cache
        term = access_term_cache(node)[idx]
      File "/usr/lib/python2.7/site-packages/Theano-0.6.0-py2.7.egg/theano/gradient.py", line 944, in access_term_cache
        output_grads = [access_grad_cache(var) for var in node.outputs]
      File "/usr/lib/python2.7/site-packages/Theano-0.6.0-py2.7.egg/theano/gradient.py", line 1243, in access_grad_cache
        term.type.why_null)
    theano.gradient.NullTypeGradError: tensor.grad encountered a NaN. This variable is Null because the grad method for input 0 (Subtensor{int64:int64:}.0) of the Nonzero op is mathematically undefined

Upvotes: 1

Views: 257

Answers (1)

Daniel Renshaw

Reputation: 34177

You can use the choice method of a RandomStreams instance. More on random numbers in Theano can be found in the documentation here and here.

Here's an example:

import numpy
import theano
import theano.tensor as tt
import theano.tensor.shared_randomstreams

n = 6
alpha = [1] * n  # symmetric Dirichlet parameters used to draw a random pmf
seed = 1

# n weight matrices stacked into a single shared (n, 2, 2) tensor
w = theano.shared(numpy.random.randn(n, 2, 2).astype(theano.config.floatX))
# a pmf over the n matrices
p = theano.shared(numpy.random.dirichlet(alpha).astype(theano.config.floatX))

rng = tt.shared_randomstreams.RandomStreams(seed=seed)
# draw one index in [0, n) according to p; w[i] is the selected matrix
i = rng.choice(size=(1,), a=n, p=p)
f = theano.function([], [p, i, w[i]])
print f()
print f()
print f()
print f()
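Note that the sampled index is discrete, so the gradient of a hard selection with respect to p is mathematically undefined, which is what the NullTypeGradError in the question points at. One common workaround (an editorial suggestion, not part of the original answer) is to replace the hard choice with its expectation: a probability-weighted mixture of the matrices, which is smooth in p. A NumPy sketch of the idea, with illustrative shapes matching the answer's (n, 2, 2) tensor:

```python
import numpy as np

# Instead of a hard, non-differentiable pick w[i], mix the n matrices by
# their probabilities. sum_k p[k] * w[k] is the expectation of the random
# selection and is a smooth function of p, so gradients w.r.t. p exist.
n = 6
rng = np.random.default_rng(1)
w = rng.standard_normal((n, 2, 2))

logits = rng.standard_normal(n)
p = np.exp(logits) / np.exp(logits).sum()  # softmax, like softmax(pi) in the question

w_mix = np.tensordot(p, w, axes=1)         # shape (2, 2): soft selection
```

In Theano terms this amounts to summing p[k] * W_k instead of indexing with the sampled i, at the cost of evaluating all n branches.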

Upvotes: 1
