Reputation: 81
I would like to use the backward cumulative sum function:
def _backwards_cumsum(x, length, batch_size):
    upper_triangular_ones = np.float32(np.triu(np.ones((length, length))))
    repeated_tri = np.float32(np.kron(np.eye(batch_size), upper_triangular_ones))
    return tf.matmul(repeated_tri,
                     tf.reshape(x, [length, 1]))
However length is a placeholder:
length = tf.placeholder("int32", name='xx')
So it gets a new value on each run, and only then should the calculation in _backwards_cumsum take place.
When I try to run the function, I get an error:
TypeError: 'Tensor' object cannot be interpreted as an index
The full traceback:
TypeError Traceback (most recent call last)
<ipython-input-561-970ae9e96aa1> in <module>()
----> 1 rewards = _backwards_cumsum(tf.reshape(tf.reshape(decays,[-1,1]) * tf.sigmoid(disc_pred_gen_ph), [-1]), _maxx, batch_size)
<ipython-input-546-5c6928fac357> in _backwards_cumsum(x, length, batch_size)
1 def _backwards_cumsum(x, length, batch_size):
2
----> 3 upper_triangular_ones = np.float32(np.triu(np.ones((length, length))))
4 repeated_tri = np.float32(np.kron(np.eye(batch_size), upper_triangular_ones))
5 return tf.matmul(repeated_tri,
/Users/onivron/anaconda/envs/tensorflow/lib/python2.7/site-packages/numpy/core/numeric.pyc in ones(shape, dtype, order)
190
191 """
--> 192 a = empty(shape, dtype, order)
193 multiarray.copyto(a, 1, casting='unsafe')
194 return a
Here _maxx is the same placeholder as length above.
Is there any workaround for this?
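To make the problem concrete, the failure can be reproduced in isolation (a minimal sketch, assuming TensorFlow 1.x, where tf.placeholder is available):
import numpy as np
import tensorflow as tf

length = tf.placeholder("int32", name='xx')
# np.ones needs concrete Python integers for the shape; a placeholder has no
# value at graph-construction time, so this raises the same TypeError.
np.ones((length, length))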
Upvotes: 0
Views: 884
Reputation: 4918
The error comes from the tensor object that you are unknowingly passing where NumPy expects a plain integer: length. The best way to use NumPy functionality within TensorFlow is tf.py_func.
# Define a new function that depends only on numpy / non-TensorFlow-graph objects
def get_repeated_tri(length, batch_size):
    upper_triangular_ones = np.float32(np.triu(np.ones((length, length))))
    repeated_tri = np.float32(np.kron(np.eye(batch_size), upper_triangular_ones))
    return repeated_tri

# Here length and batch_size must be non-tensor objects
# (the function returns float32, so declare tf.float32 as the py_func output dtype)
repeated_tri = tf.py_func(get_repeated_tri, [length, batch_size], tf.float32)
# there are also some size mismatches in your code around tf.matmul
def _backwards_cumsum(repeated_tri, x, length_, batch_size):
    return tf.matmul(repeated_tri, tf.reshape(x, [length_*batch_size, -1]))
length_ = tf.placeholder(tf.int32, name='length')
# also define length and batch_size as numpy/Python constants,
# and x as a TensorFlow tensor
some_tensor_out = _backwards_cumsum(repeated_tri, x, length_, batch_size)
some_tensor_out_ = sess.run(some_tensor_out, {length_: length})
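As an aside, if the goal is only the backward cumulative sum, tf.cumsum with reverse=True avoids building the triangular matrix and handles a dynamic length natively. A minimal sketch, assuming the per-batch segments of x are laid out contiguously (as in the kron-based version) and batch_size is a plain Python int; the name _backwards_cumsum_v2 is just illustrative:
import tensorflow as tf

def _backwards_cumsum_v2(x, batch_size):
    # one row per batch element; the per-batch length is inferred
    blocks = tf.reshape(x, [batch_size, -1])
    # backward cumulative sum along each row
    summed = tf.cumsum(blocks, axis=1, reverse=True)
    # back to a single column, matching the shape of the matmul-based result
    return tf.reshape(summed, [-1, 1])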
Upvotes: 1