Evan Pu

Reputation: 2187

tensorflow constant with variable size

I have a variable batch size, so all of my inputs are of the form

tf.placeholder(tf.float32, shape=(None, ...))

to accept the variable batch sizes. However, how might you create a constant value with variable batch size? The issue is with this line:

log_probs = tf.constant(0.0, dtype=tf.float32, shape=[None, 1])

It is giving me an error:

TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'

I'm sure it is possible to initialize a constant tensor with variable batch size, how might I do so?

I've also tried the following:

tf.constant(0.0, dtype=tf.float32, shape=[-1, 1])

I get this error:

ValueError: Too many elements provided. Needed at most -1, but received 1

Upvotes: 17

Views: 14620

Answers (2)

pangdan

Reputation: 61

Suppose you want to do something with log_probs. For example, you want to raise a tensor v to the power of the constant log_probs, and you want the shape of log_probs to vary with the shape of v.

v = tf.placeholder(tf.float32, shape=(None, 1))
log_probs = tf.constant(0.0, dtype=tf.float32, shape=[None, 1])
result = tf.pow(v, log_probs)

However, you cannot construct the constant log_probs that way. Instead, first construct the tf.constant with shape=[1]: log_prob = tf.constant(0.0, dtype=tf.float32, shape=[1]). Then use tf.map_fn() to apply the pow operation to each element of v.

v = tf.placeholder(tf.float32, shape=(None, 1))
log_prob = tf.constant(0.0, dtype=tf.float32, shape=[1])
result = tf.map_fn(lambda ele : tf.pow(ele, log_prob), v)
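For intuition, here is a plain-Python sketch of what the tf.map_fn call above computes: the scalar constant is applied as an exponent to each element along the first dimension (the placeholder values shown are illustrative stand-ins for a fed batch):

```python
# Stand-in for a batch fed into the (None, 1) placeholder `v`.
v = [[2.0], [3.0], [4.0]]
# The shape-[1] constant, as a Python scalar.
log_prob = 0.0

# tf.map_fn maps over the first dimension of `v`; here that means
# raising each element to the power `log_prob`.
result = [[x ** log_prob for x in row] for row in v]
# Every element becomes 1.0, since x ** 0.0 == 1.0.
```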

Upvotes: 0

mrry

Reputation: 126154

A tf.constant() has fixed size and value at graph construction time, so it probably isn't the right op for your application.

If you are trying to create a tensor with a dynamic size and the same (constant) value for every element, you can use tf.fill() and tf.shape() to create an appropriately-shaped tensor. For example, to create a tensor t that has the same shape as input and the value 0.5 everywhere:

input = tf.placeholder(tf.float32, shape=(None, ...))

# `tf.shape(input)` takes the dynamic shape of `input`.
t = tf.fill(tf.shape(input), 0.5)
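Conceptually, tf.fill builds a tensor whose shape is only known at run time, with every element set to one value; a small plain-Python sketch of that semantics (the helper name is illustrative, not part of the TensorFlow API):

```python
def fill(shape, value):
    # Recursively build a nested list of the given shape, with every
    # element equal to `value` -- the core idea behind tf.fill.
    if not shape:
        return value
    return [fill(shape[1:], value) for _ in range(shape[0])]

# The batch size is unknown at graph construction time; suppose it
# turns out to be 3 at run time, with 2 features per example.
t = fill([3, 2], 0.5)
```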

As Yaroslav mentions in his comment, you may also be able to use (NumPy-style) broadcasting to avoid materializing a tensor with dynamic shape. For example, if input has shape (None, 32) and t has shape (1, 32) then computing tf.mul(input, t) will broadcast t on the first dimension to match the shape of input.
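Since TensorFlow follows NumPy-style broadcasting rules here, the same behavior can be sketched directly in NumPy (the shapes below are illustrative):

```python
import numpy as np

# Stand-ins for the tensors above: `batch` has shape (4, 32) and
# `t` has shape (1, 32) with the constant value 0.5 everywhere.
batch = np.arange(4 * 32, dtype=np.float32).reshape(4, 32)
t = np.full((1, 32), 0.5, dtype=np.float32)

# Multiplication broadcasts `t` along the first dimension, so a
# (batch_size, 32) constant never needs to be materialized.
result = batch * t
```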

Upvotes: 39
