programmer

Reputation: 577

while_loop error in Tensorflow

I tried to use while_loop in TensorFlow, but when I try to return the target output from the callable in the while loop, it gives me an error because the shape grows on every iteration.

The output should contain (0 or 1) values based on the data values (the input array): if a data value is larger than 5, return 1, otherwise return 0. Each returned value must be appended to the output.
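For example, in plain NumPy the result I am after would look like this (just to illustrate what I want, not part of the TensorFlow code):

import numpy as np

data = np.random.randint(10, size=(30,))
# Desired output: 1.0 where the value is greater than 5, otherwise 0.0.
expected = (data > 5).astype(np.float32)
print(expected)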

This is the code:

import numpy as np
import tensorflow as tf

data = np.random.randint(10, size=(30))
data = tf.constant(data, dtype=tf.float32)

global output
output = tf.constant([], dtype=tf.float32)
i = tf.constant(0)
c = lambda i: tf.less(i, 30)


def b(i):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)), lambda: tf.constant(1.0), lambda: tf.constant([0.0]))
    output = tf.expand_dims(cond, axis=i-1)
    return i, output

r, out = tf.while_loop(c, b, [i])
print(out)
sess = tf.Session()
sess.run(out)

The error:

r, out = tf.while_loop(c, b, [i])

ValueError: The two structures don't have the same number of elements.

First structure (1 elements): [<tf.Tensor 'while/Identity:0' shape=() dtype=int32>]

Second structure (2 elements): [<tf.Tensor 'while/Add:0' shape=() dtype=int32>, <tf.Tensor 'while/ExpandDims:0' shape=<unknown> dtype=float32>]

I use tensorflow-1.1.3 and python-3.5

How can I change my code to give me the target result?

EDIT:

I edited the code based on @mrry's answer, but I still have an issue: the output is incorrect. The output should be the summation of the numbers.

a = tf.ones([10, 4])
print(a)
a = tf.reduce_sum(a, axis=1)
i = tf.constant(0)
c = lambda i, _: tf.less(i, 10)

def Smooth(x):
    return tf.add(x, 2)

summ = tf.constant(0.)

def b(i, _):
    global summ
    summ = tf.add(summ, tf.cast(Smooth(a[i]), tf.float32))
    i = tf.add(i, 1)
    return i, summ

r, smooth_l1 = tf.while_loop(c, b, [i, summ])

print(smooth_l1)

sess = tf.Session()
print(sess.run(smooth_l1))

The output is 6.0 (wrong); it should be the sum over all ten rows, i.e. 60.0.

Upvotes: 4

Views: 4815

Answers (2)

bigGuy ubuntu

Reputation: 1

If you see this error in a while_loop:

ValueError: The two structures don't have the same number of elements.

it means that the loop variables you pass into the loop and the values your body function returns have different structures.

I solved it by making sure that my body function returns the same structure of loop_vars that it receives; the condition function must also accept the same loop vars.

Here is some example code:


loop_vars = [i, loss, batch_size, smaller_str_lens]

def condition(*loop_vars):
    i = loop_vars[0]
    batch_size = loop_vars[2]
    return tf.less(i, batch_size)

def body(*loop_vars):
    i, loss, batch_size, smaller_str_lens = loop_vars
    tf.print("The loop passed here")
    # logic here
    i = tf.add(i, 1)
    return i, loss, batch_size, smaller_str_lens

loss = tf.while_loop(condition, body, loop_vars)[1]

The body function must return the loop vars, and the condition function must accept the loop vars.
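As a minimal, self-contained sketch of the same rule (the loop variables i and total here are made up purely for illustration, assuming TF 1.x graph mode):

import tensorflow as tf

i = tf.constant(0)
total = tf.constant(0.0)

def condition(i, total):
    return tf.less(i, 5)

def body(i, total):
    # The condition and the body both take the same two loop vars,
    # and the body returns the same two loop vars.
    return tf.add(i, 1), tf.add(total, 2.0)

i_final, total_final = tf.while_loop(condition, body, [i, total])

with tf.Session() as sess:
    print(sess.run(total_final))  # 10.0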

Upvotes: 0

mrry

Reputation: 126184

The tf.while_loop() function requires that the following four lists have the same length, and the same type for each element:

  • The list of arguments to the cond function (c in this case).
  • The list of arguments to the body function (b in this case).
  • The list of return values from the body function.
  • The list of loop_vars representing the loop variables.

Therefore, if your loop body has two outputs, you must add a corresponding argument to b and c, and a corresponding element to loop_vars:

c = lambda i, _: tf.less(i, 30)

def b(i, _):
  i = tf.add(i, 1)
  cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)),
                 lambda: tf.constant(1.0),
                 lambda: tf.constant([0.0]))

  # NOTE: This line fails with a shape error, because the output of `cond` has
  # a rank of either 0 or 1, but axis may be as large as 28.
  output = tf.expand_dims(cond, axis=i-1)
  return i, output

# NOTE: Use a shapeless `tf.placeholder_with_default()` because the shape
# of the output will vary from one iteration to the next.
r, out = tf.while_loop(c, b, [i, tf.placeholder_with_default(0., None)])

As noted in the comments, the body of the loop (specifically the call to tf.expand_dims()) seems to be incorrect and this program won't work as-is, but hopefully this is enough to get you started.
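For completeness, one way to build up the 0/1 output that avoids the tf.expand_dims() problem is to accumulate the per-element results in a tf.TensorArray. The following is only a sketch (it assumes the same data tensor as in your question), but it runs:

import numpy as np
import tensorflow as tf

data = np.random.randint(10, size=(30,))
data = tf.constant(data, dtype=tf.float32)

i = tf.constant(0)
# A TensorArray collects one scalar result per iteration.
ta = tf.TensorArray(dtype=tf.float32, size=30)

c = lambda i, _: tf.less(i, 30)

def b(i, ta):
    # Write 1.0 if data[i] > 5, else 0.0, at position i.
    value = tf.cond(tf.greater(data[i], 5.0),
                    lambda: tf.constant(1.0),
                    lambda: tf.constant(0.0))
    ta = ta.write(i, value)
    return tf.add(i, 1), ta

_, final_ta = tf.while_loop(c, b, [i, ta])
output = final_ta.stack()  # shape [30], each element 0.0 or 1.0

with tf.Session() as sess:
    print(sess.run(output))

(For this particular computation you do not actually need a loop at all: tf.cast(tf.greater(data, 5.0), tf.float32) produces the same result in a single op.)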

Upvotes: 6
