Hans T

Reputation: 143

Tensorflow - Map-Function

I have a question concerning tf's map function; I am seeing weird behavior with it. If I do as stated in the manual:

label_tensor  # shape [150, 1]
y = tf.map_fn(lambda x: x * x, label_tensor)
#  returns y, a tensor with each element of label_tensor squared

However, if I want to use my own function, it doesn't seem to work: it always passes a tensor to the specified function, which is not written to handle tensors.

def special_fun(key):
    return int(2000 * round(float(key) / 2000))

y = tf.map_fn(special_fun, label_tensor)
#  TypeError: float() argument must be a string or a number, not 'Tensor'

I am somehow not seeing the issue here. I also tried evaluating the tensor first:

tmp_label_list = tf.Session().run(label_tensor)
print(tmp_label_list)  # prints an evaluated list, [1, 2, 3, 3, 1, 2, 2, ...]

But if I then pass [special_fun(i) for i in tmp_label_list], it raises the TypeError again, complaining that it did not expect a 'Tensor'.

What am I missing or doing wrong? Thanks in advance.

Upvotes: 3

Views: 5864

Answers (2)

xdurch0

Reputation: 10475

The key argument passed to your special_fun will be a tensor. You cannot use Python type casting on tensors, since they are purely symbolic at the time the code runs, so Python has no idea what to do with them. The crash happens at float(), but the same would happen for round() and int(). What you are looking for is likely:

def special_fun(key):
    return tf.cast(2000 * tf.round(tf.cast(key, tf.float32)/2000), tf.int32)

That is, we use TensorFlow's own functions to do the casting/rounding. Keep in mind that TensorFlow defines some overloaded operators (e.g. +, -, *), but deep down these are just calls to tf.add, tf.multiply, etc. In general, you cannot use Python built-in operators/functions on tensors.
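As a quick sketch of this point (the values here are made up, and TF 2.x eager execution is assumed), TensorFlow's own cast/round operate elementwise on whole tensors, where Python's float()/round()/int() would fail:

```python
import tensorflow as tf

# Hypothetical label values, just for illustration.
key = tf.constant([999.0, 1001.0, 2500.0])

# TensorFlow's round/cast work elementwise on the whole tensor.
# Note tf.round rounds halfway values to the nearest even integer.
y = tf.cast(2000 * tf.round(key / 2000), tf.int32)
```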

Upvotes: 1

javidcf

Reputation: 59681

In tf.map_fn, the given function is expected to accept tensors with the same shape as the given tensor, but with the first dimension removed (that is, the function receives each element as a tensor). In any case, what you are trying to do can be done directly (and more efficiently) without tf.map_fn:

y = tf.cast(2000 * tf.round(tf.cast(label_tensor, tf.float32) / 2000), tf.int32)

tf.map_fn is generally reserved for cases where vectorization is not possible. However, if you wanted to use it anyway, you would have to do something like this:

def special_fun(key):
    return tf.cast(2000 * tf.round(tf.cast(key, tf.float32) / 2000), tf.int32)

y = tf.map_fn(special_fun, label_tensor)
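To see that the mapped and the vectorized versions agree, here is a small self-contained check (label values made up, shape [3, 1] standing in for the question's [150, 1], TF 2.x eager mode assumed):

```python
import tensorflow as tf

def special_fun(key):
    return tf.cast(2000 * tf.round(tf.cast(key, tf.float32) / 2000), tf.int32)

# Made-up labels with a leading batch dimension, as in the question.
label_tensor = tf.constant([[999.0], [1001.0], [2500.0]])  # shape [3, 1]

# map_fn applies special_fun to each [1]-shaped element;
# calling special_fun on the whole tensor vectorizes the same math.
mapped = tf.map_fn(special_fun, label_tensor, dtype=tf.int32)
vectorized = special_fun(label_tensor)
```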

There is also tf.py_func, which lets you apply a regular Python function to a tensor (in this case not to each of its elements, but to the tensor as a whole, given as a NumPy array). This can also be useful in specific cases, but you should avoid it when possible, since it is less efficient and cannot be serialized.
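In TF 2.x the equivalent of tf.py_func is tf.py_function. A minimal sketch (label values made up): the wrapped function receives the whole tensor, which it can convert to a NumPy array and process with plain NumPy:

```python
import numpy as np
import tensorflow as tf

def special_fun_np(t):
    # tf.py_function hands us an eager tensor; convert it to NumPy.
    arr = t.numpy()
    return (2000 * np.round(arr / 2000)).astype(np.int32)

label_tensor = tf.constant([999.0, 1001.0, 2500.0])
y = tf.py_function(func=special_fun_np, inp=[label_tensor], Tout=tf.int32)
```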

Upvotes: 3
