Reputation: 847
Does tf.map_fn support taking more than one tensor, the way Python's native map function does (example provided below)?
a = [1, 2, 3, 4]
b = [17, 12, 11, 10]
print(list(map(lambda x, y: x + y, a, b)))  # ==> [18, 14, 14, 14]
Upvotes: 18
Views: 13881
Reputation: 1096
You can combine the approaches described on this page to pass several tensors, plus extra arguments, to the function you map. For example:
import tensorflow as tf

def cnn(name):
    # layer factory, so one Conv1D instance can be shared across map_fn calls
    return tf.keras.layers.Conv1D(filters=64, kernel_size=4, padding='same', name=name)

pool = tf.keras.layers.GlobalAveragePooling1D()

def stack_inputs(inp1, inp2, axis=1):
    return tf.stack([inp1, inp2], axis)

def attention_op(q, p, cnn):
    # add a batch dimension of 1 so the Keras layers receive 3-D input
    q, p = tf.expand_dims(q, 0), tf.expand_dims(p, 0)
    q_encoded = pool(cnn(q))
    q_v_att = pool(tf.keras.layers.Attention()([cnn(q), cnn(p)]))
    return tf.keras.layers.Concatenate()([q_encoded, q_v_att])

cnn1 = cnn('cnn_layer_1')
inp1 = tf.random.normal([8, 20, 3])  # example inputs: [batch, steps, channels]
inp2 = tf.random.normal([8, 20, 3])
stacked_inputs = stack_inputs(inp1, inp2)  # [batch, 2, steps, channels]
map_result = tf.map_fn(lambda x: attention_op(x[0], x[1], cnn1),
                       stacked_inputs, dtype=tf.float32)
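Note that attention_op closes over cnn1, so the same convolution weights are reused for every pair of slices that map_fn visits.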
Upvotes: 0
Reputation: 339
If the tensors are of the same shape (most cases), stack them along a new dimension and slice them apart inside the map function:
import tensorflow as tf

# declare the input tensors
a = tf.constant([1, 2, 3, 4])
b = tf.constant([17, 12, 11, 10])

# NOTE: stack the tensors, because tf.map_fn only accepts one `elems` input
ab = tf.stack([a, b], 1)  # shape [4, 2]

def map_operation(value_ab):
    # each row of ab holds one value from a and one from b
    value_a = value_ab[0]
    value_b = value_ab[1]
    return value_a + value_b

# print(list(map(lambda x, y: x + y, a, b)))  # ==> [18, 14, 14, 14]
# map_operation() is applied to each row of ab
map_result = tf.map_fn(map_operation, ab, dtype=tf.int32)

with tf.Session() as sess:
    print(sess.run(map_result))  # [18 14 14 14]
Reference: LINK
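In TF 2.x the same stacking approach runs eagerly, without a session (a minimal sketch, assuming TF 2.x):

import tensorflow as tf

a = tf.constant([1, 2, 3, 4])
b = tf.constant([17, 12, 11, 10])
ab = tf.stack([a, b], 1)  # shape [4, 2]
map_result = tf.map_fn(lambda v: v[0] + v[1], ab, dtype=tf.int32)
print(map_result.numpy())  # [18 14 14 14]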
Upvotes: 5
Reputation: 847
As of today, I see that map_fn has been enhanced to take two tensors, as the documentation says: "elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be applied to fn." The example (though given in numpy form) also shows that it can take two tensors; I'm copying it here.
import numpy as np
import tensorflow as tf

elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
alternate = tf.map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
# alternate == [-1, 2, -3]
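Applying this to the numbers in the question (a sketch, assuming TF 2.3+, where fn_output_signature replaces the deprecated dtype argument):

import tensorflow as tf

a = tf.constant([1, 2, 3, 4])
b = tf.constant([17, 12, 11, 10])
# elems is a tuple, so fn receives one slice of each tensor per step
result = tf.map_fn(lambda x: x[0] + x[1], (a, b),
                   fn_output_signature=tf.int32)
print(result)  # tf.Tensor([18 14 14 14], shape=(4,), dtype=int32)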
Upvotes: 18
Reputation: 334
Not natively, but here's a quick function that achieves it:
import tensorflow as tf

def map(fn, arrays, dtype=tf.float32):
    # assumes all arrays have the same leading dimension
    # (note: this shadows Python's built-in map)
    indices = tf.range(tf.shape(arrays[0])[0])
    out = tf.map_fn(lambda ii: fn(*[array[ii] for array in arrays]),
                    indices, dtype=dtype)
    return out

# example: batch affine transformation
x = tf.random.normal([4, 5, 6])
M = tf.random.normal([4, 6, 10])
b = tf.random.normal([4, 10])
f = lambda x0, M0, b0: tf.matmul(x0, M0) + b0
batch_y = map(f, [x, M, b])  # shape [4, 5, 10]
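As a quick sanity check (a hypothetical usage note, assuming TF 2.x, where tf.matmul batches over the leading dimension), the result should match a direct batched computation:

direct = tf.matmul(x, M) + b[:, None, :]  # [4, 5, 10]
print(tf.reduce_max(tf.abs(batch_y - direct)).numpy())  # ~0.0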
Upvotes: 5
Reputation: 8536
The source code shows that this function takes only one elems tensor:
def map_fn(fn, elems, dtype=None, parallel_iterations=10, back_prop=True,
           swap_memory=False, name=None):
I don't see any *args or **kwargs parameters.
Upvotes: 1