Reputation: 331
I have created an RNN with the Keras functional API in TensorFlow 2.0, where the following piece of code works:
sum_input = keras.Input(shape=(UNIT_SIZE, 256,), name='sum')
x = tf.unstack(sum_input, axis=2, num=256)
t_sum = x[0]
for i in range(len(x) - 1):
    t_sum = keras.layers.Add()([t_sum, x[i+1]])
sum_m = keras.Model(inputs=sum_input, outputs=t_sum, name='sum_model')
I then had to change to TensorFlow 1.13, which gives me the following error:
ValueError: Output tensors to a Model must be the output of a TensorFlow `Layer` (thus holding past layer metadata). Found: Tensor("add_254/add:0", shape=(?, 40), dtype=float32)
I don't understand why the output tensor is not from a TensorFlow layer, since t_sum is the output of keras.layers.Add.
I have tried wrapping parts of the code in keras.layers.Lambda, as suggested in ValueError: Output tensors to a Model must be the output of a TensorFlow Layer, but it doesn't seem to work for me.
Upvotes: 4
Views: 3877
Reputation: 8585
The problem is not with the Add() layer but with tf.unstack(): it is not an instance of keras.layers.Layer. You can just wrap it up as a custom layer:
import tensorflow as tf

class Unstack(tf.keras.layers.Layer):
    def __init__(self):
        super(Unstack, self).__init__()

    def call(self, inputs, num=256):
        return tf.unstack(inputs, axis=2, num=num)

x = Unstack()(sum_input)
Or, instead of subclassing, you can do it using a Lambda layer:
x = tf.keras.layers.Lambda(lambda t: tf.unstack(t, axis=2, num=256))(sum_input)
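Putting the pieces together, here is a minimal end-to-end sketch of your model with the Lambda approach. UNIT_SIZE = 40 is an assumption taken from the (?, 40) shape in your error message:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

UNIT_SIZE = 40  # assumed from the shape (?, 40) in the error message

sum_input = keras.Input(shape=(UNIT_SIZE, 256), name='sum')

# Wrap tf.unstack in a Lambda layer so its outputs carry Keras layer metadata
x = keras.layers.Lambda(lambda t: tf.unstack(t, axis=2, num=256))(sum_input)

# Sum the 256 slices pairwise with Add layers, as in your original loop
t_sum = x[0]
for i in range(len(x) - 1):
    t_sum = keras.layers.Add()([t_sum, x[i + 1]])

sum_m = keras.Model(inputs=sum_input, outputs=t_sum, name='sum_model')

# Sanity check: every element of a ones-input is the sum of 256 ones
out = sum_m.predict(np.ones((2, UNIT_SIZE, 256), dtype='float32'))
```

As a side note, keras.layers.Add accepts a list of any length, so keras.layers.Add()(x) would sum all 256 slices in a single layer instead of the pairwise loop.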
Upvotes: 4