Reputation: 55
I'm trying to use the output of a tf.keras.layers.Lambda layer as the last layer in a tf.keras model, but tf is interpreting the Lambda layer's output as a Tensor (as opposed to a Layer) object.
The error is:
ValueError: Output tensors to a Model must be the output of a TensorFlow Layer (thus holding past layer metadata). Found: Tensor("Discriminator/mullayer/mul:0", shape=(2, 2), dtype=float32)
The code is attached below:
import tensorflow as tf
from tensorflow.contrib.keras import layers, models

# lots of stuff up here, all working fine...
logits = layers.Dense(1, name=name + 'fc')(x)  # x works fine
mullayer = layers.Lambda(lambda x: x * self.alphaVal, name="mullayer")
test = tf.constant([1.0], shape=(2, 2))        # raw tf constant, not a Keras input
testOut = mullayer(test)
outputs = [logits, testOut]
self.disc = models.Model(inputs=inp, outputs=outputs)  # raises the ValueError above
'self.alphaVal' is not a Keras variable, just a plain Python float, which I suspect may be part of the problem. If so, what is the equivalent of Keras' backend K in tf.keras?
Thanks
Upvotes: 1
Views: 6618
Reputation: 86600
test is not coming from anywhere considered a Keras layer.
If test is intended to be a model's input, it must be:
test = Input(tensor=tf.constant([1.0], shape=(2, 2)))
# there may be some implications with shape, batch size and other stuff...
Since the model then has two inputs, remember to add it when defining the Model.
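A minimal sketch of that two-input version, reusing inp, logits and mullayer from the question and its layers/models imports (illustrative only, not tested against the asker's full code):
test = layers.Input(tensor=tf.constant([1.0], shape=(2, 2)))  # constant-tensor input
testOut = mullayer(test)                                      # output now carries layer metadata
disc = models.Model(inputs=[inp, test], outputs=[logits, testOut])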
If you want a constant value to be used without it being an input, you must not pass it as the "input" of a layer. You just refer to it from inside the layer, or you create it inside the layer.
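For instance, a rough sketch of keeping the float inside the Lambda and applying the layer to a real Keras tensor (alpha stands in for the asker's self.alphaVal; the value is made up):
alpha = 0.5  # plain Python float, captured by the lambda's closure
scaled = layers.Lambda(lambda t: t * alpha, name="mullayer")(logits)
# 'scaled' comes from calling a Layer on a Keras tensor (logits), so it carries the
# layer metadata the Model constructor needs; a constant tensor could likewise be
# created with tf.constant inside the lambda body instead of being passed in
outputs = [logits, scaled]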
If you just want to test your Lambda layer:
import numpy as np
from tensorflow.contrib.keras import layers, models

inp = layers.Input((2, 2))               # symbolic input, shape (2, 2) per sample
out = mullayer(inp)                      # the Lambda layer from the question
testModel = models.Model(inp, out)
testModel.predict(np.ones((1, 2, 2)))    # one sample of shape (2, 2)
Upvotes: 2