hirschme

Reputation: 894

Keras custom layer that does not change the output shape

I am trying to implement a layer in Keras that adds a weight element-wise to each input. The input, weights, and output therefore have exactly the same shape. Nevertheless, I am struggling to implement this, and I haven't found any example of a custom layer that does not change the input shape.

from keras.engine.topology import Layer
import keras.backend as K

class SumationLayer(Layer):

    def __init__(self, **kwargs):
        self.output_dim = K.placeholder(None)
        super(SumationLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(SumationLayer, self).build(input_shape)  # Be sure to call this somewhere!
        self.output_dim = (input_shape[0], self.output_dim)

    def call(self, x):
        return x + self.kernel

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)

This outputs the following error:

TypeError: Value passed to parameter 'shape' has DataType float32 not in list of allowed values: int32, int64

If I implement the layer exactly like the Keras example, then I have to pass the output shape at initialization, and this produces undesired behavior (it flattens the output by fully connecting the inputs).
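For reference, the custom-layer example from the Keras documentation that I am referring to looks roughly like this; the output_dim required at construction is what produces the Dense-like kernel and the fully connected output:

from keras.engine.topology import Layer
import keras.backend as K

class MyLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Dense-like weight matrix of shape (input_dim, output_dim).
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(MyLayer, self).build(input_shape)

    def call(self, x):
        # Matrix multiplication fully connects the inputs and changes the last dimension.
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)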

Upvotes: 2

Views: 1099

Answers (1)

Moondra

Reputation: 4511

Playing around with the code, I got it to work like this. However, this only works for a 2-dimensional input tensor (excluding the batch dimension); if you need a 3-dimensional tensor, you would need to include input_shape[3] as well.

from keras.layers import Layer, Input
from keras import backend as K
from keras import Model
import tensorflow as tf

class SumationLayer(Layer):

    def __init__(self, **kwargs):
        super(SumationLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable with the same non-batch shape as the input.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], input_shape[2]),
                                      initializer='uniform',
                                      trainable=True)
        super(SumationLayer, self).build(input_shape)  # Be sure to call this somewhere!

    def call(self, x):
        # Element-wise addition; the kernel is broadcast across the batch dimension.
        return x + self.kernel

    def compute_output_shape(self, input_shape):
        # The output shape is identical to the input shape.
        return (input_shape[0], input_shape[1], input_shape[2])


input = Input(shape=(10, 10))
output = SumationLayer()(input)
model = Model(inputs=[input], outputs=[output])
model.summary()
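If you would rather not hard-code the number of dimensions, a rank-agnostic sketch that builds the kernel from all non-batch dimensions would look roughly like this (ElementwiseBias is just an illustrative name):

from keras.layers import Layer, Input
from keras import Model

class ElementwiseBias(Layer):

    def build(self, input_shape):
        # One trainable weight per input element: use every dimension except the batch axis.
        self.kernel = self.add_weight(name='kernel',
                                      shape=input_shape[1:],
                                      initializer='uniform',
                                      trainable=True)
        super(ElementwiseBias, self).build(input_shape)

    def call(self, x):
        # Broadcasting adds the same kernel to every sample in the batch.
        return x + self.kernel

    def compute_output_shape(self, input_shape):
        # Output shape is identical to the input shape, whatever the rank.
        return input_shape

inp = Input(shape=(4, 10, 10))
out = ElementwiseBias()(inp)
Model(inputs=[inp], outputs=[out]).summary()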

Upvotes: 1
