Reputation: 303
I am implementing a custom average pooling layer, where each neuron computes the mean of its inputs, multiplies the result by a learnable coefficient, adds a learnable bias term, and finally applies the activation function.
from tensorflow.keras.layers import Layer
from keras import backend as K

class Average_Pooling_Layer(Layer):
    def __init__(self, output_dimension, **kwargs):
        super(Average_Pooling_Layer, self).__init__(**kwargs)
        self.output_dimension = output_dimension

    def build(self, input_shape):
        self.weights = self.add_weight(name='weights2',
                                       shape=(input_shape[0],
                                              int(self.output_dimension[0]),
                                              int(self.output_dimension[1]),
                                              int(self.output_dimension[2])),
                                       initializer='uniform',
                                       trainable=True)
        super(Average_Pooling_Layer, self).build(input_shape)

    def call(self, inputs):
        return K.tanh(inputs * self.weights)

    def compute_output_shape(self, input_shape):
        return input_shape
Code Usage
model = tf.keras.Sequential()
stride = 1
c1 = model.add(Conv2D(6, kernel_size=[5, 5], strides=(stride, stride), padding="valid",
                      input_shape=(32, 32, 1), activation='tanh'))
s2_before_activation = model.add(AveragePooling2D(pool_size=(2, 2), strides=(2, 2)))
s2 = model.add(Average_Pooling_Layer(output_dimension=(14, 14, 6)))
I am getting the error "Failed to convert object of type to Tensor. Contents: (Dimension(None), 14, 14, 6). Consider casting elements to a supported type." The "None" is the batch size, which comes from the previous layer.
How can I solve this?
Upvotes: 2
Views: 3901
Reputation: 6176
Your error is caused by the data type: input_shape[0] returns a <class 'tensorflow.python.framework.tensor_shape.Dimension'> instead of an int.

You can replace input_shape[0] with tf.TensorShape(input_shape).as_list()[0]. But your weight dimensions are still not right, and you will have to adjust them to your needs.
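To illustrate the conversion (a minimal sketch; the shape values here are just the ones from your model, not anything special):

```python
import tensorflow as tf

# A shape as build() would receive it: the batch dimension is unknown (None)
shape = tf.TensorShape([None, 14, 14, 6])

# as_list() yields plain Python ints (and None for the unknown batch dim),
# which add_weight can digest, unlike Dimension objects
print(shape.as_list())  # [None, 14, 14, 6]
```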
Edit
If you get the error "can't set attribute", rename your weight variable: self.weights is already a property of the base Layer class, so use something like self.weights_new instead.
Upvotes: 1