Waleed

Reputation: 189

Pass additional parameter in call function of custom keras layer

I created a custom Keras layer with the purpose of manually changing the activations of the previous layer during inference. The following is a basic layer that simply multiplies the activations by a number.

import numpy as np
from keras import backend as K
from keras.layers import Layer
import tensorflow as tf

class myLayer(Layer):

    def __init__(self, n=None, **kwargs):
        self.n = n
        super(myLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.output_dim = input_shape[1]
        super(myLayer, self).build(input_shape)

    def call(self, inputs):
        # scale the incoming activations by n
        changed = tf.multiply(inputs, self.n)

        forTest  = changed
        forTrain = inputs

        # unchanged activations in training, scaled ones at inference
        return K.in_train_phase(forTrain, forTest)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)
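(For reference, `K.in_train_phase(train_out, test_out)` returns its first argument during training and its second at inference, which is why the multiplication above only takes effect at evaluation time. A minimal sketch forcing the phase flag:)

```python
from keras import backend as K

# K.in_train_phase(train_out, test_out) picks train_out while training
# and test_out at inference; passing training=0 forces the inference branch
train_out = K.constant(1.0)
test_out = K.constant(2.0)
picked = K.in_train_phase(train_out, test_out, training=0)
value = K.eval(picked)  # the inference-side value
```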

It works fine when I use it like this with the IRIS dataset:

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(units, input_shape=(5,)))
model.add(Activation('relu'))
model.add(myLayer(n=3))
model.add(Dense(units))
model.add(Activation('relu'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.summary()

However, I now want to move 'n' from __init__ to the call function so I can apply different values of n after training, to evaluate the model. The idea is to have a placeholder in place of n which can be initialized with some value before calling evaluate. I am not sure how to achieve this. What would be the correct approach? Thanks

Upvotes: 3

Views: 7228

Answers (1)

Daniel Möller

Reputation: 86600

You should work the same way the Concatenate layer does.

Layers that take multiple inputs rely on the inputs (and the input shapes) being passed as a list.

See the verification part in build, call and compute_output_shape:

def call(self, inputs):
    if not isinstance(inputs, list):
        raise ValueError('This layer should be called on a list of inputs.')

    mainInput = inputs[0]
    nInput = inputs[1]

    changed = tf.multiply(mainInput, nInput)
    # I suggest using an equivalent function from K instead of tf here,
    # in case you ever want to try theano or another backend later.
    # If n is a scalar, then just "changed = nInput * mainInput" is ok.

    # ....the rest of the code....

Then you call this layer passing a list to it. But for that, I strongly recommend you move away from Sequential models. They're purely limiting.

from keras.models import Model
from keras.layers import Input, Dense

inputTensor = Input((5,))  # the original input (from your input_shape)

# this is just a suggestion, to have n as a manually created var;
# you can figure out your own ways of calculating n later
nInput = Input((1,))
    # old answer: nInput = Input(tensor=K.variable([n]))

# creating the graph
out = Dense(units, activation='relu')(inputTensor)

# your layer here uses the output of the dense layer and the nInput
out = myLayer()([out, nInput])
    # here you will have to handle n with the same number of samples as x;
    # you can use `inputs[1][0,0]` inside the layer

out = Dense(units, activation='relu')(out)
out = Dense(3, activation='softmax')(out)

# create the model with two inputs and one output:
model = Model([inputTensor, nInput], out)
    # nInput is now a part of the model's inputs

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])

Using the old answer, with Input(tensor=...), the model will not demand, as it usually would, that you pass two inputs to the fit and predict methods.

But using the new option, with Input(shape=...), it will demand two inputs, so:

nArray = np.full((X_train.shape[0], 1), n)
model.fit([X_train, nArray], Y_train, ....)

Unfortunately, I couldn't make it work with n having only one element. It must have exactly the same number of samples as the main input (this is a Keras limitation).
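Given that constraint, evaluating under several values of n is just a matter of tiling each value into a per-sample column (a sketch; X_test and Y_test are assumed names from your IRIS split):

```python
import numpy as np

def n_array(n, num_samples):
    """Column of n repeated once per sample, for the model's second input."""
    return np.full((num_samples, 1), n, dtype='float32')

# assumed usage with the trained two-input model:
# for n in [1.0, 2.0, 3.0]:
#     print(n, model.evaluate([X_test, n_array(n, X_test.shape[0])], Y_test))
```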

Upvotes: 3
