syeh_106

Reputation: 1177

How do I implement a constant neuron in Keras?

I have the following neural network in Python/Keras:

from keras.layers import Input, Dense
from keras.models import Model

input_img = Input(shape=(784,))

encoded = Dense(1000, activation='relu')(input_img)  # L1
encoded = Dense(500, activation='relu')(encoded)     # L2
encoded = Dense(250, activation='relu')(encoded)     # L3
encoded = Dense(2, activation='relu')(encoded)       # L4

decoded = Dense(20, activation='relu')(encoded)      # L5
decoded = Dense(400, activation='relu')(decoded)     # L6
decoded = Dense(100, activation='relu')(decoded)     # L7
decoded = Dense(10, activation='softmax')(decoded)   # L8

mymodel = Model(input_img, decoded)

What I'd like is for one neuron in each of layers 4-7 to be a constant 1 (to implement the bias term), i.e. it has no input, has a fixed value of 1, and is fully connected to the next layer. Is there a simple way to do this? Thanks a lot!

Upvotes: 3

Views: 533

Answers (1)

Jonas Adler

Reputation: 10789

You could create constant input tensors:

import numpy as np
from keras import backend as K
from keras.layers import Input

constant_values = np.ones(shape)  # `shape` is whatever shape you need the constant to have
constant = Input(tensor=K.variable(constant_values))

That said, since your use case is a bias term, you should simply use use_bias=True, which is the default, as noted by @gionni.
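For completeness, here is one way the constant-neuron idea could be wired up explicitly with the functional API: append a constant 1 to a hidden layer's output via concatenation, so the next Dense layer learns a weight for it (which is exactly what that layer's built-in bias already does). This is a hypothetical sketch using TensorFlow's bundled Keras; the layer sizes are illustrative, not from the question.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda, Concatenate
from tensorflow.keras.models import Model

input_img = Input(shape=(784,))

# An ordinary hidden layer.
h = Dense(250, activation='relu')(input_img)

# A (batch, 1) tensor of ones, matching the batch size of `h`.
# It has no inputs and is fixed at 1 for every sample.
ones = Lambda(lambda x: tf.ones_like(x[:, :1]))(h)

# Concatenating makes the constant neuron fully connected to the next layer:
# Dense will learn one weight per output unit for it.
h = Concatenate()([h, ones])

out = Dense(10, activation='softmax')(h)
model = Model(input_img, out)
```

In practice this is redundant with use_bias=True on the following Dense layer, which adds the same learnable per-unit offset without the extra plumbing.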

Upvotes: 3
