Reputation: 901
I want to build a neural network that takes two separate matrices with the same dimensions (for example grey-scale images) as input and outputs a value between -1 and 1 (probably via tanh).
I would like to build the network so that there are two separate convolutional layers as inputs, each one taking one matrix (or image), and then have these two branches combined in a following layer. So I want it to look something like this:
My first question is: can I do this in Keras (or, if not, in TensorFlow)? The second question is: does it make sense? Because I could also very easily stack the two matrices together and only use one Conv2D layer. So something like this (a rough sketch of the single-input variant follows below):
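Just to illustrate the single-input alternative, here is a minimal sketch of what I mean; the layer sizes and names are only placeholders, not my actual setup:

    import numpy as np
    from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
    from keras.models import Model

    x, y = 64, 64                                  # placeholder image dimensions
    combined_input = Input(shape=(x, y, 2))        # both matrices stacked as 2 channels
    h = Conv2D(32, (3, 3), activation='relu')(combined_input)
    h = MaxPooling2D(pool_size=(2, 2))(h)
    h = Flatten()(h)
    out = Dense(1, activation='tanh')(h)           # single output in [-1, 1]
    model = Model(inputs=combined_input, outputs=out)

    # the two matrices would simply be stacked along the channel axis:
    # combined = np.stack([matrix_a, matrix_b], axis=-1)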
Explaining what I want to do exactly would go too far here. But can you imagine a situation where the first version would make more sense?
Upvotes: 3
Views: 2152
Reputation: 2050
You can do that in Keras, and it makes sense if the inputs are different. To do so in Keras, you first need a multiple-input model, and then you concatenate the outputs of the two convolutional branches.
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from keras.models import Model

# one input per matrix; grey-scale images need an explicit channel dimension
input_1 = Input(shape=(x, y, 1), name='input_1')
input_2 = Input(shape=(x, y, 1), name='input_2')

# first convolutional branch
c1 = Conv2D(filter_size, kernel_size)(input_1)
p1 = MaxPooling2D(pool_size=(2, 2))(c1)
f1 = Flatten()(p1)

# second convolutional branch
c2 = Conv2D(filter_size, kernel_size)(input_2)
p2 = MaxPooling2D(pool_size=(2, 2))(c2)
f2 = Flatten()(p2)

# merge both branches and classify
x = concatenate([f1, f2])
x = Dense(num_classes, activation='sigmoid')(x)

model = Model(inputs=[input_1, input_2], outputs=[x])
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
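Training then just means passing both inputs as a list; a minimal sketch, assuming hypothetical arrays images_a and images_b of shape (n_samples, x, y, 1) and matching labels:

    model.fit([images_a, images_b], labels, epochs=10, batch_size=32)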
Depending on your data, it could also make sense to share the convolutional layers; in that case you just define them once and reuse them for both inputs. The weights are shared between the two branches.
# define the layers once ...
conv = Conv2D(filter_size, kernel_size)
pooling = MaxPooling2D(pool_size=(2, 2))
flatten = Flatten()

# ... and apply them to both inputs, so the weights are shared
f1 = flatten(pooling(conv(input_1)))
f2 = flatten(pooling(conv(input_2)))
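The rest of the model is then built exactly as in the first snippet, for example:

    # the merged head is identical to the non-shared version
    x = concatenate([f1, f2])
    x = Dense(num_classes, activation='sigmoid')(x)
    model = Model(inputs=[input_1, input_2], outputs=[x])
    model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])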
Upvotes: 5