Reputation: 13
I have data of dimension 24*64*64*10 (excluding the batch size). I want to split the input into 24 inputs of dimension 64*64*10, perform a Conv2D on each of them, and then concatenate the results to get the 4D data back for further processing.
Any help regarding the implementation would be appreciated. I am working with Keras.
Edit: I tried the following code to perform the 2D convolution:
from keras.layers import Input, Conv2D, Lambda

num_ch= 24
input= Input(shape=(64,64,10,num_ch))
print(input.shape)
branch_out= []
for i in range(num_ch):
    out= Lambda(lambda x: x[:,:,:,:,i] )(input)
    print(out.shape)
    out= Conv2D(10, kernel_size=(3,3),strides= (1,1), padding='same', data_format= 'channels_last')(input)
    branch_out.append(out)
I got the following error:
(?, 64, 64, 10, 24)
(?, 64, 64, 10)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-83-51977f4edbba> in <module>
7 out= Lambda(lambda x: x[:,:,:,:,i] )(input)
8 print(out.shape)
----> 9 out= Conv2D(10, kernel_size=(3,3),strides= (1,1), padding='same', data_format= 'channels_last')(input)
10 branch_out.append(out)
~/anaconda3/lib/python3.7/site-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
412 # Raise exceptions in case the input is not compatible
413 # with the input_spec specified in the layer constructor.
--> 414 self.assert_input_compatibility(inputs)
415
416 # Collect input shapes to build layer.
~/anaconda3/lib/python3.7/site-packages/keras/engine/base_layer.py in assert_input_compatibility(self, inputs)
309 self.name + ': expected ndim=' +
310 str(spec.ndim) + ', found ndim=' +
--> 311 str(K.ndim(x)))
312 if spec.max_ndim is not None:
313 ndim = K.ndim(x)
ValueError: Input 0 is incompatible with layer conv2d_25: expected ndim=4, found ndim=5
Upvotes: 0
Views: 591
Reputation: 289
Too late to answer, but for those who have the same question...
I think you can just pass it to the Conv layer (maybe I'm wrong!). The code below is an example from the Keras docs: link
>>> # With extended batch shape [4, 7]:
>>> input_shape = (4, 7, 28, 28, 3)
>>> x = tf.random.normal(input_shape)
>>> y = tf.keras.layers.Conv2D(
... 2, 3, activation='relu', input_shape=input_shape[2:])(x)
>>> print(y.shape)
(4, 7, 26, 26, 2)
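As a rough sketch (an assumption, not tested on the asker's exact data): if the 24 slices sit on the first non-batch axis, i.e. the tensor has shape (batch, 24, 64, 64, 10), the same extended-batch behaviour should apply directly in recent TF 2.x versions:

import tensorflow as tf

# assumed layout: (batch, 24, 64, 64, 10); the 24 slices act as an extra batch dim
x = tf.random.normal((2, 24, 64, 64, 10))
y = tf.keras.layers.Conv2D(10, 3, padding='same')(x)
print(y.shape)   # expected: (2, 24, 64, 64, 10)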
Another way is to use the TimeDistributed layer; have a look at this link:
from keras.models import Sequential
from keras.layers import TimeDistributed, Conv2D

model = Sequential()
model.add(TimeDistributed(Conv2D(5, (3,3), padding='same'), input_shape=(10, 100, 100, 3)))
model.summary()
model summary:
Layer (type)                 Output Shape               Param #
=================================================================
time_distributed_2 (TimeDist (None, 10, 100, 100, 5)    140
=================================================================
Total params: 140
Trainable params: 140
Non-trainable params: 0
_________________________________________________________________
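Applied to the asker's case, this might look like the following sketch (assuming the 24 slices are on the first non-batch axis, so the per-sample shape is (24, 64, 64, 10)):

from keras.models import Sequential
from keras.layers import TimeDistributed, Conv2D

model = Sequential()
# the wrapped Conv2D is applied independently to each of the 24 slices
model.add(TimeDistributed(Conv2D(10, (3, 3), padding='same'),
                          input_shape=(24, 64, 64, 10)))
model.summary()   # output shape should be (None, 24, 64, 64, 10)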
Upvotes: 0
Reputation: 311
You have a typo in this line:
out= Conv2D(10, kernel_size=(3,3),strides= (1,1), padding='same', data_format= 'channels_last')(input)
Change it to:
out= Conv2D(10, kernel_size=(3,3),strides= (1,1), padding='same', data_format= 'channels_last')(out)
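For completeness, a minimal sketch of the full split-convolve-merge pipeline after that fix (an assumption of how the pieces fit together, not part of the original answer), keeping the asker's (64, 64, 10, 24) layout and concatenating the per-slice outputs back along the last axis:

from keras.layers import Input, Lambda, Conv2D, Reshape, Concatenate
from keras.models import Model

num_ch = 24
inp = Input(shape=(64, 64, 10, num_ch))

branch_out = []
for i in range(num_ch):
    # slice i -> (batch, 64, 64, 10); idx=i avoids late binding of the loop variable
    sl = Lambda(lambda x, idx=i: x[:, :, :, :, idx])(inp)
    sl = Conv2D(10, kernel_size=(3, 3), strides=(1, 1), padding='same')(sl)
    # restore a trailing singleton axis so the slices can be concatenated back
    sl = Reshape((64, 64, 10, 1))(sl)
    branch_out.append(sl)

merged = Concatenate(axis=-1)(branch_out)   # (batch, 64, 64, 10, 24)
model = Model(inp, merged)
model.summary()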
Upvotes: 2