Reputation: 781
I'm using tensorflow and keras to build a neural network.
I want to use a transpose convolution with keras.layers.Conv2DTranspose
(Definition in Keras documentation)
I followed this tutorial and defined my network as follows:
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Conv2DTranspose
sess = tf.Session()
batch_size = 20
model = Sequential()
model.add(Conv2DTranspose(filters = (batch_size,1,2700, 1),kernel_size = (2700,1), activation = 'relu', input_shape = (1,1,1,1)))
There is the following error:
ValueError Traceback (most recent call last)
<ipython-input-3-01a3b17fa36f> in <module>()
11 model = Sequential()
12
---> 13 model.add(Conv2DTranspose(filters = (batch_size,1,2700, 1),kernel_size = (2700,1), activation = 'relu', input_shape = (1,1,1,1)))
/usr/local/lib/python3.5/dist-packages/keras/models.py in add(self, layer)
465 # and create the node connecting the current layer
466 # to the input layer we just created.
--> 467 layer(x)
468
469 if len(layer._inbound_nodes[-1].output_tensors) != 1:
/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py in __call__(self, inputs, **kwargs)
573 # Raise exceptions in case the input is not compatible
574 # with the input_spec specified in the layer constructor.
--> 575 self.assert_input_compatibility(inputs)
576
577 # Collect input shapes to build layer.
/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py in assert_input_compatibility(self, inputs)
472 self.name + ': expected ndim=' +
473 str(spec.ndim) + ', found ndim=' +
--> 474 str(K.ndim(x)))
475 if spec.max_ndim is not None:
476 ndim = K.ndim(x)
ValueError: Input 0 is incompatible with layer conv2d_transpose_1: expected ndim=4, found ndim=5
Nevertheless, my input has dimension 4 (input_shape = (1,1,1,1)).
How can I define the input correctly and then add some layers?
Upvotes: 0
Views: 458
Reputation: 660
From the documentation of Conv2DTranspose you can see that filters
is the first positional argument and must be an integer: the number of filters (output channels) you want in the layer.
The next parameter, kernel_size
(also positional), specifies the spatial shape of each filter.
I think what you were looking for is :
model.add(Conv2DTranspose(n_filt, (2700, 1), activation='relu', input_shape=(1, 1, 1)))
where n_filt
is the number of transposed convolution filters in your layer.
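As a sanity check on what that layer produces: with stride 1 and 'valid' padding, a transposed convolution grows each spatial axis from in_size to (in_size - 1) * stride + kernel. A minimal sketch of that arithmetic (the function name is mine, not a Keras API; the formula matches Keras' shape inference when kernel >= stride):

```python
def conv2d_transpose_output_size(in_size, kernel, stride=1, padding="valid"):
    """Output length along one spatial axis of a transposed convolution.

    Mirrors Keras' shape inference for 'valid'/'same' padding,
    assuming kernel >= stride.
    """
    if padding == "valid":
        return (in_size - 1) * stride + kernel
    if padding == "same":
        return in_size * stride
    raise ValueError("unknown padding: %r" % padding)

# With input height 1 and kernel height 2700, as in the call above,
# the layer's output height is 2700:
print(conv2d_transpose_output_size(1, 2700))  # -> 2700
```

So the layer above maps an input of shape (1, 1, 1) to an output of shape (2700, 1, n_filt), which looks like the upsampling you were after.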
Notes:
Don't include the batch dimension in the input_shape
argument; if you want to fix the batch size, use batch_input_shape
instead.
Edit to clarify: if your data batch has dimensions (batch_size, dim1, dim2, dim3)
, you should pass either input_shape = (dim1, dim2, dim3)
or batch_input_shape = (batch_size, dim1, dim2, dim3)
. I recommend using input_shape
, since it leaves you free to train or predict with any batch size.
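To make the relationship concrete (the batch shape here is hypothetical, chosen to match your batch_size = 20):

```python
# Hypothetical batch of 20 samples, each of shape (1, 1, 1)
batch_shape = (20, 1, 1, 1)

# input_shape: the per-sample shape, i.e. the batch shape
# with its first (batch) axis dropped -- any batch size works
input_shape = batch_shape[1:]

# batch_input_shape: the full shape, batch dimension included
# -- the model is then fixed to batches of exactly 20
batch_input_shape = batch_shape

print(input_shape)        # -> (1, 1, 1)
print(batch_input_shape)  # -> (20, 1, 1, 1)
```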
Finally, note that the keyword itself was not the problem: passing filters=n_filt
by keyword is perfectly valid Python. The error came from the value you passed, a shape tuple
where an integer is expected.
Upvotes: 1