Reputation: 150
I'm trying to create a character-level CNN using Keras. That type of CNN requires a Convolution1D layer, but every way I try to add one to my model, I get an error at model-creation time. Here is my code:
from keras.models import Model
from keras.layers import (Input, Embedding, Convolution1D, ThresholdedReLU,
                          MaxPooling1D, Flatten, Dense, Dropout)

def char_cnn(n_vocab, max_len, n_classes):
    conv_layers = [[256, 7, 3],
                   [256, 7, 3],
                   [256, 3, None],
                   [256, 3, None],
                   [256, 3, None],
                   [256, 3, 3]]
    fully_layers = [1024, 1024]
    th = 1e-6
    embedding_size = 128

    inputs = Input(shape=(max_len,), name='sent_input', dtype='int64')

    # Embedding layer
    x = Embedding(n_vocab, embedding_size, input_length=max_len)(inputs)

    # Convolution layers
    for cl in conv_layers:
        x = Convolution1D(cl[0], cl[1])(x)
        x = ThresholdedReLU(th)(x)
        if cl[2] is not None:
            x = MaxPooling1D(cl[2])(x)
    x = Flatten()(x)

    # Fully connected layers
    for fl in fully_layers:
        x = Dense(fl)(x)
        x = ThresholdedReLU(th)(x)
        x = Dropout(0.5)(x)

    predictions = Dense(n_classes, activation='softmax')(x)

    model = Model(inputs=inputs, outputs=predictions)
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    return model
And here is the error I receive when I call the char_cnn function:
InvalidArgumentError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, require_shape_fn)
685 graph_def_version, node_def_str, input_shapes, input_tensors,
--> 686 input_tensors_as_shapes, status)
687 except errors.InvalidArgumentError as err:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
515 compat.as_text(c_api.TF_Message(self.status.status)),
--> 516 c_api.TF_GetCode(self.status.status))
517 # Delete the underlying status object from memory otherwise it stays alive
InvalidArgumentError: Negative dimension size caused by subtracting 3 from 1 for 'conv1d_26/convolution/Conv2D' (op: 'Conv2D') with input shapes: [?,1,1,256], [1,3,256,256].
How to fix it?
Upvotes: 3
Views: 966
Reputation: 53758
Your downsampling is too aggressive, and the key argument here is max_len: when it's too small, the sequence becomes too short to perform either a convolution or a max-pooling. You set pool_size=3, so each pooling layer shrinks the sequence by a factor of 3 (see the example below). I suggest you try pool_size=2.
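The length arithmetic behind this is easy to check by hand. With 'valid' padding and stride 1, a Conv1D with kernel size k maps a length-L sequence to L - k + 1 steps, and a MaxPooling1D with pool_size=p (and strides defaulting to p) maps it to (L - p) // p + 1 steps. A minimal sketch of those two formulas:

```python
def conv1d_out_len(length, kernel_size):
    # Conv1D, 'valid' padding, stride 1: one output step per window
    return length - kernel_size + 1

def maxpool1d_out_len(length, pool_size):
    # MaxPooling1D, 'valid' padding; strides defaults to pool_size
    return (length - pool_size) // pool_size + 1

# pool_size=3 shrinks the sequence by roughly a factor of 3,
# pool_size=2 is gentler and survives more stacked blocks
print(conv1d_out_len(123, 7))      # 117
print(maxpool1d_out_len(117, 3))   # 39
print(maxpool1d_out_len(117, 2))   # 58
```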
The minimal max_len that this network can handle is max_len=123. In this case the shape of x is transformed in the following way (according to conv_layers):
(?, 123, 128)
(?, 39, 256)
(?, 11, 256)
(?, 9, 256)
(?, 7, 256)
(?, 5, 256)
Setting a smaller value, like max_len=120, causes x.shape=(?, 4, 256) before the last layer, and that layer's convolution/pooling can't be performed.
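The shape list above can be reproduced, and the minimal max_len found by search, with a short simulation of the conv_layers spec using the same 'valid'-padding length formulas:

```python
# [filters, kernel_size, pool_size] per block, as in the question
conv_layers = [[256, 7, 3], [256, 7, 3], [256, 3, None],
               [256, 3, None], [256, 3, None], [256, 3, 3]]

def seq_lengths(max_len):
    """Sequence length after each conv/pool block, or None if any step is invalid."""
    lengths = []
    length = max_len
    for _filters, kernel, pool in conv_layers:
        length = length - kernel + 1          # Conv1D, 'valid' padding, stride 1
        if length < 1:
            return None                       # conv window larger than sequence
        if pool is not None:
            if length < pool:
                return None                   # pooling window larger than sequence
            length = (length - pool) // pool + 1  # MaxPooling1D, strides == pool_size
        lengths.append(length)
    return lengths

print(seq_lengths(123))   # [39, 11, 9, 7, 5, 1] -- every block fits
print(seq_lengths(120))   # None -- the last block fails at length 4
min_len = next(n for n in range(1, 1000) if seq_lengths(n) is not None)
print(min_len)            # 123
```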
Upvotes: 2