Uylenburgh

Reputation: 1337

Tensorflow: Filter must not be larger than the input

I want to perform convolution along a training sample of shape [n*1] and apply zero-padding as well. So far, no luck.

I am building a character-level CNN (idea taken from here)

My data consists of tweets, each initially of length 140. I filter out all non-alphabetical characters (replacing them with the empty string ''), convert the remaining characters to lowercase and encode each character as a one-hot vector.

So my data is n*m, where n is the number of training examples and m = 140*26 = 3640, since each character is encoded as a one-hot vector of length 26.
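Roughly, the encoding looks like this (a simplified sketch of my preprocessing; the names are made up):

import numpy as np
import string

ALPHABET = string.ascii_lowercase                   # 26 letters
MAX_LEN = 140

def encode_tweet(tweet):
    # drop non-alphabetical characters, lowercase the rest,
    # one-hot encode and flatten to a vector of length 140 * 26 = 3640
    chars = [c.lower() for c in tweet if c.isalpha()][:MAX_LEN]
    onehot = np.zeros((MAX_LEN, len(ALPHABET)), dtype=np.float32)
    for i, c in enumerate(chars):
        onehot[i, ALPHABET.index(c)] = 1.0
    return onehot.reshape(-1)                        # shape (3640,)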

Now I am trying to perform the convolution, and this is where I run into a problem. Essentially: 1) I try to pad a single tweet with zeros around it. 2) Then I want to slide a 3*3 filter along the tweet, which after padding I expect to be of size 3642*3, i.e. width = 3642 and height = 3.

import tensorflow as tf

F = 3  # filter size
S = 1  # stride
P = 1  # zero-padding
MAX_DOCUMENT_LENGTH = 3640
IMAGE_WIDTH = MAX_DOCUMENT_LENGTH
IMAGE_HEIGHT = 1
N_FILTERS = 20
FILTER_SHAPE1 = F
BATCH_SIZE = 257

def conv_model(X, y):
    X = tf.cast(X, tf.float32)
    y = tf.cast(y, tf.float32)
    # reshape X to a 4-D tensor with the 2nd and 3rd dimensions being image
    # width and height; the final dimension is the number of color channels
    X = tf.reshape(X, [-1, IMAGE_WIDTH, IMAGE_HEIGHT, 1])
    # first conv layer will compute N_FILTERS features for each FxF patch
    with tf.variable_scope('conv_layer1'):
        h_conv1 = tf.contrib.layers.conv2d(inputs=X, num_outputs=N_FILTERS,
                                           kernel_size=[3, 3], padding='VALID')

I get the error: ValueError: Filter must not be larger than the input: Filter: (3, 3) Input: (3640, 1)

Why is the zero-padding not applied? At least, it does not seem to have any effect...

So I change the filter size to [3,1] and I call:

h_conv1 = tf.contrib.layers.conv2d(inputs=X, num_outputs=N_FILTERS, kernel_size=[3,1], padding='VALID')

And I don't get the error.

Could someone please explain what is happening?

Also, why do we need to reshape the input as X = tf.reshape(X, [-1, IMAGE_WIDTH, IMAGE_HEIGHT, 1])?

Upvotes: 0

Views: 1943

Answers (1)

Dmytro Danevskyi

Reputation: 3159

Why is zero-padding not applied?

Use padding = 'SAME' in conv2d for zero-padding.
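For example (a rough sketch using the tf.contrib API from your question; the shapes are just your numbers, I have not run it on your data):

import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 3640])        # batch of flattened tweets
X = tf.reshape(X, [-1, 3640, 1, 1])                 # [batch, width, height, channels]
h = tf.contrib.layers.conv2d(inputs=X, num_outputs=20,
                             kernel_size=[3, 1], padding='SAME')
print(h.get_shape())                                # (?, 3640, 1, 20): width preserved by 'SAME'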

Could someone please explain what is happening?

You can't use a 3x3 filter on a 'flat' image. To use a 3x3 filter with 'VALID' padding, the input has to be at least 3 in both width and height; your input has height 1.

Also, why do we need to reshape the input as X = tf.reshape(X, [-1, IMAGE_WIDTH, IMAGE_HEIGHT, 1])?

A single image has shape [width, height, number_of_channels]. The extra dimension stands for the minibatch size; -1 just preserves the total size.
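To illustrate (a small sketch, not taken from your code), reshaping a batch of 257 flattened tweets:

import numpy as np
import tensorflow as tf

batch = np.zeros((257, 3640), dtype=np.float32)     # [n, m] as in your data
X = tf.constant(batch)
X4d = tf.reshape(X, [-1, 3640, 1, 1])
print(X4d.get_shape())                              # (257, 3640, 1, 1): [batch, width, height, channels]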

Upvotes: 2
