이도엽

Reputation: 73

ConcatOp : Dimensions of inputs should match

I'm developing a deep learning model with TensorFlow and Python.

However, I get an error about mismatched dimensions:

ConcatOp : Dimensions of inputs should match: shape[0] = [71,48] vs. shape[1] = [1200,24]

W_conv1 = weight_variable([1,conv_size,1,12])
b_conv1 = bias_variable([12])

h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1)+ b_conv1)
h_pool1 = max_pool_1xn(h_conv1)

W_conv2 = weight_variable([1,conv_size,12,24])
b_conv2 = bias_variable([24])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_1xn(h_conv2)

W_conv3 = weight_variable([1,conv_size,24,48])
b_conv3 = bias_variable([48])

h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)
h_pool3 = max_pool_1xn(h_conv3)


print(h_pool3.get_shape())
h3_rnn_input = tf.reshape(h_pool3, [-1, x_size // 8, 48])  # integer division so the shape entry is an int

num_layers = 1
lstm_size = 24
num_steps = 4

lstm_cell = tf.nn.rnn_cell.LSTMCell(lstm_size, initializer = tf.contrib.layers.xavier_initializer(uniform = False))
cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell]*num_layers)
init_state = cell.zero_state(batch_size,tf.float32)


cell_outputs = []
state = init_state
with tf.variable_scope("RNN") as scope:
    for time_step in range(num_steps):
        if time_step > 0: scope.reuse_variables()
        cell_output, state = cell(h3_rnn_input[:, time_step, :], state)  # ***** Error in here...

Upvotes: 6

Views: 13504

Answers (2)

You must choose the first dimensions of h3_rnn_input[:,time_step,:] and state so that dividing them by batch_size leaves no remainder (if there is any remainder, an error will be raised).

So, for your code:

h3_rnn_input[:,time_step,:] has shape [71, 48]
state has shape [1200, 24]

If we take batch_size equal to 90, for example:

71 / 90 = 0.78 ===> error
1200 / 90 = 13.33 ===> error

But if we take the shapes and batch size below, there is no problem:

h3_rnn_input[:,time_step,:] has shape [60, 48]
state has shape [1200, 24]
batch_size = 30

then

60 / 30 = 2 ======> OK, no error
1200 / 30 = 40 ======> OK, no error
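The divisibility check above can be sketched as a small helper (a minimal, hypothetical function illustrating this answer's rule, not part of the original code):

```python
def check_divisible(dim, batch_size):
    # The first dimension must split evenly into batches;
    # a nonzero remainder means leftover examples and a shape error.
    return dim % batch_size == 0

print(check_divisible(71, 90))    # batch_size 90: error case
print(check_divisible(1200, 90))  # batch_size 90: error case
print(check_divisible(60, 30))    # batch_size 30: OK
print(check_divisible(1200, 30))  # batch_size 30: OK
```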

Upvotes: 0

Kyungsu Stanley Kim

Reputation: 305

When you feed the RNN cell, the input tensor and the state tensor must have the same batch size.

The error message says that h3_rnn_input[:,time_step,:] has shape [71, 48] while state has shape [1200, 24].

What you need to do is make the first dimensions (the batch sizes) the same.

If the number 71 is not intended, check the convolution part: the stride or padding could be the cause.
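To see how a reshape can silently change the batch dimension like this, here is a NumPy sketch (the widths 4 and 8 are illustrative assumptions, not values from the question):

```python
import numpy as np

batch_size = 1200
# Assume the pooled feature map actually has width 4 per example.
h_pool3 = np.zeros((batch_size, 1, 4, 48))

# Reshaping with the correct width keeps the batch dimension intact:
good = h_pool3.reshape(-1, 4, 48)
print(good.shape)   # (1200, 4, 48)

# Reshaping with a wrong width repartitions the elements, so the
# inferred batch dimension changes -- this is how a [1200, ...]
# tensor can turn into something like a [71, ...] one downstream:
bad = h_pool3.reshape(-1, 8, 48)
print(bad.shape)    # (600, 8, 48)
```

So if the convolution/pooling stack does not produce the width used in the reshape, the `-1` dimension absorbs the difference and the batch size no longer matches the RNN state.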

Upvotes: 5
