j35t3r

Reputation: 1533

InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,246,381,3] vs. shape[1] = [1,252,367,3]

This is my code snippet showing how I concatenate all training images (left, right, and mask separately). The variables l and r are assigned tensors of shape [4, ?, ?, 3].

with tf.Session() as session:
    # Take the first four images of each kind.
    l_train = [x.l_img for x in images][:4]
    r_train = [x.r_img for x in images][:4]
    m_train = [x.mask for x in images][:4]
    # Stack them along the batch dimension (axis 0).
    l = tf.concat(l_train, 0)
    r = tf.concat(r_train, 0)
    m = tf.concat(m_train, 0)

    l.eval()

When I call eval(), I get this error:

Traceback (most recent call last):

  File "/home/test/anaconda2/envs/tensorflow/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)

  File "<ipython-input-5-f78dccf94f7f>", line 1, in <module>
l.eval()

  File "/home/test/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 606, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)

  File "/home/test/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3928, in _eval_using_default_session
return session.run(tensors, feed_dict)

  File "/home/test/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)

  File "/home/test/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 997, in _run
feed_dict_string, options, run_metadata)

  File "/home/test/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1132, in _do_run
target_list, options, run_metadata)

 File "/home/test/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1152, in _do_call
raise type(e)(node_def, op, message)

InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,246,381,3] vs. shape[1] = [1,252,367,3]
 [[Node: concat = ConcatV2[N=4, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](Reading/reshape_t_left/_1, Reading/reshape_t_left_1/_3, Reading/reshape_t_left_2/_5, Reading/reshape_t_left_3/_7, concat/axis)]]
 [[Node: concat/_9 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_370_concat", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
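
For context, tf.concat along axis 0 requires every other dimension to match, and here the images are 246x381 vs. 252x367. Below is a minimal sketch of the constraint and one possible workaround (resizing to a common size; the target size is taken from the traceback, and note that tf.image.resize_images distorts aspect ratios):

    import tensorflow as tf

    a = tf.zeros([1, 246, 381, 3])
    b = tf.zeros([1, 252, 367, 3])

    # Concatenating a and b directly fails: axis 0 is the concat axis,
    # so the remaining dimensions (246x381 vs. 252x367) must agree.
    # Resizing to a common spatial size first makes the concat valid:
    b_resized = tf.image.resize_images(b, [246, 381])
    c = tf.concat([a, b_resized], 0)  # shape [2, 246, 381, 3]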

How can I train my network with dynamic patch sizes? Or alternatively, how can I loop over my images and feed the CNN one image after another? This is how I currently run a training step:

_, summary_str, costs = sess.run(
    [optimizer, merged_summary_op, cost_function],
    feed_dict={t_im0: l.eval(), t_im1: r.eval(), t_label: m.eval()})
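
A minimal sketch of the per-image variant, assuming the placeholders t_im0, t_im1, and t_label are declared with dynamic spatial dimensions (shape [1, None, None, 3], see the answer below) and with num_epochs as an illustrative name: skip the tf.concat batching and run one image pair per step.

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(num_epochs):
            for x in images:
                # One image pair per step, i.e. an effective batch size of 1;
                # each l_img/r_img/mask tensor already has shape [1, H, W, 3].
                _, summary_str, costs = sess.run(
                    [optimizer, merged_summary_op, cost_function],
                    feed_dict={t_im0: x.l_img.eval(),
                               t_im1: x.r_img.eval(),
                               t_label: x.mask.eval()})

Evaluating the reading tensors once up front and feeding the resulting NumPy arrays would avoid re-running them on every step, but the sketch keeps the eval() pattern from the question.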

Upvotes: 0

Views: 3348

Answers (1)

LKM

Reputation: 2450

I'm having exactly the same issue, and I think it is because the batch size is 1 in the Faster R-CNN paper.
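
If so, the shape mismatch disappears once the batch dimension is fixed at 1 and the spatial dimensions are left dynamic, because a single image never has to match another one. A sketch of such placeholder declarations (the names mirror the question; the mask is assumed to be 3-channel like the images):

    import tensorflow as tf

    # Height and width are None, so every fed image may have its own size;
    # only the batch dimension (1) and the channel count (3) are fixed.
    t_im0 = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='left')
    t_im1 = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='right')
    t_label = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='mask')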

Upvotes: 1
