user1410665

Reputation: 759

ValueError: Shape of a new variable (local1/weights) must be fully defined, but instead was (?, 1000)

I'm new to TensorFlow. After convolution, the shape of my layer is shape=(5, 5, 5, 5), dtype=float32, but after applying deconvolution I get a shape like shape=(?, 25, 25, 640), dtype=float32. That means the batch size is not shown properly (the ? sign) after deconvolution. For deconvolution, I used this Deconvolution function.

The error: ValueError: Shape of a new variable (local1/weights) must be fully defined, but instead was (?, 1000).

I already tried example1, but it didn't work well.

Upvotes: 1

Views: 287

Answers (3)

user1410665

Reputation: 759

The issue has been solved and the previous transpose/deconvolution code is running nicely. We just have to make one minor change: define the batch size in the output shape, as in the sketch below.
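A minimal sketch of what I mean, using tf.nn.conv2d_transpose rather than the custom Deconvolution helper, with the sizes assumed from the question (input (5, 5, 5, 5), output (5, 25, 25, 640), NHWC layout):

    import tensorflow as tf

    # Sketch only; shapes are assumed from the question, not copied from my code.
    batch_size = 5  # defined explicitly instead of leaving it unknown

    x = tf.placeholder(tf.float32, [batch_size, 5, 5, 5])
    # filter shape for conv2d_transpose is [height, width, out_channels, in_channels]
    weights = tf.get_variable("deconv_weights", [5, 5, 640, 5])

    output = tf.nn.conv2d_transpose(
        x, weights,
        output_shape=[batch_size, 25, 25, 640],  # batch size stated here
        strides=[1, 5, 5, 1], padding="SAME")

    # If the static shape is still partially unknown, pin it so later layers
    # (e.g. local1) can size their weight variables:
    output = tf.reshape(output, [batch_size, 25, 25, 640])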

Upvotes: 0

Taras Khalymon

Reputation: 614

From the description of the Deconvolution function you used:

  #Now output.get_shape() is equal (?,?,?,?) which can become a problem in the 
  #next layers. This can be repaired by reshaping the tensor to its shape:
  output = tf.reshape(output, output_shape)
  #now the shape is back to (?, H, W, C) or (?, C, H, W)

The batch size shouldn't be displayed, because it is designed to be unknown. This is done to preserve the ability to process batches of different sizes (the first dimension), so that you can run the model on batches of different size, for example, train on 5 images and predict 20 images in one run. A small illustration follows below.
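A minimal sketch of this (sizes assumed from the question): the placeholder declares None for its first dimension, so the same graph accepts any batch size at run time.

    import tensorflow as tf
    import numpy as np

    x = tf.placeholder(tf.float32, [None, 25, 25, 640])  # batch dimension left unknown
    y = tf.reduce_mean(x, axis=[1, 2, 3])                 # one value per example

    with tf.Session() as sess:
        print(sess.run(y, {x: np.zeros((5, 25, 25, 640))}).shape)   # (5,)  -- "train" batch
        print(sess.run(y, {x: np.zeros((20, 25, 25, 640))}).shape)  # (20,) -- "predict" batch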

And I fully agree with T. Kelher:

I recommend using this function instead:

tf.nn.conv2d_transpose()

Upvotes: 0

T. Kelher

Reputation: 1186

The difference is that in the example you sent, a tensor is being fed the wrong data. Your problem is that the weights of a deconvolutional filter are not fully defined. The weights do not depend on the batch size and need to be of fixed size, hence the error. I know you understood the error; I just want to make clear that your problem and the example's problem are quite different.

I recommend using this function instead:

 tf.nn.conv2d_transpose()

It is defined the same way you'd define a normal convolutional layer. It's built into TensorFlow, and I wonder why you didn't use it to start with. A sketch of how it fits together is below.
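A minimal sketch (sizes assumed from the question, not from your code) of using tf.nn.conv2d_transpose while keeping the batch size unknown, and then pinning the rest of the static shape so a later fully connected layer such as local1 can size its weights:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 5, 5, 5])      # batch left unknown
    # filter shape: [height, width, out_channels, in_channels]
    w = tf.get_variable("deconv_w", [5, 5, 640, 5])

    dyn_batch = tf.shape(x)[0]  # batch size resolved at run time
    output = tf.nn.conv2d_transpose(
        x, w,
        output_shape=tf.stack([dyn_batch, 25, 25, 640]),
        strides=[1, 5, 5, 1], padding="SAME")

    # Restore the static shape (batch stays unknown, the rest is fixed) so the
    # flattened dimension below is a concrete number, not '?'.
    output.set_shape([None, 25, 25, 640])

    flat_dim = 25 * 25 * 640
    flat = tf.reshape(output, [-1, flat_dim])
    with tf.variable_scope("local1"):
        weights = tf.get_variable("weights", [flat_dim, 1000])  # shape fully defined now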

Upvotes: 1
