Reputation: 2847
Here is my code:
def conv_pooling(data, sequence_length, filter_size, embedding_size, num_filters):
    filter_shape = [filter_size, embedding_size, 1, num_filters]
    w = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="w")
    b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
    conv = tf.nn.conv2d(
        data,
        w,
        strides=[1, 1, 1, 1],
        padding="VALID",
        name="conv")
    h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
    pooled = tf.nn.max_pool(
        h,
        ksize=[1, sequence_length - filter_size + 1, 1, 1],
        strides=[1, 1, 1, 1],
        padding="VALID",
        name="pool")
    return pooled
init_op = tf.global_variables_initializer()
pooled_outputs = []
with tf.Session() as sess:
    sess.run(init_op)
    for i, filter_size in enumerate(filter_sizes):
        pooled = sess.run(conv_pooling(data, sequence_length, filter_size, embedding_size, num_filters),
                          feed_dict={embedded_chars: item})
        pooled_outputs.append(pooled)
This 'data' is a tf.Variable built from the global tf.placeholder 'embedded_chars', so don't worry about whether that part works. The error happens because w and b cannot be initialized.
I also tried sess.run(tf.local_variables_initializer()), but it does not work and returns the same error. Does anyone know a way to initialize w and b here? As you can see, the shape of w changes inside the for loop.
Thank you!
Upvotes: 0
Views: 160
Reputation: 673
See the code below. That's what @mikkola means about creating your graph before initialization.
# create your computation graph first: one conv/pool op per filter size
pooled = [conv_pooling(data, sequence_length, filter_size, embedding_size, num_filters)
          for filter_size in filter_sizes]

# initialize the variables in the graph
init_op = tf.global_variables_initializer()

pooled_outputs = []
with tf.Session() as sess:
    sess.run(init_op)
    for i, filter_size in enumerate(filter_sizes):
        # run the graph to get your output
        output = sess.run(pooled[i], feed_dict={embedded_chars: item})
        pooled_outputs.append(output)
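The reason this works: tf.global_variables_initializer() returns an op that initializes only the variables that exist in the graph at the moment it is called. Any tf.Variable created afterwards (such as w and b inside conv_pooling when it is called inside sess.run) is not covered, and using it fails with "Attempting to use uninitialized value". A minimal sketch of that failure mode, assuming TensorFlow 1.x (the variable name v is just for illustration):

import tensorflow as tf

init_op = tf.global_variables_initializer()  # no variables exist yet, so this op initializes nothing
v = tf.Variable(tf.zeros([2]), name="v")     # created after the initializer op was built

with tf.Session() as sess:
    sess.run(init_op)
    sess.run(v)  # FailedPreconditionError: Attempting to use uninitialized value v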
Upvotes: 1