Reputation: 26983
It seems pretty silly to me that TensorFlow has decided to make a constant_initializer op that takes only scalar values. It would make a lot of sense to be able to initialize variables with constant tensors:
tf.get_variable('some_var', shape=[4, 3],
                initializer=tf.constant_initializer(tf.constant([[0, 0, 0],
                                                                 [0, 0, 1],
                                                                 [0, 1, 0],
                                                                 [1, 0, 0]])))
Is using placeholders and feed_dict the only way to initialize tensor variables to custom values? This forces one to keep declarations and data initializations in separate places, which is a hassle.
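For reference, the placeholder-and-feed_dict pattern the question alludes to looks roughly like this under the TF 1.x API (a minimal sketch; the init_ph and values names are illustrative):

import numpy as np
import tensorflow as tf  # TF 1.x API

# Declaration: only the shape and dtype live here.
init_ph = tf.placeholder(tf.float32, shape=[4, 3])
some_var = tf.get_variable('some_var', initializer=init_ph)

# Initialization: the actual values are fed in somewhere else entirely.
values = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0]], np.float32)
with tf.Session() as sess:
    sess.run(some_var.initializer, feed_dict={init_ph: values})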
Upvotes: 2
Views: 4983
Reputation: 19
Remove the tf.constant and do it as below; this works in TF 1.13:
tf.get_variable('some_var', shape=[4, 3],
                initializer=tf.constant_initializer([[0, 0, 0],
                                                     [0, 0, 1],
                                                     [0, 1, 0],
                                                     [1, 0, 0]]))
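To verify the result, a minimal TF 1.x sketch:

import tensorflow as tf

v = tf.get_variable('some_var', shape=[4, 3],
                    initializer=tf.constant_initializer([[0, 0, 0],
                                                         [0, 0, 1],
                                                         [0, 1, 0],
                                                         [1, 0, 0]]))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(v))  # prints the 4x3 matrix above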
Upvotes: 1
Reputation: 126184
The tf.constant_initializer() function does not accept a tf.Tensor as an argument, but tf.get_variable() does accept a tf.Tensor as its initializer argument. This means you can write:
v = tf.get_variable('some_var', initializer=tf.constant([[0, 0, 0],
                                                         [0, 0, 1],
                                                         [0, 1, 0],
                                                         [1, 0, 0]]))
...which requires even fewer characters!
The reason tf.constant_initializer() doesn't take an arbitrary tensor is that it is designed to initialize variables of many different shapes with the same constant value for each element. For example, a statement like:
v = tf.get_variable('some_var', shape=[15, 37],
                    initializer=tf.constant_initializer(tf.constant([[0, 0, 0],
                                                                     [0, 0, 1],
                                                                     [0, 1, 0],
                                                                     [1, 0, 0]])))
...wouldn't make much sense. Arguably we could make tf.constant_initializer() accept a scalar tf.Tensor, and then it would have semantics similar to tf.fill(), but we haven't had any demand for that yet. Feel free to raise a GitHub issue if it would be useful, though!
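To make the intended scalar use concrete, here is a minimal sketch (the 0.5 value and the shapes are arbitrary):

import tensorflow as tf  # TF 1.x API

# One scalar initializer object can serve variables of any shape...
init = tf.constant_initializer(0.5)
a = tf.get_variable('a', shape=[15, 37], initializer=init)
b = tf.get_variable('b', shape=[4, 3], initializer=init)

# ...much as tf.fill() expands a scalar value to a requested shape.
c = tf.fill([15, 37], 0.5)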
Upvotes: 5