Reputation: 2067
Why does this not work:
pl_input = tf.sparse_placeholder('float32',shape=[None,30])
W = tf.Variable(tf.random_normal(shape=[30,1]), dtype='float32')
layer1a = tf.sparse_matmul(pl_input, W, a_is_sparse=True, b_is_sparse=False)
The error message is
TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("Placeholder_11:0", shape=(?, ?), dtype=int64), values=Tensor("Placeholder_10:0", shape=(?,), dtype=float32), dense_shape=Tensor("Placeholder_9:0", shape=(?,), dtype=int64)). Consider casting elements to a supported type.
I'm hoping to create a SparseTensorValue that I retrieve batches from, then feed a batch into the pl_input.
Upvotes: 1
Views: 1120
Reputation: 24641
Use tf.sparse_tensor_dense_matmul in place of tf.sparse_matmul; look at the documentation for an alternative using tf.nn.embedding_lookup_sparse.
SparseTensors
The problem is not specific to sparse_placeholder; it stems from a confusion in tensorflow's terminology. There are sparse matrices, and there is SparseTensor. The two are related but different concepts. A SparseTensor is a structure that indexes its values and can represent sparse matrices or tensors efficiently. In tensorflow's documentation, however, "sparse" often does not refer to a SparseTensor but to a plain old Tensor filled mostly with 0s. It is therefore important to look at the expected type of a function's arguments to figure out which is meant.
So for example, in the documentation of tf.matmul, the operands need to be plain Tensors and not SparseTensors, independently of the value of the xxx_is_sparse flags, which explains your error. When these flags are True, what tf.sparse_matmul actually expects is still a (dense) Tensor. In other words, these flags serve optimization purposes, not input type constraints. (Those optimizations seem to be useful only for rather large matrices, by the way.)
Upvotes: 3