Reputation: 727
I am new to TensorFlow, so I'm not sure the terminology in my title is appropriate. Basically, I saw example code like the following that reshapes a tensor and multiplies it by a weight vector:
embed_dim = xl.shape[-1]
w = tf.Variable(tf.random.truncated_normal(shape=(embed_dim,), stddev=0.01))  # shape (221,)
x1_transpose = tf.reshape(xl, [-1, 1, embed_dim])  # shape (None, 1, 221)
x_lw = tf.tensordot(x1_transpose, w, axes=1)  # shape (None, 1)
I am wondering if I can use the tf.linalg.matmul function, with something like tf.linalg.matmul(xl, w, transpose_a=True, transpose_b=False), to achieve the same result. I feel I would need to convert or create a w of shape TensorShape([221, None]) for that, but I am not sure how. The shapes involved are:
xl.shape
>> TensorShape([None, 221])
w.shape
>> TensorShape([221])
Upvotes: 1
Views: 359
Reputation: 26698
If you have something like this:
import tensorflow as tf
tf.random.set_seed(123)
xl = tf.keras.layers.Input((221,))
embed_dim = xl.shape[-1]
w = tf.Variable(tf.random.truncated_normal(shape=(embed_dim,), stddev=0.01))  # shape (221,)
x1_transpose = tf.reshape(xl, [-1, 1, embed_dim])
x_lw = tf.tensordot(x1_transpose, w, axes=1)
model = tf.keras.Model(xl, x_lw)
example = tf.random.normal((2, 221))
print(model(example))
tf.Tensor(
[[-0.0661035 ]
[ 0.15439653]], shape=(2, 1), dtype=float32)
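To unpack what that does: tf.tensordot(x1_transpose, w, axes=1) contracts the last axis of x1_transpose (size embed_dim) with the single axis of w, so it is just a weighted sum over the embedding dimension. Here is a minimal sketch with small stand-in shapes of my own (not from your code) that checks this against an explicit reduce_sum:
import tensorflow as tf
tf.random.set_seed(123)
# Stand-ins for x1_transpose and w, with embed_dim shrunk to 5.
x = tf.random.normal((2, 1, 5))
w = tf.random.normal((5,))
# axes=1 contracts the last axis of x with the only axis of w.
via_tensordot = tf.tensordot(x, w, axes=1)      # shape (2, 1)
via_reduce_sum = tf.reduce_sum(x * w, axis=-1)  # the same weighted sum, written out
print(tf.reduce_max(tf.abs(via_tensordot - via_reduce_sum)))  # ~0.0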
Then the equivalent of that using tf.linalg.matmul would be something like this:
import tensorflow as tf
tf.random.set_seed(123)
xl = tf.keras.layers.Input((221,))
embed_dim = xl.shape[-1]
w = tf.Variable(tf.random.truncated_normal(shape=(embed_dim,), stddev=0.01))  # shape (221,)
xl_expanded = tf.expand_dims(xl, axis=1)
w = tf.expand_dims(w, axis=1)
x_lw = tf.squeeze(tf.linalg.matmul(xl_expanded, w, transpose_a=False, transpose_b=False), axis=1)
model = tf.keras.Model(xl, x_lw)
example = tf.random.normal((2, 221))
print(model(example))
tf.Tensor(
[[-0.0661035]
[ 0.1543966]], shape=(2, 1), dtype=float32)
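Since w is just a vector here, another option (my own variation, not required for the above) is to leave xl as (None, 221) and only reshape w into a column vector of shape (221, 1); tf.linalg.matmul then returns (None, 1) directly, with no expanding or squeezing of xl:
import tensorflow as tf
tf.random.set_seed(123)
xl = tf.keras.layers.Input((221,))
embed_dim = xl.shape[-1]
w = tf.Variable(tf.random.truncated_normal(shape=(embed_dim,), stddev=0.01))
# Only w is reshaped: (None, 221) @ (221, 1) -> (None, 1).
x_lw = tf.linalg.matmul(xl, tf.expand_dims(w, axis=-1))
model = tf.keras.Model(xl, x_lw)
example = tf.random.normal((2, 221))
print(model(example))  # shape (2, 1)
tf.linalg.matvec(xl, w) would also work if an output of shape (None,) instead of (None, 1) is acceptable.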
Interestingly, there seems to be a small rounding difference between the two methods. Using xl_expanded @ w also yields the same result as tf.linalg.matmul. In general, you should be able to use either method for your use case:
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3], dtype=tf.float32)
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2], dtype=tf.float32)
option1 = tf.tensordot(a, b, axes=1)
option2 = tf.linalg.matmul(a, b)
print(option1)
print(option2)
tf.Tensor(
[[ 58. 64.]
[139. 154.]], shape=(2, 2), dtype=float32)
tf.Tensor(
[[ 58. 64.]
[139. 154.]], shape=(2, 2), dtype=float32)
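Regarding the transpose_a idea from the question: the transpose flags of tf.linalg.matmul apply to arguments that are already at least 2-D and swap their last two dimensions, which is equivalent to transposing explicitly before the call. A short illustration (the shapes here are just for demonstration):
import tensorflow as tf
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[3, 2], dtype=tf.float32)
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2], dtype=tf.float32)
# a is (3, 2), so a^T is (2, 3) and a^T @ b is (2, 2); both lines print the same tensor.
print(tf.linalg.matmul(a, b, transpose_a=True))
print(tf.linalg.matmul(tf.transpose(a), b))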
Upvotes: 1