Reputation: 447
If I do the following:
r = (x - mn) / std
where x is of shape (batchSize, 100), and mn and std are both of shape (1, 100).
Are the subtraction and division done pointwise? I would expect r to be of shape (batchSize, 100).
I cannot examine the shapes directly because using tf.keras.backend.batch_flatten obliterates the shapes.
For example:
x.shape
# TensorShape([Dimension(None), Dimension(314), Dimension(314), Dimension(8)])
x = K.batch_flatten(x)
# <tf.Tensor 'conv2d_1/activity_regularizer/Reshape_2:0' shape=(?, ?) dtype=float32>
x.shape
# TensorShape([Dimension(None), Dimension(None)])
Upvotes: 0
Views: 108
Reputation: 5565
Everything concerning Keras and TensorFlow is as NumPy-compatible as it could be. So let's have a look.
import numpy as np

x = np.array([1, 2, 3, 4, 5])
m = np.array([1, 1, 1, 1, 1])
n = np.array([5, 4, 3, 2, 1])
std = 10
m_times_n = m * n
# [5 4 3 2 1]
x_minus_mn = x - m_times_n
# [-4 -2 0 2 4]
r = x_minus_mn / std
# [-0.4 -0.2 0. 0.2 0.4]
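The arrays above all have the same 1D shape. To mirror the 2D shapes from your question, here is a minimal sketch of NumPy broadcasting (the shapes (batchSize, 100) and (1, 100) come from your post; a batch size of 4 and the random values are made up for illustration):
import numpy as np

batch_size = 4                        # stand-in for batchSize
x = np.random.rand(batch_size, 100)   # (batchSize, 100)
mn = np.random.rand(1, 100)           # (1, 100), broadcast over the batch axis
std = np.random.rand(1, 100) + 1.0    # (1, 100), +1 to avoid dividing by ~0

r = (x - mn) / std
print(r.shape)
# (4, 100) -- pointwise, with the (1, 100) arrays stretched along axis 0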
So they are pointwise. Now let's see what happens in TensorFlow:
import tensorflow as tf

tf.enable_eager_execution()
x = tf.constant([1,2,3,4,5])
m = tf.constant([1,1,1,1,1])
n = tf.constant([5,4,3,2,1])
std = tf.constant(10)
m_times_n = m * n
# tf.Tensor([5 4 3 2 1], shape=(5,), dtype=int32)
x_minus_mn = x - m_times_n
# tf.Tensor([-4 -2 0 2 4], shape=(5,), dtype=int32)
r = x_minus_mn / std
# tf.Tensor([-0.4 -0.2 0. 0.2 0.4], shape=(5,), dtype=float64)
Pointwise as well.
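The same broadcasting applies in TensorFlow for the 2D shapes from your question. A small sketch (again with a made-up batch size of 4, and eager execution enabled as above):
x = tf.ones((4, 100))           # (batchSize, 100)
mn = tf.zeros((1, 100))         # (1, 100)
std = tf.ones((1, 100)) * 2.0   # (1, 100)

r = (x - mn) / std
print(r.shape)
# (4, 100) -- the (1, 100) tensors broadcast over the batch dimension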
Also, in your post you mentioned that you have issues with tf.keras.backend.batch_flatten. The resulting (?, ?) shape is because of the way tf.keras.backend.batch_flatten works. Let's have a look:
# Assuming we have 5 images of size 320x320 with 3 channels
X = tf.ones((5, 320, 320, 3))
flatten = tf.keras.backend.batch_flatten(X)
flatten.shape
# (5, 307200)
Taken from the documentation:
Turn a nD tensor into a 2D tensor with same 0th dimension.
And we see exactly that: the 0th dimension (batch_size) has been kept, while all remaining dimensions were flattened into one, so the resulting tensor is 2D.
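In your graph-mode case the static shape comes out as (?, ?) presumably because batch_flatten reshapes using the tensor's dynamic shape (an assumption about the backend implementation). The runtime shape is still well defined; a minimal sketch of how to see it, using the 314x314x8 dimensions from your post:
import numpy as np

# In eager mode the flattened shape is concrete:
x = tf.ones((5, 314, 314, 8))
flat = tf.keras.backend.batch_flatten(x)
print(flat.shape)
# (5, 788768)

# Even when the static shape is lost in graph mode, the trailing
# dimension can be computed from the known input dims:
print(np.prod((314, 314, 8)))
# 788768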
Upvotes: 1