Reputation: 6538
Imagine a tensor with dimensions (32, 20, 3),
where batch_size = 32, num_steps = 20, and features = 3. The features are taken from a .csv file in the following format:
feat1, feat2, feat3
200, 100, 0
5.5, 200, 0.5
23.2, 1, 9.3
Each row is transformed into a 3-dim vector (numpy array): [200, 100, 0], [5.5, 200, 0.5], [23.2, 1, 9.3].
We want to use these features in a recurrent neural network, but feeding them into the RNN directly won't do; we'd like to first apply a linear transformation to each 3-dim vector inside the batch sample, reshaping the input tensor into (32, 20, 100).
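The shape transformation we're after can be sketched in NumPy; the weight names W and b here are illustrative stand-ins for the learned parameters of the linear layer:

```python
import numpy as np

batch_size, num_steps, features, hidden = 32, 20, 3, 100

x = np.random.randn(batch_size, num_steps, features)  # input batch (32, 20, 3)
W = np.random.randn(features, hidden)                 # shared linear weights (3, 100)
b = np.zeros(hidden)                                  # shared bias (100,)

# The same linear map is applied to every 3-dim feature vector;
# matmul broadcasts over the batch and time dimensions.
out = x @ W + b
print(out.shape)  # (32, 20, 100)
```

The key point is that the weights are shared across all time steps and batch elements.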
This is easily done in Torch, for example via nn.MapTable():add(nn.Linear(3, 100)),
which is applied to an input batch tensor of size 20 x 32 x 3
(num_steps and batch_size are swapped in Torch). It splits the input into 20 tables, each of size 32x3:
1 : DoubleTensor - size: 32x3
2 : DoubleTensor - size: 32x3
3 : DoubleTensor - size: 32x3
...
and applies nn.Linear(3, 100) to transform each into a 32x100 tensor. These are then packed back into a 20 x 32 x 100 tensor. How can we implement the same operation in TensorFlow?
Upvotes: 2
Views: 1297
Reputation: 2072
You could reshape into [batch_size * num_steps, features], apply a TensorFlow dense layer with 100 units, and then reshape back:
reshaped_tensor = tf.reshape(your_input, [batch_size * num_steps, features])
linear_out = tf.layers.dense(inputs=reshaped_tensor, units=100)
reshaped_back = tf.reshape(linear_out, [batch_size, num_steps, 100])
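That works: because tf.reshape is row-major, flattening (batch, step) into one axis, applying a single dense layer, and unflattening gives the same result as applying the layer to each time step separately (which is what nn.MapTable + nn.Linear does in Torch). A NumPy sketch, with W and b standing in for the dense layer's learned weights:

```python
import numpy as np

batch_size, num_steps, features, units = 32, 20, 3, 100
x = np.random.randn(batch_size, num_steps, features)
W = np.random.randn(features, units)
b = np.random.randn(units)

# Answer's approach: flatten batch and time, apply one linear layer, unflatten.
flat = x.reshape(batch_size * num_steps, features)
out = (flat @ W + b).reshape(batch_size, num_steps, units)

# Per-step application, as in Torch's MapTable of Linear layers.
per_step = np.stack([x[:, t, :] @ W + b for t in range(num_steps)], axis=1)

print(np.allclose(out, per_step))  # True
```

Row-major reshape maps element (b, t) to flat row b * num_steps + t and back, so no feature vector is ever mixed with another.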
Upvotes: 2