Manngo

Reputation: 829

Multi dimensional input for LSTM in Keras

I would like to understand how an RNN, specifically an LSTM, works with multiple input dimensions using Keras and TensorFlow. By that I mean an input shape of (batch_size, timesteps, input_dim) where input_dim > 1.
I think the images below illustrate the concept of an LSTM quite well when input_dim = 1.
Does this mean that if input_dim > 1, x is no longer a single value but an array? And if so, do the weights also become arrays of the same shape as x, plus the context?

[Image: LSTM structure]
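
For concreteness, here is a small sketch of what I mean by such an input (the sizes are arbitrary):

import numpy as np

# 4 sequences, 10 timesteps each, 3 features per timestep
batch_size, timesteps, input_dim = 4, 10, 3
x = np.random.random((batch_size, timesteps, input_dim))

# At each timestep the LSTM receives a vector of length input_dim,
# i.e. x[sample, t] has shape (input_dim,), not a single scalar.
print(x.shape)        # (4, 10, 3)
print(x[0, 0].shape)  # (3,)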

Upvotes: 16

Views: 6208

Answers (2)

Rachin

Reputation: 1

Let's update PSL's answer below to TensorFlow 2.

import tensorflow as tf

units, timesteps, features = 1, 1, 2  # example sizes

lstm = tf.keras.layers.LSTM(units, input_shape=(timesteps, features))
model = tf.keras.Sequential([lstm])
model.summary()

free_params = (4 * features * units) + (4 * units * units) + (4 * units)
print('free_params', free_params)

# TF2 keeps the four gate kernels concatenated in cell.kernel, shape
# (features, 4 * units), ordered i, f, c, o; slice out the c block to
# mirror the old kernel_c attribute.
kernel_c = lstm.cell.kernel[:, 2 * units: 3 * units]
bias_c = lstm.cell.bias[2 * units: 3 * units]
print('kernel_c', kernel_c.shape)
print('bias_c', bias_c.shape)

With this code, you get the same result in TensorFlow 2.x as well.
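
As a quick sanity check, the hand-computed count can be compared against Keras's own tally via model.count_params() (a small sketch, sizes arbitrary):

import tensorflow as tf

units, timesteps, features = 1, 1, 2

lstm = tf.keras.layers.LSTM(units, input_shape=(timesteps, features))
model = tf.keras.Sequential([lstm])

free_params = (4 * features * units) + (4 * units * units) + (4 * units)
assert free_params == model.count_params()  # 16 for these sizes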

Upvotes: 0

PSL

Reputation: 252

Keras creates a computational graph that executes the sequence in your bottom picture per feature (but for all units). That means the state value C is always a scalar, one per unit. It does not process features at once, it processes units at once, and features separately.

import keras.models as kem
import keras.layers as kel

# units, timesteps, and features take the example values given below
model = kem.Sequential()
lstm = kel.LSTM(units, input_shape=(timesteps, features))
model.add(lstm)
model.summary()

# kernel + recurrent kernel + bias, each with 4 gate blocks (f, i, c, o)
free_params = (4 * features * units) + (4 * units * units) + (4 * units)
print('free_params', free_params)
print('kernel_c', lstm.kernel_c.shape)
print('bias_c', lstm.bias_c.shape)

where 4 represents one for each of the f, i, c, and o internal paths in your bottom picture. The first term is the number of weights for the kernel, the second term for the recurrent kernel, and the last one for the bias, if applied. For

units = 1
timesteps = 1
features = 1

we see

Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 1)                 12
=================================================================
Total params: 12.0
Trainable params: 12
Non-trainable params: 0.0
_________________________________________________________________
free_params 12
kernel_c (1, 1)
bias_c (1,)
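
Checking against the formula: (4 · 1 · 1) + (4 · 1 · 1) + (4 · 1) = 4 + 4 + 4 = 12, matching the reported parameter count.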

and for

units = 1
timesteps = 1
features = 2

we see

Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 1)                 16
=================================================================
Total params: 16.0
Trainable params: 16
Non-trainable params: 0.0
_________________________________________________________________
free_params 16
kernel_c (2, 1)
bias_c (1,)
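
Again checking against the formula: (4 · 2 · 1) + (4 · 1 · 1) + (4 · 1) = 8 + 4 + 4 = 16; only the kernel term grows with the number of features.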

where bias_c is a proxy for the output shape of the state C. Note that there are different implementations regarding the internal structure of the unit; details are described here (http://deeplearning.net/tutorial/lstm.html), and the default implementation uses Eq. 7. Hope this helps.
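
To see this concretely, here is a minimal NumPy sketch of a single LSTM step under the standard formulation (the weights are random stand-ins and the sizes are arbitrary, so this is an illustration, not Keras's actual code path); the cell state c always has shape (units,), no matter how many features go in:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

features, units = 3, 2          # hypothetical sizes
rng = np.random.default_rng(0)

W = rng.normal(size=(features, 4 * units))  # kernel (input weights), gates i, f, c, o
U = rng.normal(size=(units, 4 * units))     # recurrent kernel
b = np.zeros(4 * units)                     # bias

x = rng.normal(size=(features,))  # one timestep of a multi-feature input
h = np.zeros(units)               # previous hidden state
c = np.zeros(units)               # previous cell state

z = x @ W + h @ U + b             # all four gate pre-activations at once
i, f, g, o = np.split(z, 4)       # input, forget, candidate, output
c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
h = sigmoid(o) * np.tanh(c)

print(c.shape)  # (2,): one scalar state per unit, independent of features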

Upvotes: 3
