cnapun

Reputation: 93

LSTM won't overfit training data

I have been trying to use an LSTM for regression in TensorFlow, but it doesn't fit the data. I have successfully fit the same data in Keras with a network of the same size. My code for trying to overfit a sine wave is below:

import tensorflow as tf
import numpy as np

yt = np.cos(np.linspace(0, 2*np.pi, 256))
xt = np.array([yt[i-50:i] for i in range(50, len(yt))])[...,None]
yt = yt[-xt.shape[0]:]

g = tf.Graph()
with g.as_default():
    x = tf.constant(xt, dtype=tf.float32)
    y = tf.constant(yt, dtype=tf.float32)

    lstm = tf.nn.rnn_cell.BasicLSTMCell(32)
    outputs, state = tf.nn.dynamic_rnn(lstm, x, dtype=tf.float32)
    pred = tf.layers.dense(outputs[:,-1], 1)
    loss = tf.reduce_mean(tf.square(pred-y))
    train_op = tf.train.AdamOptimizer().minimize(loss)
    init = tf.global_variables_initializer()

sess = tf.InteractiveSession(graph=g)
sess.run(init)

for i in range(200):
    _, l = sess.run([train_op, loss])
print(l)

This results in an MSE of 0.436067 (while Keras got to 0.0022 after 50 epochs), and the predictions range from -0.1860 to -0.1798. What am I doing wrong here?

Edit: When I change my loss function to the following, the model fits properly:

def pinball(y_true, y_pred):
    tau = np.arange(1,100).reshape(1,-1)/100
    pin = tf.reduce_mean(tf.maximum(y_true[:,None] - y_pred, 0) * tau +
                         tf.maximum(y_pred - y_true[:,None], 0) * (1 - tau))
    return pin

I also change the assignments of pred and loss to

pred = tf.layers.dense(outputs[:,-1], 99)
loss = pinball(y, pred)

With this change the loss decreases from 0.3 to 0.003 as it trains, and the model seems to fit the data properly.

Upvotes: 2

Views: 583

Answers (2)

Allen Lavoie

Reputation: 5808

Looks like a shape/broadcasting issue. Here's a working version:

import tensorflow as tf
import numpy as np

yt = np.cos(np.linspace(0, 2*np.pi, 256))
xt = np.array([yt[i-50:i] for i in range(50, len(yt))])
yt = yt[-xt.shape[0]:]

g = tf.Graph()
with g.as_default():
    x = tf.constant(xt, dtype=tf.float32)
    y = tf.constant(yt, dtype=tf.float32)

    lstm = tf.nn.rnn_cell.BasicLSTMCell(32)
    outputs, state = tf.nn.dynamic_rnn(lstm, x[None, ...], dtype=tf.float32)
    pred = tf.squeeze(tf.layers.dense(outputs, 1), axis=[0, 2])
    loss = tf.reduce_mean(tf.square(pred-y))
    train_op = tf.train.AdamOptimizer().minimize(loss)
    init = tf.global_variables_initializer()

sess = tf.InteractiveSession(graph=g)
sess.run(init)

for i in range(200):
    _, l = sess.run([train_op, loss])
print(l)

x gets a batch dimension of 1 before going into dynamic_rnn, since with time_major=False the first dimension is expected to be the batch dimension. It's also important that the last dimension of the output of tf.layers.dense be squeezed off so that it doesn't silently broadcast against y: with the 206 windows here, TensorShape([206, 1]) and TensorShape([206]) broadcast to TensorShape([206, 206]) (a short sketch of this is shown after the result). With those fixes it converges:

5.78507e-05
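As a quick illustration of that broadcast, here is a minimal sketch (the zero tensors are just stand-ins for the real prediction and target tensors):

import tensorflow as tf

a = tf.zeros([206, 1])   # un-squeezed dense output: one prediction per window
b = tf.zeros([206])      # y: one target per window
print((a - b).shape)     # (206, 206): every prediction is differenced against every target

Taking the mean of the squared values of that 206x206 matrix pushes every prediction toward the mean of y, which is consistent with the nearly constant predictions reported in the question.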

Upvotes: 3

VS_FF

Reputation: 2363

You are not passing the state on from one call of dynamic_rnn to the next. That's the problem for sure.

Also, why take only the last item of the output through the dense layer and onward?
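
For reference, here is a minimal sketch of what carrying the LSTM state across successive dynamic_rnn runs could look like with this TF 1.x API (the placeholders c_in/h_in, the dummy data, and the chunking loop are illustrative only, not taken from the question's code):

import tensorflow as tf
import numpy as np

lstm = tf.nn.rnn_cell.BasicLSTMCell(32)
x = tf.placeholder(tf.float32, [1, None, 1])                # [batch, time, features]
c_in = tf.placeholder(tf.float32, [1, lstm.state_size.c])   # previous cell state
h_in = tf.placeholder(tf.float32, [1, lstm.state_size.h])   # previous hidden state
state_in = tf.nn.rnn_cell.LSTMStateTuple(c_in, h_in)

outputs, state_out = tf.nn.dynamic_rnn(lstm, x, initial_state=state_in)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    c = np.zeros((1, 32), np.float32)
    h = np.zeros((1, 32), np.float32)
    data = np.random.randn(1, 100, 1).astype(np.float32)    # dummy sequence
    for chunk in np.split(data, 4, axis=1):                  # 4 chunks of 25 steps
        out, (c, h) = sess.run([outputs, state_out],
                               feed_dict={x: chunk, c_in: c, h_in: h})

Feeding the LSTMStateTuple back in through placeholders is the usual TF 1.x pattern for keeping state between session runs.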

Upvotes: 0
