user8488890


Train a model using LSTM and Keras

I have an input data like this:

x_train = [
    [0,0,0,1,-1,-1,1,0,1,0,...,0,1,-1],
    [-1,0,0,-1,-1,0,1,1,1,...,-1,-1,0]
    ...
    [1,0,0,1,1,0,-1,-1,-1,...,-1,-1,0]
]
y_train = [1,1,1,0,-1,-1,-1,0,1...,0,1]

It is an array of arrays, where each inner array has size 83, and y_train holds the label for each of these arrays, so len(x_train) is equal to len(y_train). I used Keras with the Theano backend to train on this data with the following code:

def train(x, y, x_test, y_test):
    x_train = np.array(x)
    y_train = np.array(y)
    print x_train.shape
    print y_train.shape
    model = Sequential()
    model.add(Embedding(x_train.shape[0], output_dim=256))
    model.add(LSTM(128))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy',
            optimizer='rmsprop',
            metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=16)
    score = model.evaluate(x_test, y_test, batch_size=16)
    print score

but my network does not fit the data, and the result is:

Epoch 1/10
1618/1618 [==============================] - 4s - loss: -1.6630 - acc: 0.0043     
Epoch 2/10
1618/1618 [==============================] - 4s - loss: -2.5033 - acc: 0.0012         
Epoch 3/10
1618/1618 [==============================] - 4s - loss: -2.6150 - acc: 0.0012         
Epoch 4/10
1618/1618 [==============================] - 4s - loss: -2.6297 - acc: 0.0012         
Epoch 5/10
1618/1618 [==============================] - 4s - loss: -2.5731 - acc: 0.0012            
Epoch 6/10
1618/1618 [==============================] - 4s - loss: -2.6042 - acc: 0.0012         
Epoch 7/10
1618/1618 [==============================] - 4s - loss: -2.6257 - acc: 0.0012          
Epoch 8/10
1618/1618 [==============================] - 4s - loss: -2.6303 - acc: 0.0012         
Epoch 9/10
1618/1618 [==============================] - 4s - loss: -2.6296 - acc: 0.0012         
Epoch 10/10
1618/1618 [==============================] - 4s - loss: -2.6298 - acc: 0.0012         
283/283 [==============================] - 0s     
[-2.6199024279631482, 0.26501766742328875]

I want to train on this data and get a good result.

Upvotes: 2

Views: 211

Answers (1)

DJK

Reputation: 9274

A negative loss should throw a HUGE red flag. Loss should always be a positive number, approaching zero as the model improves. You stated your labels are:

y_train = [1,1,1,0,-1,-1,-1,0,1...,0,1]

Since your loss is binary_crossentropy, I have to assume the objective is a two-class classification problem. But when you look at the y values you have -1, 0, and 1, which suggests three classes. That is a big problem: for binary classification you should only have 1's and 0's, so you need to correct your data. I know nothing about the data, so I cannot help condense it to two classes. The -1's are the reason for the negative loss: the sigmoid activation (the CDF of the logistic distribution) squashes the output to the range 0-1, and binary cross-entropy is only well-defined for targets in that range, so your classes must sit at either end of this function.
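To see why the -1 targets drive the loss negative, here is a small sketch of the binary cross-entropy formula in plain NumPy (the probability values are chosen purely for illustration):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred):
    # Standard formula: -(y*log(p) + (1-y)*log(1-p)), defined for y in {0, 1}
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(binary_crossentropy(1, 0.9))   # valid target: small positive loss (~0.105)
print(binary_crossentropy(0, 0.9))   # valid target: large positive loss (~2.303)
print(binary_crossentropy(-1, 0.1))  # invalid -1 target: NEGATIVE loss (~-2.092)
```

With y = -1 the formula can go below zero, and the optimizer can then drive the loss toward minus infinity by pushing predictions toward 0 for those samples, which is why training diverges instead of converging.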


EDIT

From the description in the comments below, I would suggest a three-class structure. Below is a sample of output labels converted to categorical (one-hot) values:

import numpy as np
from keras.utils import to_categorical

y_train = np.random.randint(-1,2,(10))

print(y_train)

[-1  0 -1 -1 -1  0 -1  1  1  0]

print(to_categorical(y_train,num_classes=3))

[[ 0.  0.  1.]
 [ 1.  0.  0.]
 [ 0.  0.  1.]
 [ 0.  0.  1.]
 [ 0.  0.  1.]
 [ 1.  0.  0.]
 [ 0.  0.  1.]
 [ 0.  1.  0.]
 [ 0.  1.  0.]
 [ 1.  0.  0.]]

Now each possible output is stored in a separate column. You can see how -1, 0, and 1 are each assigned a binary vector, i.e. -1 = [0. 0. 1.], 0 = [1. 0. 0.], and 1 = [0. 1. 0.] (the -1 wraps around to the last column).
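If you later need to map one-hot predictions back to the original labels, an argmax plus a small lookup table undoes the encoding. This is just a convenience sketch (the lookup array is my own naming, not part of Keras); the column order matches the mapping above:

```python
import numpy as np

# Rows as produced by to_categorical above; columns correspond to labels 0, 1, -1
onehot = np.array([[0., 0., 1.],   # encodes -1
                   [1., 0., 0.],   # encodes  0
                   [0., 1., 0.]])  # encodes  1
index_to_label = np.array([0, 1, -1])
labels = index_to_label[np.argmax(onehot, axis=1)]
print(labels)  # [-1  0  1]
```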

Now you just need to update the loss function, the number of output nodes, and the activation on the output layer:

model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy',
        optimizer='rmsprop',
        metrics=['accuracy'])
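Putting the pieces together, a rough end-to-end sketch might look like the following. This is only an outline under assumptions: the layer sizes mirror the question, the inputs are shifted from {-1, 0, 1} to {0, 1, 2} so they are valid Embedding indices (the first Embedding argument is the number of distinct input values, not the number of samples), and both label arrays are one-hot encoded.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dropout, Dense
from keras.utils import to_categorical

def train(x, y, x_test, y_test):
    # Shift inputs so {-1, 0, 1} become valid embedding indices {0, 1, 2}
    x_train = np.array(x) + 1
    x_test = np.array(x_test) + 1
    # One-hot encode the {-1, 0, 1} labels into three columns
    y_train = to_categorical(np.array(y), num_classes=3)
    y_test = to_categorical(np.array(y_test), num_classes=3)

    model = Sequential()
    model.add(Embedding(3, output_dim=256))  # 3 distinct input values
    model.add(LSTM(128))
    model.add(Dropout(0.5))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=16, epochs=10)
    return model.evaluate(x_test, y_test, batch_size=16)
```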

Upvotes: 1
