user42388

Reputation: 183

Tensor math with TensorFlow backend

I was trying to add custom metrics while training my LSTM with Keras. See the code below:

from keras.models import Sequential
from keras.layers import Dense, LSTM, Masking, Dropout
from keras.optimizers import SGD, Adam, RMSprop
import keras.backend as K
import numpy as np

_Xtrain = np.random.rand(1000,21,47)
_ytrain = np.random.randint(2, size=1000)

_Xtest = np.random.rand(200,21,47)
_ytest = np.random.randint(2, size=200)

def t1(y_pred, y_true):
    return K.tf.count_nonzero((1 - y_true))

def t2(y_pred, y_true):
    return K.tf.count_nonzero(y_true)

def build_model():
    model = Sequential()
    model.add(Masking(mask_value=0, input_shape=(21, _Xtrain[0].shape[1])))
    model.add(LSTM(32, return_sequences=True))
    model.add(LSTM(64, return_sequences=False))
    model.add(Dense(1, activation='sigmoid'))
    rms = RMSprop(lr=.001, decay=.001)
    model.compile(loss='binary_crossentropy', optimizer=rms, metrics=[t1, t2])
    return model

model = build_model()

hist = model.fit(_Xtrain, _ytrain, epochs=1, batch_size=5, validation_data=(_Xtest, _ytest), shuffle=True)

The output of the above code is as follows:

Train on 1000 samples, validate on 200 samples
Epoch 1/1
1000/1000 [==============================] - 5s - loss: 0.6958 - t1: 5.0000 - t2: 5.0000 - val_loss: 0.6975 - val_t1: 5.0000 - val_t2: 5.0000

So it appears that both metrics t1 and t2 produce exactly the same output, which baffles me. What could be going wrong, and how can I get the complementary tensor to y_true?

Backstory: I was trying to write a custom metric (the F1 score in particular) for my model, which Keras does not seem to provide out of the box. If anyone knows a better way, please point me in the right direction.

Upvotes: 1

Views: 173

Answers (1)

Boudewijn Aasman

Reputation: 1256

One easy way to handle this issue is to use a callback instead. Following the logic from this issue, you could specify a metrics callback that calculates any metric using scikit-learn. For example, if you wanted to calculate the F1 score, you could do the following:

from keras.models import Sequential
from keras.layers import Dense, LSTM, Masking, Dropout
from keras.optimizers import SGD, Adam, RMSprop
import keras.backend as K
from keras.callbacks import Callback
import numpy as np

from sklearn.metrics import f1_score

_Xtrain = np.random.rand(1000,21,47)
_ytrain = np.random.randint(2, size=1000)

_Xtest = np.random.rand(200,21,47)
_ytest = np.random.randint(2, size=200)

class MetricsCallback(Callback):
    def __init__(self, train_data, validation_data):
        super().__init__()
        self.validation_data = validation_data
        self.train_data = train_data  # stored in case you also want to score the training set
        self.f1_scores = []
        self.cutoff = .5  # probability threshold for turning sigmoid outputs into class labels

    def on_epoch_end(self, epoch, logs=None):
        X_val = self.validation_data[0]
        y_val = self.validation_data[1]

        # predict probabilities on the validation set, threshold them,
        # and compute the epoch's F1 score with scikit-learn
        preds = self.model.predict(X_val)

        f1 = f1_score(y_val, (preds > self.cutoff).astype(int))
        self.f1_scores.append(f1)


def build_model():
    model = Sequential()
    model.add(Masking(mask_value=0, input_shape=(21, _Xtrain[0].shape[1])))
    model.add(LSTM(32, return_sequences=True))
    model.add(LSTM(64, return_sequences=False))
    model.add(Dense(1, activation='sigmoid'))
    rms = RMSprop(lr=.001, decay=.001)
    model.compile(loss='binary_crossentropy', optimizer=rms, metrics=['acc'])
    return model

model = build_model()

metrics_callback = MetricsCallback((_Xtrain, _ytrain), (_Xtest, _ytest))
hist = model.fit(_Xtrain, _ytrain, epochs=2, batch_size=5, validation_data=(_Xtest, _ytest), shuffle=True,
                 callbacks=[metrics_callback])
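
Since the callback stores each epoch's score in its f1_scores list, keeping a reference to it (the metrics_callback variable above) lets you inspect the scores after training:

print(metrics_callback.f1_scores)  # one F1 score per epoch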

Upvotes: 1
