lmpeixoto

Reputation: 863

Keras binary classification squash output to zero/one

I have a feedforward DNN model with several layers that performs binary classification. The output layer is a single sigmoid unit and the loss function is binary_crossentropy. As predictions I expect a vector of zeros/ones, so I round the predictions, ravel them, and then use sklearn score functions to calculate the metrics (f1_score, roc_auc, precision, recall, mcc). The problem is that the prediction vector does not contain only zeros and ones as I intend. However, if I use an mse loss function it works as intended.

=> model creation function:

    def create_DNN_model(self, verbose=True):
        print("Creating DNN model")
        fundamental_parameters = ['dropout', 'output_activation', 'optimization', 'learning_rate',
                              'units_in_input_layer',
                              'units_in_hidden_layers', 'nb_epoch', 'batch_size']
        for param in fundamental_parameters:
            if self.parameters[param] is None:
                print("Parameter not set: " + param)
                return
        self.print_parameter_values()
        model = Sequential()
        # Input layer
        model.add(Dense(self.parameters['units_in_input_layer'], input_dim=self.feature_number, activation='relu'))
        model.add(BatchNormalization())
        model.add(Dropout(self.parameters['dropout']))
        # constructing all hidden layers
        for layer in self.parameters['units_in_hidden_layers']:
            model.add(Dense(layer, activation='relu'))
            model.add(BatchNormalization())
            model.add(Dropout(self.parameters['dropout']))
        # constructing the final layer
        model.add(Dense(1))
        model.add(Activation(self.parameters['output_activation']))
        if self.parameters['optimization'] == 'SGD':
            optim = SGD()
            optim.lr.set_value(self.parameters['learning_rate'])
        elif self.parameters['optimization'] == 'RMSprop':
            optim = RMSprop()
            optim.lr.set_value(self.parameters['learning_rate'])
        elif self.parameters['optimization'] == 'Adam':
            optim = Adam()
        elif self.parameters['optimization'] == 'Adadelta':
            optim = Adadelta()
        model.add(BatchNormalization())
        model.compile(loss='binary_crossentropy', optimizer=optim, metrics=[matthews_correlation])
        if self.verbose == 1: str(model.summary())
        print("DNN model sucessfully created")
        return model

=> the evaluation function:

    def evaluate_model(self, X_test, y_test):
        print("Evaluating model with hold out test set.")
        y_pred = self.model.predict(X_test)
        y_pred = [float(np.round(x)) for x in y_pred]
        y_pred = np.ravel(y_pred)
        scores = dict()
        scores['roc_auc'] = roc_auc_score(y_test, y_pred)
        scores['accuracy'] = accuracy_score(y_test, y_pred)
        scores['f1_score'] = f1_score(y_test, y_pred)
        scores['mcc'] = matthews_corrcoef(y_test, y_pred)
        scores['precision'] = precision_score(y_test, y_pred)
        scores['recall'] = recall_score(y_test, y_pred)
        scores['log_loss'] = log_loss(y_test, y_pred)
        for metric, score in scores.items():
            print(metric + ': ' + str(score))
        return scores

=> the predicted vector 'y_pred':

[-1. -1.  2. -0.  2. -1. -1. -1.  2. -1. -1.  2. -1.  2. -1.  2. -1. -1.  2. -1.  2. -1. -1.  2. -1.  2.  2.  2. -1. -1.  2.  2.  2.  2. -1. -1. 2.  2.  2. -1.  2.  2. -1.  2. -1. -1. -1.  1. -1. -1. -1.]

Thanks in advance.

Upvotes: 2

Views: 1248

Answers (1)

Lukasz Tracewski

Reputation: 11377

You are using a linear activation (the default) in the output layer, whereas you should use a sigmoid.
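
As a minimal sketch (the layer sizes and feature count below are illustrative placeholders, not taken from your code), give the final Dense layer an explicit sigmoid activation so the output is squashed into [0, 1]:

    from keras.models import Sequential
    from keras.layers import Dense

    n_features = 20  # placeholder for your actual feature count

    model = Sequential()
    model.add(Dense(32, input_dim=n_features, activation='relu'))
    # Explicit sigmoid on the single output unit keeps predictions in [0, 1]
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')

With a sigmoid output, model.predict returns probabilities in [0, 1], so np.round followed by np.ravel gives you the 0/1 vector you expect for the sklearn metrics.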

Upvotes: 1
