DevLoverUmar

Reputation: 14011

Why am I getting zero accuracy in my Keras binary classification model?

I have a Keras Sequential model that takes its inputs from CSV files. When I run the model, its accuracy remains zero even after 20 epochs.

I have gone through these two Stack Overflow threads (zero-accuracy-training and why-is-the-accuracy-for-my-keras-model-always-0), but neither solved my problem.

My model is a binary classifier, so it should not behave like a regression model where the accuracy metric becomes meaningless. Here is the model:

import pathlib

import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers


def preprocess(*fields):
    return tf.stack(fields[:-1]), tf.stack(fields[-1:])  # x, y


csvs = sorted(str(p) for p in pathlib.Path('.').glob("My_Dataset/*/*/*.csv"))

# `defaults` (one default value per CSV column) is defined elsewhere, not shown here
data_set = tf.data.experimental.CsvDataset(
    csvs, record_defaults=defaults, compression_type=None, buffer_size=None,
    header=True, field_delim=',', use_quote_delim=True, na_value=""
)
print(type(data_set))

#Output: <class 'tensorflow.python.data.experimental.ops.readers.CsvDatasetV2'>

data_set.take(1)

#Output: <TakeDataset shapes: ((), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), (), ()), types: (tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32, tf.float32)>

validate_ds = data_set.map(preprocess).take(10).batch(100).repeat()
train_ds = data_set.map(preprocess).skip(10).take(90).batch(100).repeat()

model = tf.keras.Sequential([
    layers.Dense(256,activation='elu'),  
    layers.Dense(128,activation='elu'),  
    layers.Dense(64,activation='elu'),  
    layers.Dense(1,activation='sigmoid') 
])


model.compile(optimizer='adam',
            loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
            metrics=['accuracy'])    #have to find the related evaluation metrics


model.fit(train_ds,
        validation_data=validate_ds,
        validation_steps=5,
        steps_per_epoch= 5,
        epochs=20,
        verbose=1
        )

What am I doing wrong?

Upvotes: 1

Views: 1845

Answers (3)

DevLoverUmar

Reputation: 14011

With the help of the other answers by Nikaido and Timbus Calin, I made a minor change and the problem is fixed.

def preprocess(*fields):
    features=tf.stack(fields[:-1])
    labels=tf.stack([int(x) for x in fields[-1:]])
    return features,labels  # x, y

I just changed the class-label data type to int in preprocessing, so that the model works as a classifier.
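Note that plain int() relies on eager values, and the fields inside Dataset.map are traced as symbolic tensors, so a variant of the same fix using tf.cast may be safer. A minimal sketch, assuming the label is the last CSV column as in the question:

def preprocess(*fields):
    features = tf.stack(fields[:-1])
    # Cast the float label column to an integer class label (0 or 1)
    labels = tf.cast(tf.stack(fields[-1:]), tf.int32)
    return features, labels  # x, y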

Upvotes: 1

Nikaido

Reputation: 4629

Are you sure that yours is a classification task?

As I can see from your target variable, the one you extract from the CSV, its type is float:

#Output: <TakeDataset shapes: ((), (), ..., tf.float32)>

If it's a binary classification task, also check that the target values are 0s and 1s; otherwise the model will perform poorly.

Something like this:

[0, 1, 0, 1, 0, 0, 0 ..., 1]

Cross-entropy works with labels of 0 and 1.

That's also the reason you use the sigmoid as the activation function: it outputs values in the range [0, 1].

Also, as already suggested, you should set from_logits=False.
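A quick way to check the label values, as a sketch assuming data_set and preprocess are defined as in the question:

# Peek at a few mapped examples to confirm the labels are only 0s and 1s
for features, label in data_set.map(preprocess).take(5):
    print(label.numpy())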

Upvotes: 1

Timbus Calin

Reputation: 15043

The problem is here:

model = tf.keras.Sequential([
    layers.Dense(256,activation='elu'),  
    layers.Dense(128,activation='elu'),  
    layers.Dense(64,activation='elu'),  
    layers.Dense(1,activation='sigmoid') 
])


model.compile(optimizer='adam',
              #Here is the problem
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])    #Have to find the related evaluation metrics

You have two solutions:

  1. Either set from_logits=False (and keep the sigmoid activation),

  2. Or use layers.Dense(1) without an activation and keep from_logits=True.

This is the reason for the problem: from_logits=True tells the loss that no activation function has been applied to the output, so it expects raw logits rather than sigmoid probabilities.
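For completeness, a minimal sketch of both options, reusing the layers from the question:

# Option 1: keep the sigmoid output and tell the loss it receives probabilities
model = tf.keras.Sequential([
    layers.Dense(256, activation='elu'),
    layers.Dense(128, activation='elu'),
    layers.Dense(64, activation='elu'),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
              metrics=['accuracy'])

# Option 2: output raw logits and let the loss apply the sigmoid internally
model = tf.keras.Sequential([
    layers.Dense(256, activation='elu'),
    layers.Dense(128, activation='elu'),
    layers.Dense(64, activation='elu'),
    layers.Dense(1)  # no activation: raw logits
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])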

Upvotes: 1
