EgorCry

Reputation: 1

Deep Learning Classification CIFAR-10 Low Validation Accuracy

My validation accuracy after 5 epochs is about 0.07 and I cannot understand what I am doing wrong. I am trying to train an image classifier using an image data generator.

In[1]:

from tensorflow.keras.datasets import cifar10

In[2]:

(features_train, label_train), (features_test, label_test) = cifar10.load_data()

In[3]:

features_train.shape

In[4]:

features_test.shape

In[5]:

batch_size = 16
img_height = 32
img_width = 32

In[6]:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

In[7]:

train_img_gen = ImageDataGenerator(rescale=1./255,
                                  width_shift_range = 0.1,
                                  height_shift_range = 0.1,
                                  horizontal_flip = True)

In[8]:

val_img_gen = ImageDataGenerator(rescale = 1./255)

In[9]:

train_data_gen = train_img_gen.flow(features_train, label_train, batch_size = batch_size)

In[10]:

val_data_gen = train_img_gen.flow(features_test, label_test, batch_size = batch_size)

In[11]:

train_data_gen

In[12]:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

In[13]:

np.random.seed(8)
tf.random.set_seed(8)

In[14]:

model = tf.keras.Sequential([layers.Conv2D(64, 3, activation = 'relu', input_shape = (img_height, img_width, 3)),
                            layers.MaxPooling2D(),
                            layers.Conv2D(128, 3, activation='relu'),
                            layers.MaxPooling2D(),
                            layers.Flatten(),
                            layers.Dense(128, activation='relu'),
                            layers.Dense(10, activation='softmax')])

In[15]:

optimizer = tf.keras.optimizers.Adam(0.001)

In[16]:

model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

In[17]:

model.summary()

In[18]:

model.fit(train_data_gen, 
          steps_per_epoch=len(features_train)//batch_size, 
          epochs=5, 
          validation_data=val_data_gen,
          validation_steps=len(features_test)//batch_size)

Upvotes: 0

Views: 278

Answers (1)

MD Mushfirat Mohaimin

Reputation: 2066

The standard 'accuracy' metric does not work here because the model outputs a one-hot-style probability vector (one value per class from the softmax) while your labels are integer-encoded. So, try using sparse_categorical_accuracy when compiling your model, like this:

model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['sparse_categorical_accuracy'])
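Alternatively, a rough sketch of the other way around (not part of the code above): you could one-hot encode the labels with to_categorical, switch the loss to categorical_crossentropy, and keep the plain 'accuracy' metric; either way, the metric has to match the label encoding. The generators would then need to flow the one-hot labels (here using the non-augmenting val_img_gen from In[8] for the validation data):

from tensorflow.keras.utils import to_categorical

# One-hot encode the integer labels: shape (N, 1) -> (N, 10)
label_train_oh = to_categorical(label_train, num_classes=10)
label_test_oh = to_categorical(label_test, num_classes=10)

# Re-create the generators so they yield the one-hot labels
train_data_gen = train_img_gen.flow(features_train, label_train_oh, batch_size=batch_size)
val_data_gen = val_img_gen.flow(features_test, label_test_oh, batch_size=batch_size)

# With one-hot labels, categorical_crossentropy and plain 'accuracy' match
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])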

Also, it would be better to train for more epochs to achieve better performance. For example, 100 epochs:

model.fit(train_data_gen, 
          steps_per_epoch=len(features_train)//batch_size, 
          epochs=100, 
          validation_data=val_data_gen,
          validation_steps=len(features_test)//batch_size)
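If you go up to that many epochs, an optional sketch (my own addition, assuming the sparse_categorical_accuracy metric from above) is to add an EarlyStopping callback so training stops once validation accuracy stops improving and the best weights are kept:

from tensorflow.keras.callbacks import EarlyStopping

# Stop when validation accuracy has not improved for 10 epochs,
# and roll back to the weights from the best epoch
early_stop = EarlyStopping(monitor='val_sparse_categorical_accuracy',
                           patience=10,
                           restore_best_weights=True)

model.fit(train_data_gen,
          steps_per_epoch=len(features_train)//batch_size,
          epochs=100,
          validation_data=val_data_gen,
          validation_steps=len(features_test)//batch_size,
          callbacks=[early_stop])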

Another thing: the number of filters in your Conv2D layers seems too large for such a straightforward image classification task. It would also be better to use more convolution layers rather than more filters per layer; two stacked 3x3 convolutions before each pooling step give a larger receptive field with fewer parameters than one wide layer. This can still cause the model to overfit, so try adding a Dropout layer near the end to tackle that issue, like this:

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation = 'relu', input_shape = (img_height, img_width, 3)),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.25),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')])

Upvotes: 0
