Reputation: 139
I've recently started working with machine learning using TensorFlow in a Google Colab notebook, building a network to classify images of food.
My dataset comprises exactly 101,000 images in 101 classes - 1,000 images per class. I developed the network following this TensorFlow blog post.
The code I have developed is as follows:
# imports needed by the snippet below
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing import image_dataset_from_directory

# image dimensions
batch_size = 32
img_height = 50
img_width = 50

# 80% for training, 20% for validating
# data_dir points at the dataset root, with one subdirectory per class
train_ds = image_dataset_from_directory(data_dir,
                                        shuffle=True,
                                        validation_split=0.2,
                                        subset="training",
                                        seed=123,
                                        batch_size=batch_size,
                                        image_size=(img_height, img_width))
val_ds = image_dataset_from_directory(data_dir,
                                      shuffle=True,
                                      validation_split=0.2,
                                      subset="validation",
                                      seed=123,
                                      batch_size=batch_size,
                                      image_size=(img_height, img_width))
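For reference, a quick sanity check (just a sketch, not part of the model) confirms that the 80/20 split picks up all 101 classes before the datasets are cached:

# check the split: class count and number of batches per subset
class_names = train_ds.class_names
print(len(class_names))                                    # expected: 101
print(tf.data.experimental.cardinality(train_ds).numpy())  # training batches
print(tf.data.experimental.cardinality(val_ds).numpy())    # validation batches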
# autotuning, configuring for performance
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
# data augmentation layer
data_augmentation = keras.Sequential(
    [
        layers.experimental.preprocessing.RandomFlip("horizontal",
                                                     input_shape=(img_height,
                                                                  img_width,
                                                                  3)),
        layers.experimental.preprocessing.RandomRotation(0.1),
        layers.experimental.preprocessing.RandomZoom(0.1),
    ]
)
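As a visual check (a sketch using matplotlib, not part of the network itself), the augmentation layer can be applied to a single batch to confirm the flips, rotations and zooms look reasonable:

import matplotlib.pyplot as plt

# show nine augmented variants of the first image in one training batch
for images, _ in train_ds.take(1):
    plt.figure(figsize=(6, 6))
    for i in range(9):
        augmented = data_augmentation(images, training=True)  # training=True forces the random ops
        plt.subplot(3, 3, i + 1)
        plt.imshow(augmented[0].numpy().astype("uint8"))
        plt.axis("off")
    plt.show()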
# network definition
num_classes = 101

model = Sequential([
    data_augmentation,
    layers.experimental.preprocessing.Rescaling(1./255),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(256, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Dropout(0.2),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes, activation='softmax')
])
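To inspect the layer output shapes and parameter counts, the model can be built explicitly and summarised (a sanity check only, not part of the training code):

# build with the expected input shape so summary() reports output shapes
model.build(input_shape=(None, img_height, img_width, 3))
model.summary()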
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
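The training call itself is not shown above; it is the standard fit() on the two datasets, roughly:

# train for 500 epochs, validating on the held-out 20%
epochs = 500
history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs)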
After training for 500 epochs, the accuracy seems to be increasing incredibly slowly:
epoch 100: 2525/2525 - 19s 8ms/step - loss: 2.8151 - accuracy: 0.3144 - val_loss: 3.1659 - val_accuracy: 0.2549
epoch 500: 2525/2525 - 21s 8ms/step - loss: 2.7349 - accuracy: 0.0333 - val_loss: 3.1260 - val_accuracy: 0.2712
I have tried several variations of the above, and so far this code offers the best results, but I still wonder:
Is this behaviour expected? Is it a result of having such a big dataset? Or is there any flaw in my code that's possibly hindering the learning process?
Upvotes: 0
Views: 1193