Houssem Khatrchi

Reputation: 1

I get different results for the same Keras model

I trained a VGG16 with ImageNet weights to classify images into 4 classes.

Train data: 3578 images belonging to 4 classes. Validation data: 894 images belonging to 4 classes.

Each time I run the code, I get one of two accuracy values: val_acc: 0.3364 in the first run and val_acc: 1.0000 in the second run.

Is there any explanation for this? The difference between the results is far too large.

    from keras import models, layers
    from keras.applications import VGG16
    from keras.preprocessing.image import ImageDataGenerator

    train_dir = 'C:/Users/ucduq/Desktop/output1/train'
    validation_dir = 'C:/Users/ucduq/Desktop/output1/val'

    IMAGE_WIDTH = 150
    IMAGE_HEIGHT = 150
    BATCH_SIZE = 32
    input_shape = (150, 150, 3)

    training_data_generator = ImageDataGenerator(
        rescale=1./255,
        #rotation_range=90,
        #horizontal_flip=True,
        #vertical_flip=True,
        #shear_range=0.9,
        #zoom_range=0.9
    )

    validation_data_generator = ImageDataGenerator(rescale=1./255)

    training_generator = training_data_generator.flow_from_directory(
        train_dir,
        target_size=(IMAGE_WIDTH, IMAGE_HEIGHT),
        batch_size=BATCH_SIZE,
        class_mode="categorical")

    validation_generator = validation_data_generator.flow_from_directory(
        validation_dir,
        target_size=(IMAGE_WIDTH, IMAGE_HEIGHT),
        batch_size=BATCH_SIZE,
        class_mode="categorical",
        shuffle=False)

    # VGG16 convolutional base pre-trained on ImageNet, without its classifier head
    vgg_conv = VGG16(weights='imagenet',
                     include_top=False,
                     input_shape=input_shape)

    model = models.Sequential()
    model.add(vgg_conv)

    # Add new classifier layers on top of the convolutional base
    model.add(layers.Flatten())
    model.add(layers.Dense(1024, activation='relu'))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(4, activation='softmax'))

    model.compile(loss="categorical_crossentropy",
                  optimizer='adam',
                  metrics=["accuracy"])

    results = model.fit_generator(
        training_generator,
        steps_per_epoch=training_generator.samples / training_generator.batch_size,
        epochs=100,
        callbacks=callbacks,  # callbacks is defined elsewhere (not shown here)
        validation_data=validation_generator,
        validation_steps=28)

First run:

    Epoch 100/100
    111/110 [==============================] - 17s 152ms/step - loss: 1.3593 - acc: 0.3365 - val_loss: 1.3599 - val_acc: 0.3364

Second run:

    Epoch 100/100
    111/110 [==============================] - 18s 158ms/step - loss: 1.9879e-06 - acc: 1.0000 - val_loss: 5.2915e-06 - val_acc: 1.0000
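One thing worth checking when runs of the same script diverge this much is the random state: weight initialization and shuffle order differ between runs unless the seeds are fixed. A minimal sketch, assuming the TensorFlow 1.x backend that fit_generator implies (this is not part of the original code):

    import random
    import numpy as np
    import tensorflow as tf

    # Fix every source of randomness so repeated runs start from the same
    # initial weights and the same shuffle order
    random.seed(42)
    np.random.seed(42)
    tf.set_random_seed(42)  # use tf.random.set_seed(42) on TensorFlow 2.x

Even with fixed seeds, GPU kernels are not fully deterministic, so small run-to-run differences can remain.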

Upvotes: 0

Views: 61

Answers (1)

Natthaphon Hongcharoen

Reputation: 2430

I assume that one class makes up about 33% of your entire data set? If that's true, then what happened in the first run is that the model didn't learn anything at all (acc: 0.3365) and simply predicted the majority class.
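A quick way to verify that guess is to count the images per class. This sketch assumes the training_generator from the question (classes and class_indices are standard attributes of flow_from_directory generators):

    import numpy as np

    # Number of training images in each class, in index order
    counts = np.bincount(training_generator.classes)
    for name, idx in training_generator.class_indices.items():
        print(f"{name}: {counts[idx]} images ({counts[idx] / counts.sum():.1%})")

If one class comes out at roughly 33%, an accuracy of 0.3365 is exactly what a model that always predicts that class would score.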

This might be because of incorrect use of data augmentation: if the commented-out lines are what you used in the first run, then they are the culprits.

The shear_range=0.9 and zoom_range=0.9 are too much. Either one of these on its own means you distort away up to 90% of each image, so the model doesn't learn anything.
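For comparison, here is a much milder augmentation setup; the exact values below are illustrative, not taken from the question:

    from keras.preprocessing.image import ImageDataGenerator

    training_data_generator = ImageDataGenerator(
        rescale=1./255,
        rotation_range=20,     # small rotations instead of extreme ones
        horizontal_flip=True,
        shear_range=0.2,       # mild shear instead of 0.9
        zoom_range=0.2)        # mild zoom instead of 0.9

With distortions this small, each augmented image still looks like its class, so the model can keep learning from it.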

Upvotes: 1
