Rob

Reputation: 116

Tensorflow/Keras model output is constant

I am trying to train a CNN using Keras. The input is a 128x128x3 RGB image and the output is a single value between 0 and 1 (this is a regression model, not a classifier). I have normalised the input. Initially, my model was achieving reasonable results, getting the mean absolute error below 0.1. As I tweaked the model slightly, I found the loss would plateau very quickly at around 0.23. I investigated further and found that it was outputting the same value for every input.

So I reverted my code back to when it was working, but it was no longer working. I eventually found that about 90% of the time it gets stuck at this local minimum, outputting a constant value (which I suspect is the mean of the training reference values, 0.39). The other 10% of the time it behaves nicely and regresses down to an error below 0.1. So it is giving qualitatively different behaviour at random, and the desired results only rarely. The strange thing is that I swear it was consistently working before.

This is what I have tried:

def load_data(dir):
    csv_data = get_csv_data()
    xs = []
    ys = []
    for (name, y) in csv_data:
        path = DIR + dir + "/" + name
        img = tf.keras.preprocessing.image.load_img(path)
        xs.append(tf.keras.preprocessing.image.img_to_array(img) * (1 / 255.0))
        ys.append(normalize_output(float(y)))
    return np.array(xs).reshape(len(csv_data), IMAGE_DIM, IMAGE_DIM, 3), np.array(ys).reshape(len(csv_data), 1)

def gen_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(filters=64, kernel_size = (5, 5), activation='relu', input_shape=(IMAGE_DIM, IMAGE_DIM, CHAN_COUNT)))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Conv2D(filters=64, kernel_size = (5, 5), activation='relu'))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Conv2D(filters=128, kernel_size = (5, 5), activation='relu'))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(256, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.Dense(128, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.Dense(64, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Dense(16, activation='sigmoid'))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
    model.compile(loss=tf.keras.losses.MeanSquaredError(),
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=[tf.keras.metrics.MeanAbsoluteError()])
    return model

def run():
    model = gen_model()

    xs, ys = load_data("output")
   
    generator = tf.keras.preprocessing.image.ImageDataGenerator(featurewise_center=False,
                                                                samplewise_center=False,
                                                                featurewise_std_normalization=False,
                                                                samplewise_std_normalization=False,
                                                                validation_split=0.1,
                                                                rotation_range=12,
                                                                horizontal_flip=True,
                                                                vertical_flip=True)

    model.fit(generator.flow(xs, ys, batch_size=32, shuffle=True),
              steps_per_epoch=len(xs) // 32,
              epochs=10,
              use_multiprocessing=False)
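One quick way to confirm the collapse described above is to look at the spread of predictions over a batch. This is a hedged sketch: `prediction_spread` is a hypothetical helper, and in practice you would pass it `model.predict(xs[:32])` instead of the simulated arrays used here.

```python
import numpy as np

def prediction_spread(preds):
    """Standard deviation of a batch of model outputs.

    A spread near zero means the model is emitting an (almost)
    constant value for every input -- the collapse-to-the-mean
    behaviour described in the question.
    """
    return float(np.std(np.asarray(preds, dtype=float).ravel()))

# Simulated outputs from a collapsed model: every prediction is ~0.39,
# the mean of the training targets, plus negligible noise.
collapsed = np.full(32, 0.39) + np.random.default_rng(0).normal(0, 1e-5, 32)
# Simulated outputs from a healthy regressor spread across (0, 1).
healthy = np.random.default_rng(1).uniform(0.0, 1.0, 32)

print(prediction_spread(collapsed))  # near zero: stuck at a constant
print(prediction_spread(healthy))    # clearly non-zero spread
```

If the spread on real held-out inputs is essentially zero, the network has collapsed to predicting the target mean rather than learning anything input-dependent.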

Upvotes: 0

Views: 1017

Answers (1)

Nima Aghli

Reputation: 484

I rearranged the activations on the layers: the standalone LeakyReLU layers are removed, and the sigmoid on the 16-unit Dense layer is replaced with ReLU, so that only the final output layer uses a sigmoid. Please give it a try:

def gen_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), activation='relu', input_shape=(IMAGE_DIM, IMAGE_DIM, CHAN_COUNT)))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), activation='relu'))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Conv2D(filters=128, kernel_size=(5, 5), activation='relu'))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(256, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.Dense(128, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.Dense(64, activation='relu'))
    model.add(tf.keras.layers.Dropout(0.1))
    model.add(tf.keras.layers.Dense(16, activation='relu'))
    model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
    model.compile(loss=tf.keras.losses.MeanSquaredError(),
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=[tf.keras.metrics.MeanAbsoluteError()])
    return model
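For context (a sketch of the likely failure mode, not something spelled out in the answer): a sigmoid used on a hidden layer squashes its inputs into (0, 1), and its gradient never exceeds 0.25, shrinking towards zero as the pre-activation grows in magnitude. Placing a sigmoid mid-network therefore throttles the gradient signal reaching the earlier layers, which makes the "predict the target mean" plateau easy to fall into. The snippet below just verifies those two properties of the sigmoid derivative with NumPy.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.linspace(-10.0, 10.0, 2001)
g = sigmoid_grad(x)

print(g.max())                           # peaks at 0.25 (at x = 0)
print(sigmoid_grad(np.array([8.0]))[0])  # ~3e-4: gradient nearly vanishes once saturated
```

ReLU, by contrast, passes the gradient through unchanged for positive pre-activations, which is why keeping ReLU on the hidden layers and reserving the sigmoid for the final (0, 1)-bounded output tends to train more reliably.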

Upvotes: 1
