I'm implementing an MLP with Keras, and I notice that the loss doesn't change across epochs. I've tried varying the learning rate and the weight initialization, but nothing changed.
Here's the code:
mlp = keras.models.Sequential()

# add input layer
mlp.add(
    keras.layers.Input(
        shape=(training_dataset.shape[1],)
    )
)

# add hidden layer
mlp.add(
    keras.layers.Dense(
        units=training_dataset.shape[1] - 500,
        input_shape=(training_dataset.shape[1] - 500,),
        kernel_initializer=keras.initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None),
        bias_initializer='zeros',
        activation='relu')
)

# add output layer
mlp.add(
    keras.layers.Dense(
        units=1,
        input_shape=(1,),
        kernel_initializer=keras.initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None),
        bias_initializer='zeros',
        activation='sigmoid')
)

# define SGD optimizer
sgd_optimizer = keras.optimizers.SGD(lr=0.00001, decay=1e-2)

print('Compiling model...\n')
mlp.compile(
    optimizer=sgd_optimizer,
    loss=listnet_loss
)
mlp.summary()  # print model settings

generator = DataGenerator(training_dataset, training_dataset_labels[0:5000], groups_id_count, [])

# Training
with tf.device('/GPU:0'):
    print('Start training')
    mlp.fit(generator, steps_per_epoch=len(training_dataset),
            epochs=50, verbose=1, workers=10,
            use_multiprocessing=True,
            callbacks=[KendallTauHistory(generator)])
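Since the symptom is a flat loss, one quick sanity check is to compare the model's weights before and after a training step: if no array changes, the gradients are effectively zero (or the learning rate is too small to matter). This is a sketch on a toy model with made-up data shapes, not the actual model and generator above:

```python
import numpy as np
from tensorflow import keras

# Toy stand-in for the MLP above (hypothetical shapes and data).
model = keras.models.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1), loss='mse')

x = np.random.rand(16, 4).astype('float32')
y = np.random.rand(16, 1).astype('float32')

# Snapshot the weights, run one epoch, and compare.
before = [w.copy() for w in model.get_weights()]
model.fit(x, y, epochs=1, verbose=0)
after = model.get_weights()

# If every array is unchanged, the optimizer is not updating anything.
changed = any(not np.allclose(b, a) for b, a in zip(before, after))
print('weights changed:', changed)
```

If the weights do change but the loss stays flat, the problem is more likely in the loss function or the learning-rate scale than in the training loop itself.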
This is my loss function:
def listnet_loss(real_labels, predicted_labels):
    return -K.sum(get_top_one_probability(real_labels) * tf.math.log(get_top_one_probability(predicted_labels)))
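The helper `get_top_one_probability` isn't shown in the post; in the ListNet formulation the top-one probability of a score vector is just its softmax, so a minimal self-contained sketch of the loss, assuming that definition, would be:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def get_top_one_probability(scores):
    # Assumed definition: ListNet's top-one probability is the
    # softmax over the relevance scores of one ranked list.
    return K.softmax(scores)

def listnet_loss(real_labels, predicted_labels):
    # Cross-entropy between the two top-one distributions.
    return -K.sum(get_top_one_probability(real_labels)
                  * tf.math.log(get_top_one_probability(predicted_labels)))

# Identical score vectors give the entropy of softmax(y) (the minimum);
# any other prediction gives a value at least as large (Gibbs' inequality).
y = tf.constant([[0.1, 0.4, 0.5]])
l_same = float(listnet_loss(y, y))
l_diff = float(listnet_loss(y, tf.constant([[2.0, -1.0, 0.3]])))
print(l_same, l_diff)
```

Note that with this definition the loss is bounded below by the entropy of the target distribution, not by zero, so it never reaches 0 even at a perfect fit.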
What can I do to fix this?