이윤형

Reputation: 11

TensorFlow and Keras show slightly different results even though I build exactly the same models using the same layer modules

I'm using both TensorFlow and Keras, and I found that they produce different results. There are already similar questions, but my case is a little different from them.

In my case, there is only a small difference in loss and accuracy, and I use exactly the same 'tf.keras.layers' modules. I think the only differences are the Adam optimizer and the training method:

  1. tf.train.AdamOptimizer vs. tf.keras.optimizers.Adam
  2. tf.keras.Model.fit vs. sess.run(train_op)

I checked that the defaults of the two Adam optimizers are the same. I don't think the difference is caused by randomness, because I got similar results when I ran the Keras model a couple of times.
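To rule the defaults out entirely, one could also pass the hyperparameters explicitly to both optimizers. A minimal sketch (assuming TF 1.x; the values shown are the documented tf.train.AdamOptimizer defaults):

# Sketch: pin the same Adam hyperparameters in both models explicitly,
# so differing library defaults cannot be the cause of the gap.
keras_adam = tf.keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-8)
graph_adam = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8)

# model.compile(loss='binary_crossentropy', optimizer=keras_adam, metrics=['accuracy'])
# _train_op = graph_adam.minimize(_loss)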

Here is my code:

Keras Model

# Build model and train
X = tf.keras.layers.Input(shape=(sentence_size,), name='X')

embedded_X = tf.keras.layers.Embedding(voca_size,
                                       embedding_dim,
                                       weights=[embedding_matrix],
                                       input_length=sentence_size,
                                       trainable=True)(X)

hidden_states = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(256, return_sequences=True))(embedded_X)
l_pool = tf.keras.layers.GlobalMaxPooling1D()(hidden_states)
preds = tf.keras.layers.Dense(1, activation='sigmoid')(l_pool)

model = tf.keras.models.Model(X, preds)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(tokenized_train, y_train, shuffle=False, epochs=3, batch_size=32,
          validation_data=(tokenized_val, y_val))

TensorFlow Model

# Build model
X = tf.placeholder(tf.float32, [None, sentence_size])

embedded_X = tf.keras.layers.Embedding(voca_size,
                                       embedding_dim,
                                       weights=[embedding_matrix],
                                       input_length=sentence_size,
                                       trainable=True)(X)

hidden_states = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(256, return_sequences=True))(embedded_X)
l_pool = tf.keras.layers.GlobalMaxPooling1D()(hidden_states)
_preds = tf.keras.layers.Dense(1, activation='sigmoid')(l_pool)

labels = tf.placeholder(tf.float32, [None, 1])
_loss = tf.reduce_mean(tf.keras.losses.binary_crossentropy(labels, _preds))
_acc = tf.reduce_mean(tf.cast(tf.equal(labels, tf.round(_preds)), tf.float32))
_train_op = tf.train.AdamOptimizer().minimize(_loss)

# Hyper parameters and loss_acc print function
from math import ceil

epochs = 3
batch_size = 32
steps_per_epoch = ceil( len(tokenized_train) / batch_size)

def loss_acc(sess, _loss, _preds, inputs, targets):
    # Evaluate mean loss/accuracy over the whole dataset in roughly 100 batches
    batch_size = len(inputs) // 100
    steps_per_epoch = ceil(len(inputs) / batch_size)

    data = tf.data.Dataset.from_tensor_slices((inputs, targets)).batch(batch_size).make_one_shot_iterator()
    next_batch = data.get_next()

    acc = 0
    loss = 0

    for batch in range(steps_per_epoch):
        x, y = sess.run(next_batch)
        l, a = sess.run([_loss, _acc], feed_dict={X: x, labels: y})

        # Average over the actual number of batches (ceil can make this 101, not 100)
        acc += a / steps_per_epoch
        loss += l / steps_per_epoch

    return loss, acc

# Train model
data = tf.data.Dataset.from_tensor_slices((tokenized_train, y_train)).batch(batch_size).repeat().make_one_shot_iterator()
next_batch = data.get_next()

sess = tf.Session()  # session the training loop runs in
sess.run(tf.global_variables_initializer())

for epoch in range(epochs):
    for step in range(steps_per_epoch):
        x, y = sess.run(next_batch)
        batch_loss, batch_acc, _ = sess.run([_loss, _acc, _train_op], feed_dict={X: x, labels: y})
        if step % 125 == 0:
            print('\nBatch: %d' % step)
            print(batch_loss, batch_acc)

    train_loss, train_acc = loss_acc(sess, _loss, _preds, tokenized_train, y_train)
    val_loss, val_acc = loss_acc(sess, _loss, _preds, tokenized_val, y_val)
    print("\nTrain loss: %.4f" % train_loss)
    print("Train acc: %.4f" % train_acc)
    print("Val loss: %.4f" % val_loss)
    print("Val acc: %.4f" % val_acc)

Results

(Screenshots of the training logs: Keras result, TensorFlow result.)

Thank you.

Upvotes: 1

Views: 670

Answers (1)

ixeption

Reputation: 2050

As long as you don't have exactly the same initial weights, you cannot eliminate non-determinism. Your results do not vary very much, and as you can see from your loss values, the starting points are quite different. Moreover, 3 epochs is not very long; try training for more epochs and then compare the results. If your models overfit, add some regularization.
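To illustrate the initialization point: one way to give both models the same starting weights is to fix every seed before either graph is built. A minimal sketch (assuming TF 1.x and that both models are built in the same script):

# Sketch: fix all sources of randomness before building either model,
# so both graphs draw the same initial weights.
import random
import numpy as np
import tensorflow as tf

random.seed(42)         # Python-level randomness
np.random.seed(42)      # NumPy-level randomness (e.g. any data shuffling)
tf.set_random_seed(42)  # graph-level seed used by the variable initializers

Note that even with fixed seeds, some GPU kernels (e.g. cuDNN ops) are not guaranteed to be deterministic, so small differences can remain.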

Upvotes: 1
