Roman Starkov

Reputation: 61432

What is the structure of a Keras model if input_shape is omitted and why does it perform better?

I omitted the input_shape in the first layer of my Keras model by mistake. Eventually I noticed this and fixed it – and my model's performance dropped dramatically.

Looking at the structure of the model with and without input_shape, I discovered that the better-performing model reports the output shape multiple for its layers. Moreover, plotting it with plot_model shows no connections between the layers:

[Image: plot_model output of the better-performing model; the layers appear without any connections between them]
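
(For reference, a minimal sketch of how such a structure can be inspected; plot_model assumes a working pydot/graphviz install, and the exact summary output varies by Keras version:)

model.summary()  # after training; the deferred-build model reports "multiple" output shapes
keras.utils.plot_model(model, to_file='model.png', show_shapes=True)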

When it comes to performance, the model I understand (with input_shape) achieves a validation loss of 4.0513 (MSE) after 10 epochs with my test code (below), while the "weird" model manages 1.3218 – and the difference only increases with more epochs.

Model definition:

model = keras.Sequential()
model.add(keras.layers.Dense(64, activation=tf.nn.relu, input_shape=(1001,)))
#                                   add or remove this  ^^^^^^^^^^^^^^^^^^^
model.add(keras.layers.Dropout(0.05))
...

(never mind the details, this is just a model that demonstrates the difference in performance with and without input_shape)

So what is happening in the better-performing model? What is multiple? How are the layers really connected? How could I build this same model while also specifying input_shape?

Complete script:

import tensorflow as tf
from tensorflow import keras
import numpy as np
from collections import deque
import math, random

def func(x):
    return math.sin(x)*5 + math.sin(x*1.8)*4 + math.sin(x/4)*5

def get_data():
    # Build training rows: 1000 consecutive signal samples, a (scaled)
    # lookahead offset, and the signal value at that future point.
    x = 0
    dx = 0.1
    q = deque()
    r = 0  # number of rows filled so far
    data = np.zeros((100000, 1002), np.float32)
    while True:
        x = x + dx
        sig = func(x)
        q.append(sig)
        if len(q) < 1000:
            continue

        arr = np.array(q, np.float32)

        for k in range(10):
            xx = random.uniform(0.1, 9.9)
            data[r, :1000] = arr[:1000]   # input: the signal window
            data[r, 1000] = 5*xx          # input: lookahead offset, scaled for easier fitting
            data[r, 1001] = func(x + xx)  # target: signal value at x + xx
            r = r + 1
            if r >= data.shape[0]:
                break

        if r >= data.shape[0]:
            break

        q.popleft()

    inputs = data[:, :1001]
    outputs = data[:, 1001]
    return (inputs, outputs)

np.random.seed(1)
tf.set_random_seed(1)
random.seed(1)

model = keras.Sequential()
model.add(keras.layers.Dense(64, activation=tf.nn.relu, input_shape=(1001,)))
#                                   add or remove this  ^^^^^^^^^^^^^^^^^^^
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(64, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(64, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(64, activation=tf.nn.relu))
model.add(keras.layers.Dropout(0.05))
model.add(keras.layers.Dense(1))

model.compile(
    loss = 'mse',
    optimizer = tf.train.RMSPropOptimizer(0.0005),
    metrics = ['mae', 'mse'])

inputs, outputs = get_data()

hist = model.fit(inputs, outputs, epochs=10, validation_split=0.1)

print("Final val_loss is", hist.history['val_loss'][-1])

Upvotes: 3

Views: 1927

Answers (1)

a_guest

Reputation: 36249

TL;DR

The reason that the results differ is that the two models have different initial weights. The fact that one performs (significantly) better than the other is purely by chance; as @today mentioned, the results they obtain are approximately similar.

Details

As the documentation for tf.set_random_seed explains, random operations derive their seed from two sources, the graph-level seed and the operation-specific seed; tf.set_random_seed sets only the graph-level seed:

Operations that rely on a random seed actually derive it from two seeds: the graph-level and operation-level seeds. This sets the graph-level seed.

Taking a look at the definition of Dense we see that the default kernel initializer is 'glorot_uniform' (let's only consider the kernel initializer here, but the same holds for the bias initializer). Walking further through the source code we eventually find that this fetches GlorotUniform with default arguments. Specifically, the random number generator seed for that operation (namely weight initialization) is set to None. Checking where this seed is used, we find it is passed, for example, to random_ops.truncated_normal. This in turn (as all random operations do) then fetches the two seeds, one being the graph-level seed and the other the operation-specific seed: seed1, seed2 = random_seed.get_seed(seed).

From the definition of the get_seed function we find that if the operation-specific seed is not given (which is our case) then it is derived from properties of the current graph: op_seed = ops.get_default_graph()._last_id. The corresponding part of the tf.set_random_seed docs reads:

  1. If the graph-level seed is set, but the operation seed is not: The system deterministically picks an operation seed in conjunction with the graph-level seed so that it gets a unique random sequence.
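
We can observe this derivation directly; the following is a minimal sketch relying on TF 1.x internals (random_seed.get_seed is not public API):

import tensorflow as tf
from tensorflow.python.framework import random_seed

tf.set_random_seed(1)
# With a graph-level seed set and no op-level seed, get_seed derives the
# op seed from the graph's internal _last_id counter.
print(random_seed.get_seed(None))  # (graph seed, op seed derived from _last_id)
tf.zeros(10)                       # adding any op increases graph._last_id ...
print(random_seed.get_seed(None))  # ... so the derived op seed changes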

Now, coming back to the original problem: it makes a difference for the graph structure whether input_shape is defined or not. Again looking at a bit of source code, we find that Sequential.add builds the inputs and outputs of the network incrementally only if input_shape was specified; otherwise it just stores a list of layers (model._layers); compare model.inputs and model.outputs for the two definitions. The output is built incrementally by calling the layers directly, which dispatches to Layer.__call__. This wrapper builds the layer, sets the layer's inputs and outputs, and adds some metadata to the outputs; it also uses an ops.name_scope to group operations. We can see this in the visualization provided by Tensorboard (example for the simplified model architecture Input -> Dense -> Dropout -> Dense):

[Image: Tensorboard graph of the model built with input_shape; operations are grouped per layer via name scopes]
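
To make the inputs/outputs difference concrete, here is a minimal sketch (assuming tf.keras 1.x; whether the attributes are None or empty lists varies by version):

from tensorflow import keras

# With input_shape: add() builds the network graph eagerly,
# so input/output tensors exist right away.
m1 = keras.Sequential([keras.layers.Dense(4, input_shape=(8,))])
print(m1.inputs, m1.outputs)

# Without input_shape: only a list of layers is stored; the network
# graph has not been built yet.
m2 = keras.Sequential([keras.layers.Dense(4)])
print(m2._layers)
print(m2.inputs, m2.outputs)  # not set yet (None or empty, version-dependent)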

Now, in the case where we didn't specify input_shape, all the model has is a list of layers. Even after calling compile, the model is not actually compiled (only attributes such as the optimizer are set). Instead, it is compiled "on the fly" the first time data is passed to the model. This happens in model._standardize_weights: the model output is obtained via self.call(dummy_input_values, training=training). Checking this method, we find that it builds the layers (note that the model is not yet built) and then computes the output incrementally by using Layer.call (not __call__). This leaves out all the metadata as well as the grouping of operations, and hence results in a different structure of the graph (though its computational operations are all the same). Again checking Tensorboard, we find:

[Image: Tensorboard graph of the model built without input_shape; the same operations without per-layer grouping]
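
The deferred build can be observed via the built flag (a small sketch, assuming tf.keras 1.x behaviour):

import numpy as np
from tensorflow import keras

m = keras.Sequential([keras.layers.Dense(4), keras.layers.Dense(1)])
m.compile(loss='mse', optimizer='sgd')
print(m.built)               # False: compile() did not build the graph
m.predict(np.zeros((2, 8)))  # the first data triggers the deferred build
print(m.built)               # True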

Expanding both graphs, we would find that they contain the same operations, grouped differently. However, this has the effect that keras.backend.get_session().graph._last_id differs between the two definitions, and hence results in a different seed for the random operations:

# With `input_shape`:
>>> keras.backend.get_session().graph._last_id
303
# Without `input_shape`:
>>> keras.backend.get_session().graph._last_id
7

Performance results

I used the OP's code with some modifications in order to make the random operations comparable:

  • Added the steps described here to ensure reproducibility in terms of randomization,
  • Set random seeds for Dense and Dropout variable initialization,
  • Removed validation_split since the splitting happens before "on the fly" compilation of the model without input_shape and hence might interfere with the seed,
  • Set shuffle = False since this might use a separate operation specific seed.

This is the complete code (in addition I performed export PYTHONHASHSEED=0 before running the script):

from collections import deque
from functools import partial
import math
import random
import sys
import numpy as np
import tensorflow as tf
from tensorflow import keras


seed = int(sys.argv[1])

np.random.seed(1)
tf.set_random_seed(seed)
random.seed(1)
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                              inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
keras.backend.set_session(sess)


def func(x):
    return math.sin(x)*5 + math.sin(x*1.8)*4 + math.sin(x/4)*5


def get_data():
    x = 0
    dx = 0.1
    q = deque()
    r = 0
    data = np.zeros((100000, 1002), np.float32)
    while True:
        x = x + dx
        sig = func(x)
        q.append(sig)
        if len(q) < 1000:
            continue

        arr = np.array(q, np.float32)

        for k in range(10):
            xx = random.uniform(0.1, 9.9)
            data[r, :1000] = arr[:1000]
            data[r, 1000] = 5*xx #scale for easier fitting
            data[r, 1001] = func(x + xx)
            r = r + 1
            if r >= data.shape[0]:
                break

        if r >= data.shape[0]:
            break

        q.popleft()

    inputs = data[:, :1001]
    outputs = data[:, 1001]
    return (inputs, outputs)


Dense = partial(keras.layers.Dense, kernel_initializer=keras.initializers.glorot_uniform(seed=1))
Dropout = partial(keras.layers.Dropout, seed=1)

model = keras.Sequential()
model.add(Dense(64, activation=tf.nn.relu,
    # input_shape=(1001,)
))
model.add(Dropout(0.05))
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dropout(0.05))
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dropout(0.05))
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dropout(0.05))
model.add(Dense(1))

model.compile(
    loss = 'mse',
    optimizer = tf.train.RMSPropOptimizer(0.0005)
)

inputs, outputs = get_data()
shuffled = np.arange(len(inputs))
np.random.shuffle(shuffled)
inputs = inputs[shuffled]
outputs = outputs[shuffled]

hist = model.fit(inputs, outputs[:, None], epochs=10, shuffle=False)
np.save('without.{:d}.loss.npy'.format(seed), hist.history['loss'])  # for the input_shape variant, save as 'with.{:d}.loss.npy'

With this code I'd actually expect to obtain similar results for both approaches; however, it turns out that they are not equal:

for i in $(seq 1 10)
do
    python run.py $i
done

Plot the mean loss +/- 1 std. dev.:

[Image: training loss per epoch, mean +/- 1 std. dev., for both variants]
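
The aggregation for that plot can be done roughly as follows (a minimal sketch; it assumes the loss curves for seeds 1-10 were saved for both variants as described above, and uses matplotlib):

import numpy as np
import matplotlib.pyplot as plt

# Aggregate the per-seed loss curves saved by the script above and plot
# the mean +/- 1 std. dev. for each variant ("with" / "without" input_shape).
for mode in ('with', 'without'):
    losses = np.stack([np.load('{}.{:d}.loss.npy'.format(mode, s)) for s in range(1, 11)])
    mean, std = losses.mean(axis=0), losses.std(axis=0)
    epochs = np.arange(1, losses.shape[1] + 1)
    plt.plot(epochs, mean, label=mode)
    plt.fill_between(epochs, mean - std, mean + std, alpha=0.3)
plt.xlabel('epoch')
plt.ylabel('MSE loss')
plt.legend()
plt.savefig('performance.png')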

Initial weights and initial prediction

I verified that the initial weights and an initial prediction (before fitting) are the same for the two versions:

inputs, outputs = get_data()

mode = 'without'  # set to 'with' when running the input_shape variant
pred = model.predict(inputs)
np.save(f'{mode}.prediction.npy', pred)

for i, layer in enumerate(model.layers):
    if isinstance(layer, keras.layers.Dense):
        w, b = layer.get_weights()
        np.save(f'{mode}.{i:d}.kernel.npy', w)
        np.save(f'{mode}.{i:d}.bias.npy', b)

and

for i in 0 2 4 6 8  # the Dense layers sit at indices 0, 2, 4, 6, 8
do
    for data in bias kernel
    do
        diff -q "with.$i.$data.npy" "without.$i.$data.npy"
    done
done

Influence of Dropout

Important: I checked the performance after removing all Dropout layers, and in that case it is actually equal. So the crux seems to lie with the Dropout layers. In fact, the performance of the model without Dropout layers is the same as that of the model with Dropout layers but without input_shape. So it seems that without input_shape the Dropout layers are not effective.

Basically, the difference between the two versions is that one uses __call__ and the other uses call to compute the outputs (as explained above). Since the performance is similar to that without Dropout layers, a possible explanation is that the Dropout layers don't drop when input_shape is not specified. This could be caused by training=False, i.e. the layers not recognizing that they are in training mode. However, I don't see a reason why this would happen. We can also consider the Tensorboard graphs again.

Specifying input_shape:

[Image: Tensorboard view of the Dropout node when input_shape is specified]

Not specifying input_shape:

[Image: Tensorboard view of the Dropout node when input_shape is not specified]

where the switch also depends on the learning phase (as before):

[Image: the Dropout node's switch, connected to the learning phase]

To verify the training kwarg let's subclass Dropout:

class Dropout(keras.layers.Dropout):
    def __init__(self, rate, noise_shape=None, seed=None, **kwargs):
        super().__init__(rate, noise_shape=noise_shape, seed=1, **kwargs)

    def __call__(self, inputs, *args, **kwargs):
        training = kwargs.get('training')
        if training is None:
            training = keras.backend.learning_phase()
        print('[__call__] training: {}'.format(training))
        return super().__call__(inputs, *args, **kwargs)

    def call(self, inputs, training=None):
        if training is None:
            training = keras.backend.learning_phase()
        print('[call]     training: {}'.format(training))
        return super().call(inputs, training)

I obtain similar outputs for both versions; however, the calls to __call__ are missing when input_shape is not specified:

[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call]     training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call]     training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call]     training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[__call__] training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)
[call]     training: Tensor("keras_learning_phase:0", shape=(), dtype=bool)

So I suspect that the problem lies somewhere within __call__, but right now I can't figure out what it is.
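
One further way to probe whether Dropout is actually dropping is to force the learning phase to training and compare two forward passes (a hedged sketch relying on TF 1.x learning-phase semantics; it only works if the phase is set before the model's graph is built):

import numpy as np
from tensorflow import keras

keras.backend.set_learning_phase(1)  # must happen before the graph is built
out1 = model.predict(inputs[:16])
out2 = model.predict(inputs[:16])
# With active Dropout the two passes differ, since units are dropped randomly.
print('dropout active:', not np.allclose(out1, out2))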

System

I'm using Ubuntu 16.04, Python 3.6.7 and Tensorflow 1.12.0 via conda (no GPU support):

$ uname -a
Linux MyPC 4.4.0-141-generic #167-Ubuntu SMP Wed Dec 5 10:40:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ python --version
Python 3.6.7 :: Anaconda, Inc.
$ conda list | grep tensorflow
tensorflow                1.12.0          mkl_py36h69b6ba0_0
tensorflow-base           1.12.0          mkl_py36h3c3e929_0

Edit

I also had keras and keras-base installed (keras-applications and keras-preprocessing are required by tensorflow):

$ conda list | grep keras
keras                     2.2.4                         0  
keras-applications        1.0.6                    py36_0  
keras-base                2.2.4                    py36_0  
keras-preprocessing       1.0.5                    py36_0

After removing all keras* and tensorflow* packages and then reinstalling tensorflow, the discrepancy vanished. Even after reinstalling keras the results remained similar. I also checked a different virtualenv where tensorflow is installed via pip; there was no discrepancy there either. Right now I can't reproduce this discrepancy anymore. It must have been a broken installation of tensorflow.

Upvotes: 5
