zoecarver

Reputation: 6433

Model not learning

Background

I have a pretty simple script that creates a Keras model designed to act like an XOR gate.

I generate 40,010 data points in the get_data function (40,000 for training and 10 held out for testing). It creates two arrays: an input array of [a, b] pairs of 1s and 0s, and a target array where each entry is either a 1 or a 0.

Issue

When I run the code, the model does not appear to learn, and the results I get vary dramatically every time I train it.

Code

from keras import models
from keras import layers

import numpy as np

from random import randint


def get_output(a, b): return 0 if a == b else 1


def get_data():
    data = []
    targets = []

    for _ in range(40010):
        a, b = randint(0, 1), randint(0, 1)

        targets.append(get_output(a, b))
        data.append([a, b])

    return data, targets


data, targets = get_data()

data = np.array(data).astype("float32")
targets = np.array(targets).astype("float32")

test_x = data[40000:]
test_y = targets[40000:]

train_x = data[:40000]
train_y = targets[:40000]

model = models.Sequential()

# input
model.add(layers.Dense(2, activation='relu', input_shape=(2,)))

# hidden
# model.add(layers.Dropout(0.3, noise_shape=None, seed=None))
model.add(layers.Dense(2, activation='relu'))
# model.add(layers.Dropout(0.2, noise_shape=None, seed=None))
model.add(layers.Dense(2, activation='relu'))

# output
model.add(layers.Dense(1, activation='sigmoid')) # sigmoid puts between 0 and 1

model.summary() # print out summary of model

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

res = model.fit(train_x, train_y, epochs=2000, batch_size=200, validation_data=(test_x, test_y)) # train
    
print('predict: \n', test_x)
print(model.predict(test_x))

Output

[[0. 1.]
 [1. 1.]
 [1. 1.]
 [0. 0.]
 [1. 0.]
 [0. 0.]
 [0. 0.]
 [0. 1.]
 [1. 1.]
 [1. 0.]]
[[0.6629775 ]
 [0.00603844]
 [0.00603844]
 [0.6629775 ]
 [0.6629775 ]
 [0.6629775 ]
 [0.6629775 ]
 [0.6629775 ]
 [0.00603844]
 [0.6629775 ]]

Even without the dropout layers, I got very similar results.

Upvotes: 2

Views: 4151

Answers (1)

desertnaut

Reputation: 60400

There are several issues with your question.

To start with, your imports are rather unorthodox (irrelevant to your issue, true, but it helps to stick to some conventions):

from keras.models import Sequential
from keras.layers import Dense
import numpy as np

Second, you don't need some thousands of examples for the XOR problem; there are only four combinations:

X = np.array([[0,0],[0,1],[1,0],[1,1]])
y = np.array([[0],[1],[1],[0]])

and that's all.

Third, for the very same reason, you can't actually have "validation" or "test" data with XOR; in the simplest approach (i.e. what you are arguably trying to do here), you can only test how well the model has learnt the function, using these 4 combinations (since there are no more!).

Fourth, you should start with a simple one-hidden-layer model (with somewhat more than 2 units and no dropout), and then proceed gradually if needed:

model = Sequential()
model.add(Dense(8, activation="relu", input_dim=2))
model.add(Dense(1, activation="sigmoid"))

model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(X, y, batch_size=1, epochs=1000)

This should take your loss down to ~ 0.12; how well has it learnt the function?

model.predict(X)
# result:
array([[0.31054294],
       [0.9702552 ],
       [0.93392825],
       [0.04611744]], dtype=float32)

y
# result:
array([[0],
       [1],
       [1],
       [0]])
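
A quick way to quantify this is to threshold the probabilities at the conventional 0.5 cutoff and compare with the targets; a minimal sketch using the X, y, and model defined above:

preds = (model.predict(X) > 0.5).astype(int)  # hard 0/1 predictions from the sigmoid outputs
print(np.hstack([preds, y]))                  # predicted label next to the true label
print("accuracy:", np.mean(preds == y))       # fraction of the 4 combinations answered correctly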

Is this good enough? Well, I don't know - the correct answer is always "it depends"! But you now have a starting point (i.e. a network that arguably learns something), from which you can proceed to further experiments...
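
As a side note on the results varying dramatically between runs: each run starts from a different random weight initialization, so some run-to-run variation is expected, especially with very small networks. If you want repeatable runs while experimenting, a rough sketch of seeding the relevant random number generators (assuming a TensorFlow backend; the exact TensorFlow call differs between versions):

import random
import numpy as np
import tensorflow as tf

random.seed(0)         # Python's own RNG (used by randint in get_data)
np.random.seed(0)      # NumPy's RNG
tf.random.set_seed(0)  # TensorFlow's RNG (TF 2.x; in TF 1.x use tf.set_random_seed)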

Upvotes: 6
