Djordje Nikolic

Reputation: 433

Neural network XOR returns incorrect output

I am wondering why my neural network doesn't work. I should say that I have asked a similar question to this before, but there are still things I don't understand...

Code:

import numpy as np
inputs = np.array([
    [[0],[0]],
    [[1],[0]],
    [[0],[1]],
    [[1],[1]]
])

expected_output = np.array([
    [0],
    [1],
    [1],
    [0]
])

epochs = 100
lr = 0.2

hidden_weights = np.array([
    [0.2, 0.3],
    [0.4, 0.5]
])
hidden_bias = np.array([[0.3], [0.6]])

output_weights = np.array([[0.6, 0.7]])
output_bias = np.array([[0.5]])

def sigmoid(z):
    return 1/(1+np.exp(-z))

def sigmoid_derivative(z):
    # note: z here is the sigmoid *output*, so this computes sigmoid'(x) = z * (1 - z)
    return z * (1.0 - z)

for _ in range(epochs):
    for index, input in enumerate(inputs):
        # forward pass: hidden layer
        hidden_layer_activation = np.dot(hidden_weights, input)
        hidden_layer_activation += hidden_bias
        hidden_layer_output = sigmoid(hidden_layer_activation)

        # forward pass: output layer
        output_layer_activation = np.dot(output_weights, hidden_layer_output)
        output_layer_activation += output_bias
        predicted_output = sigmoid(output_layer_activation)

        # Backpropagation
        output_errors = expected_output[index] - predicted_output
        hidden_errors = output_weights.T.dot(output_errors)
        d_predicted_output = output_errors * sigmoid_derivative(predicted_output)
        d_hidden_layer = hidden_errors * sigmoid_derivative(hidden_layer_output)

        output_weights += np.dot(d_predicted_output, hidden_layer_output.T) * lr
        hidden_weights += np.dot(d_hidden_layer, input.T) * lr

        output_bias += np.sum(d_predicted_output) * lr
        hidden_bias += np.sum(d_hidden_layer) * lr

# Now the testing: I pass the 2 input neurons, both with value 1
test = np.array([
    [[1], [1]]
])

hidden_layer_activation = np.dot(hidden_weights, test[0])
hidden_layer_activation += hidden_bias
hidden_layer_output = sigmoid(hidden_layer_activation)

output_layer_activation = np.dot(output_weights, hidden_layer_output)
output_layer_activation += output_bias
predicted_output = sigmoid(output_layer_activation)

print(predicted_output)
Result: [[0.5]] for inputs 1 and 1

Wanted: [[0]] for inputs 1 and 1

I have tested the feed-forward propagation and it works fine. The errors seem to be good.

I thought updating the weights was the problem, but the weight update has the correct formula. This line is from the book "Make Your Own Neural Network", and it's pretty much the same thing I use:

self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))

Currently I forward only one input of 2 neurons at a time and calculate the error. I would very much like it to stay that way, instead of forwarding the entire data set over and over again.

Is there any way I can do that? Thank you in advance :)

Upvotes: 2

Views: 170

Answers (1)

CoMartel

Reputation: 3591

You have a small implementation error:

In the backpropagation, you evaluate:

hidden_errors = output_weights.T.dot(output_errors)

but your hidden error must be evaluated based on d_predicted_output, because the signal propagated back must already include the derivative of the output activation (chain rule), like so:

hidden_errors = output_weights.T.dot(d_predicted_output)

Also, you should decrease your learning rate and increase the number of epochs. 10000 epochs and lr = 0.1 work for me, but you can fine-tune this.
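For illustration, here is a minimal sketch of the corrected training loop, keeping your per-sample updates and your variable names; only the hidden_errors line and the hyperparameters change (the exact values are just the ones that worked for me):

epochs = 10000
lr = 0.1

for _ in range(epochs):
    for index, input in enumerate(inputs):
        # forward pass (unchanged)
        hidden_layer_output = sigmoid(np.dot(hidden_weights, input) + hidden_bias)
        predicted_output = sigmoid(np.dot(output_weights, hidden_layer_output) + output_bias)

        # backpropagation
        output_errors = expected_output[index] - predicted_output
        d_predicted_output = output_errors * sigmoid_derivative(predicted_output)
        # fixed: propagate the derivative-scaled error, not the raw error
        hidden_errors = output_weights.T.dot(d_predicted_output)
        d_hidden_layer = hidden_errors * sigmoid_derivative(hidden_layer_output)

        # weight and bias updates (unchanged)
        output_weights += np.dot(d_predicted_output, hidden_layer_output.T) * lr
        hidden_weights += np.dot(d_hidden_layer, input.T) * lr
        output_bias += np.sum(d_predicted_output) * lr
        hidden_bias += np.sum(d_hidden_layer) * lr

With this change and these settings, your test input [[1], [1]] should come out close to 0 instead of being stuck at 0.5.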

Upvotes: 1
