my_username

Reputation: 69

Why am I getting the wrong output from my neural network?

I have written this code for a neural network, and I am not sure exactly why the output I am getting is incorrect.

I created a network with two 1x1 layers, i.e. two layers of one neuron each connected by a single weight. The input is a random number between 0 and 1, and I also set this as the desired output of the network. Here are examples of the input (left) and the received output (right):

[0.11631148733527708] [0.52613976]

[0.19471305546308992] [0.54367643]

[0.38620499751234083] [0.58595699]

[0.507207377588539]   [0.61203927]

[0.9552623183688456]  [0.70232115]

Here is my code:

main.py

from NeuralNetwork import NeuralNetwork
from random import random

# Two layers of one neuron each, connected by a single 1x1 weight.
net = NeuralNetwork((1, 1))
net.learning_rate = 0.01

while True:
    # One random input in [0, 1); the target output is the input itself.
    v1 = [random() for i in range(0, 1)]
    actual = v1

    net.input(v1)
    net.actual(actual)

    net.calculate()
    net.backpropagate()

    print(f"{v1} {net.output()}")

NeuralNetwork.py

import numpy as np
from math import e

def sigmoid(x):
    # Logistic function: squashes any input into the interval (0, 1).
    sig_x = 1 / (1 + e**-x)
    return sig_x

def d_sigmoid(x):
    # Derivative of the sigmoid: sigmoid(x) * (1 - sigmoid(x)), elementwise.
    # (The original used np.dot on the transpose, which only coincides with
    # the elementwise product for 1x1 layers.)
    sig_x = 1 / (1 + e**-x)
    d_sig_x = sig_x * (1 - sig_x)
    return d_sig_x

class NeuralNetwork():
    def __init__(self, sizes):
        # One activation vector per layer; one value and bias vector per
        # non-input layer.
        self.activations = [np.zeros((size, 1)) for size in sizes]
        self.values = [np.zeros((size, 1)) for size in sizes[1:]]
        self.biases = [np.zeros((size, 1)) for size in sizes[1:]]

        # One weight matrix and one (activation, derivative) pair per
        # layer transition.
        self.weights = [np.zeros((sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
        self.activation_functions = [(sigmoid, d_sigmoid) for _ in range(len(sizes) - 1)]

        self.last_layer_actual = np.zeros((sizes[-1], 1))
        self.learning_rate = 0.01

    def calculate(self):
        # Forward pass: values = W·a + b, then apply the activation function.
        for i, activations in enumerate(self.activations[:-1]):
            activation_function = self.activation_functions[i][0]

            self.values[i] = np.dot(self.weights[i], activations) + self.biases[i]
            self.activations[i + 1] = activation_function(self.values[i])

    def backpropagate(self):
        # Gradient of the squared error with respect to the output activations.
        current = 2 * (self.activations[-1] - self.last_layer_actual)
        last_weights = 1

        for i, weights in enumerate(self.weights[::-1]):
            d_activation_func = self.activation_functions[-i - 1][1]

            # Propagate the gradient back through the next layer's weights,
            # then through the activation function (elementwise chain rule).
            current = np.dot(last_weights, current)
            current = current * d_activation_func(self.values[-i - 1])

            # Gradient descent step on the weights and biases (in place).
            weights_change = np.dot(current, self.activations[-i - 2].transpose())
            weights -= weights_change * self.learning_rate

            self.biases[-i - 1] -= current * self.learning_rate

            last_weights = weights.transpose()

    def input(self, network_input):
        # Load the input into the first layer as a column vector.
        self.activations[0] = np.array(network_input).reshape(-1, 1)

    def output(self):
        # Return the last layer's activations as a flat array.
        return self.activations[-1].ravel()

    def actual(self, last_layer_actual):
        # Store the target (desired) output as a column vector.
        self.last_layer_actual = np.array(last_layer_actual).reshape(-1, 1)

Upvotes: 1

Views: 70

Answers (1)

my_username

Reputation: 69

I just realized that the sigmoid function is not linear.

So the desired value of the single weight, in order for all the outputs to be equal to the inputs, cannot be constant: the network's output is sigmoid(w * x + b), and no fixed w and b satisfy sigmoid(w * x + b) = x for every x. Gradient descent can only settle on a compromise that minimizes the average error over the random inputs.
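
A minimal sketch of this (assuming zero bias for simplicity; the required_weight helper below is hypothetical, not part of the posted code): inverting sigmoid(w * x) = x gives the weight each individual input would need, and that value clearly varies with x.

from math import log

def required_weight(x):
    # Solve sigmoid(w * x) = x for w, with zero bias:
    # w = logit(x) / x, where logit(x) = log(x / (1 - x)).
    return log(x / (1 - x)) / x

for x in (0.2, 0.5, 0.8):
    print(f"x = {x}: required weight = {required_weight(x):.4f}")

# x = 0.2: required weight = -6.9315
# x = 0.5: required weight = 0.0000
# x = 0.8: required weight = 1.7329

For what it's worth, the sample pairs in the question are close to sigmoid(0.9 * x), which suggests the single weight had drifted to roughly 0.9 when those values were printed.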

So simple.
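
For completeness, one way to let this particular network reproduce its input exactly would be to use an identity activation on the output layer instead of the sigmoid. A sketch against the posted class (the linear and d_linear helpers are hypothetical, not part of the original code):

import numpy as np
from NeuralNetwork import NeuralNetwork

def linear(x):
    # Identity activation: the output is no longer squashed into (0, 1).
    return x

def d_linear(x):
    # The derivative of the identity is 1 everywhere.
    return np.ones_like(x)

net = NeuralNetwork((1, 1))
net.activation_functions[-1] = (linear, d_linear)

Training as in main.py should then drive the single weight toward 1 and the bias toward 0, so the output matches the input.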

Upvotes: 1
