Denis Rozimovschii

Reputation: 448

Neural Network: A Perceptron for guessing the position of a point relative to a function

I am building a simple Perceptron with 3 inputs (x, y, bias=1).

It must guess whether a given point (x, y) is above or below a given function.

Basically, it was inspired by this article.

The network is trained with supervised learning, using the following update rule:

learningConst = 0.01
error = desired - neuralAnswer
new_weights[i] = old_weights[i] + error * inputs[i] * learningConst

Still, after 100,000 training samples it makes mistakes even on a simple function (2x + 1).

Here is the code:

import numpy as np
import matplotlib.pyplot as plt

class Perceptron:
    def __init__(self, n):
        self.n = n #n is 2 in this case. 2 inputs [ x, y ] 
        self.weights = [np.random.uniform(-1, 1) for x in range(n)]
        self.learningConstant = 0.05

    # 1 added to the sum is the bias input    
    def feedForward(self, inputs):
        return 1 + sum([self.weights[i]*inputs[i] for i in range(self.n)])

    def activate(self, result):
        if result >= 0:
            return 1
        elif result < 0:
            return -1

    def train(self, inputs, expected):
        prediction = self.feedForward(inputs)
        answer = self.activate(prediction)
        error = expected - answer

        self.weights = [
            self.weights[i] + error * inputs[i] * self.learningConstant 
            for i in range(self.n)
        ]
        #print(self.weights)

    def predict(self, inputs):
        prediction = self.feedForward(inputs)
        return self.activate(prediction)

You can see the results here. Green indicates that the perceptron guessed correctly, and red indicates mistakes. Funny thing: it tends to make mistakes on points below the line.

What should I do to improve the program?

Perceptron results

THE FULL CODE : CLICK

SOLUTION

My problem was using the bias input as a raw constant (line 14 of the full code) without letting the algorithm learn a weight for it. So my inputs are now [bias, x, y] and the weights are [w1, w2, w3]: the bias input now has its own weight.
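A minimal sketch of that fix, keeping the shape of the class from the question but treating the bias as input 0 with its own learned weight (the training loop, seed, and point range are illustrative):

```python
import numpy as np

class Perceptron:
    def __init__(self, n):
        # n counts the real inputs; one extra weight is reserved for the bias
        self.n = n + 1
        self.weights = np.random.uniform(-1, 1, self.n)
        self.learningConstant = 0.05

    def feedForward(self, inputs):
        # inputs is [bias=1, x, y]; the bias weight is learned like any other
        return float(np.dot(self.weights, inputs))

    def activate(self, result):
        return 1 if result >= 0 else -1

    def train(self, inputs, expected):
        error = expected - self.predict(inputs)
        self.weights = self.weights + self.learningConstant * error * np.asarray(inputs)

    def predict(self, inputs):
        return self.activate(self.feedForward(inputs))

# Train on points labeled against y = 2x + 1
rng = np.random.default_rng(0)
p = Perceptron(2)
for _ in range(100000):
    x, y = rng.uniform(-10, 10, 2)
    label = 1 if y > 2 * x + 1 else -1
    p.train([1, x, y], label)
```

Since the data is linearly separable, the learned boundary settles close to 2x + 1 and fresh points near the line stop being misclassified.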

Another good idea is to save the weights somewhere, so the algorithm doesn't have to start over each time you test the program.
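For persisting the weights between runs, `numpy.save` / `numpy.load` is enough; a small sketch (the filename and helper names are illustrative):

```python
import os
import numpy as np

WEIGHTS_FILE = "perceptron_weights.npy"  # illustrative path

def save_weights(weights):
    # Store the current weight vector in NumPy's .npy format
    np.save(WEIGHTS_FILE, np.asarray(weights, dtype=float))

def load_weights(n):
    # Resume from the saved state if it exists, else start fresh
    if os.path.exists(WEIGHTS_FILE):
        return np.load(WEIGHTS_FILE)
    return np.random.uniform(-1, 1, n)
```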

2x + 1 (image)

x^2 - 2x + 1 (image)

Upvotes: 3

Views: 677

Answers (2)

Marcin Możejko

Reputation: 40516

The main problem with your solution is that your bias is always 1. It's not a parameter, it's a constant. This matters because it makes your model strictly weaker than the classical perceptron model.

Upvotes: 2

Octoplus

Reputation: 483

Be sure that the data you wish to classify is linearly separable, or the perceptron learning algorithm will never converge.
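A quick way to see this: XOR-labeled points are not linearly separable, so the update rule keeps firing no matter how many epochs you run (the learning rate and epoch count here are illustrative):

```python
import numpy as np

# XOR: no single line separates the +1 and -1 classes
data = [([1, 0, 0], -1), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], -1)]

w = np.zeros(3)
lr = 0.05
for epoch in range(1000):
    mistakes = 0
    for x, label in data:
        pred = 1 if np.dot(w, x) >= 0 else -1
        if pred != label:
            w = w + lr * (label - pred) * np.asarray(x)
            mistakes += 1
    if mistakes == 0:  # never happens for XOR
        break
```

If the data were separable, some epoch would finish with zero mistakes and the loop would stop; for XOR every epoch misclassifies at least one point.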

Upvotes: 0
