Reputation: 173
I'm struggling to implement a single-layer perceptron: http://en.wikipedia.org/wiki/Perceptron. Depending on the initial weights, my program either gets stuck in the learning loop or finds the wrong weights. As a test case I use logical AND. Could you please give a hint why my perceptron does not converge? This is for my own learning. Thanks.
# learning rate
rate = 0.1

# Test data
# logical AND
# vector = (bias, coordinate1, coordinate2, target result)
testdata = [[1, 0, 0, 0], [1, 0, 1, 0], [1, 1, 0, 0], [1, 1, 1, 1]]

# initial weights
import random
w = [random.random(), random.random(), random.random()]
print 'initial weights = ', w
def test(w, vector):
    if diff(w, vector) <= 0.1:
        return True
    else:
        return False

def diff(w, vector):
    from copy import deepcopy
    we = deepcopy(w)
    return dirac(sum(we[i]*vector[i] for i in range(3))) - vector[3]

def improve(w, vector):
    for i in range(3):
        w[i] += rate*diff(w, vector)*vector[i]
    return w

def dirac(z):
    if z > 0:
        return 1
    else:
        return 0
error = True
while error == True:
    discrepancy = 0
    for x in testdata:
        if not test(w, x):
            w = improve(w, x)
            discrepancy += 1
    if discrepancy == 0:
        print 'improved weights = ', w
        error = False
Upvotes: 0
Views: 940
Reputation: 438
Try raising the threshold in your dirac function, i.e. test (z > 0.5) instead of (z > 0).
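A minimal sketch of that change, assuming the 0.5 cutoff is meant to replace the comparison in the question's dirac:

def dirac(z):
    # step activation with a 0.5 cutoff instead of 0
    if z > 0.5:
        return 1
    else:
        return 0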
Upvotes: 1
Reputation: 29047
It looks like you need an extra loop surrounding your for loop to iterate the improvement until your solutions converge (step 3 in the Wikipedia page you linked).
As it stands now, you give each training case exactly one chance to update the weights, so it has no chance to converge.
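For illustration, here is a minimal sketch of a converging version. It assumes the textbook update rule w[i] += rate * (target - output) * vector[i]; note the sign, since the question's diff returns output - target, so adding rate*diff(w, vector)*vector[i] pushes the weights the wrong way:

import random

rate = 0.1
# vector = (bias, coordinate1, coordinate2, target)
testdata = [[1, 0, 0, 0], [1, 0, 1, 0], [1, 1, 0, 0], [1, 1, 1, 1]]

def step(z):
    # Heaviside step activation
    return 1 if z > 0 else 0

def predict(w, vector):
    return step(sum(w[i] * vector[i] for i in range(3)))

w = [random.random() for _ in range(3)]
converged = False
while not converged:
    mistakes = 0
    for vector in testdata:
        # classic perceptron rule: target minus output
        error = vector[3] - predict(w, vector)
        if error != 0:
            for i in range(3):
                w[i] += rate * error * vector[i]
            mistakes += 1
    # repeat until a full pass makes no mistakes
    converged = (mistakes == 0)
print 'learned weights = ', w

Since AND is linearly separable, the perceptron convergence theorem guarantees this loop terminates.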
Upvotes: 1