arpit joshi

Reputation: 2144

Verifying a Perceptron Learning Example

I am trying to understand the perceptron learning algorithm via an example presented by a professor. Here is my understanding; can anyone check whether it is correct?

Let's say I have the inputs

    x1   x2   result (y)
     1    3   +1
    -1   -2   -1
     1   -1   +1
    -2    1   -1

Now I use the algorithm below to get the weights, starting with w0 = 0.

1) y1 (w0 · x1) ≤ 0, hence w1 = w0 + y1 x1 = [1, 3]

2) y2 (w1 · x2) ≤ 0, hence w2 = w1 + y2 x2 = [3, -1]

3) y3 (w2 · x3) > 0, hence no update

4) y4 (w2 · x4) ≤ 0, hence w3 = w2 + y4 x4 = [5, -2]

Hence my weights are now

    x1   x2   result (y)   weights
     1    3   +1           [1, 3]
    -1   -2   -1           [3, -1]
     1   -1   +1           [3, -1]
    -2    1   -1           [5, -2]

Is my understanding right, or am I making a mistake in the weight updates or in the iteration?
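
To double-check my arithmetic, here is a minimal Python sketch of the single-pass update rule as I understand it (the variable names are my own, and this is only how I would verify the steps, not the professor's code):

    # Single-pass perceptron update as described above:
    # if y_i * (w . x_i) <= 0, set w = w + y_i * x_i
    X = [(1, 3), (-1, -2), (1, -1), (-2, 1)]
    Y = [1, -1, 1, -1]

    w = [0, 0]                                  # starting weights
    for (x1, x2), y in zip(X, Y):
        if y * (w[0] * x1 + w[1] * x2) <= 0:    # misclassified or on the boundary
            w = [w[0] + y * x1, w[1] + y * x2]  # update with the misclassified point
        print((x1, x2), y, w)                   # weight vector after this sample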

Upvotes: 1

Views: 511

Answers (1)

Ami Tavory

Reputation: 76366

It looks like what you did is correct, but there are a couple of comments:

  1. You state that, initially, w0 = 0. This does not make much sense, as you later add it to vectors of dimension 2. I'm guessing that you meant that w0 = [0, 0].

  2. FYI:

    1. A more general perceptron learning algorithm would not add/subtract the misclassified instances themselves, but rather a version scaled by some learning rate 0 < α ≤ 1. Your algorithm above uses α = 1.

    2. It's common to artificially prepend a constant 1 term to the perceptron inputs. Hence, if the original inputs are vectors of dimension 2, you'd work on vectors of dimension 3, where the first item of each vector is 1. Both points are illustrated in the sketch below.
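
A minimal sketch of that more general form, using the same data as in the question (the value of α and the epoch count here are arbitrary illustrative choices, not part of your algorithm):

    # General perceptron update: prepend a constant 1 to each input so the
    # bias is learned as w[0], and scale each update by a learning rate alpha.
    X = [(1, 3), (-1, -2), (1, -1), (-2, 1)]
    Y = [1, -1, 1, -1]

    alpha = 0.5                 # any 0 < alpha <= 1; alpha = 1 recovers your rule
    w = [0.0, 0.0, 0.0]         # [bias, w1, w2]

    for epoch in range(10):     # repeat passes until (hopefully) no more updates
        for (x1, x2), y in zip(X, Y):
            x = (1, x1, x2)     # constant 1 term prepended
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + alpha * y * xi for wi, xi in zip(w, x)]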

Upvotes: 1
