Vivek p.a

Reputation: 33

Updating weights in a neural network

I have been trying to code a neural network from scratch and have watched a couple of videos to see how it is implemented.

So I came across this guide that builds a simple neural network in Python.

import numpy as np

# Four training examples (3 features each) and their targets
X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])
y = np.array([[0,1,1,0]]).T
# Weights for both layers, initialized randomly in [-1, 1)
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1
for j in range(60000):  # range, not xrange, in Python 3
    # Forward pass: two sigmoid layers
    l1 = 1/(1+np.exp(-np.dot(X, syn0)))
    l2 = 1/(1+np.exp(-np.dot(l1, syn1)))
    # Backward pass: error times the sigmoid derivative
    l2_delta = (y - l2) * (l2 * (1 - l2))
    l1_delta = l2_delta.dot(syn1.T) * (l1 * (1 - l1))
    # The two updates in question
    syn1 += l1.T.dot(l2_delta)
    syn0 += X.T.dot(l1_delta)

I find the last 2 lines confusing. Shouldn't it be syn1 -= l1.T.dot(l2_delta) and syn0 -= X.T.dot(l1_delta)?

I thought that in gradient descent you subtract the slope, but here it seems to be added. Is this gradient ascent?

Can someone please explain how the last 2 lines work?

Upvotes: 2

Views: 980

Answers (1)

Matthew Anderson

Reputation: 348

You are correct: you subtract the slope in gradient descent.

And that is exactly what this program does: it subtracts the slope. Because l2_delta is computed from (y - l2) rather than the conventional (l2 - y), the sign of the gradient is already flipped, so l1.T.dot(l2_delta) and X.T.dot(l1_delta) are the negative slope. That is why the author of this code uses += as opposed to -=.
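As a quick sanity check (a minimal sketch, not from the original guide): if you flip the error term to the conventional (l2 - y) and switch the updates to -=, you get the same training run, since the two minus signs cancel.

import numpy as np

np.random.seed(1)  # fix the initialization so runs are reproducible
X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])
y = np.array([[0,1,1,0]]).T
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1
for j in range(60000):
    l1 = 1/(1+np.exp(-np.dot(X, syn0)))
    l2 = 1/(1+np.exp(-np.dot(l1, syn1)))
    # Conventional error direction: prediction minus target, i.e. the positive slope
    l2_delta = (l2 - y) * (l2 * (1 - l2))
    l1_delta = l2_delta.dot(syn1.T) * (l1 * (1 - l1))
    # Now we subtract the slope, as textbook gradient descent prescribes
    syn1 -= l1.T.dot(l2_delta)
    syn0 -= X.T.dot(l1_delta)
print(l2)  # converges toward [0, 1, 1, 0], just like the original

Both loops perform identical updates; the only difference is whether the minus sign lives in the error term (the original) or in the update step (this version).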

Upvotes: 1
