joesan

Reputation: 15435

Make Your Own Neural Network Back Propagation

I'm reading through the Make Your Own Neural Network book, and in the chapter where the author describes back propagation, I find myself confused. I would like to relate the author's explanation to his example of a 2-node, 3-layer network, as shown in the image below:

[Image: Back Propagation]

If I construct the Matrix representation of the above Neural Network for back propagation, it would look like this:

[Image: Matrix Back Propagation]

where W^T_hidden_output is the transpose of the hidden-to-output weight matrix, so in detail the matrix representation is:

[Image: Matrix Detail]

So if I now want to calculate the hidden errors (e1_hidden and e2_hidden), I have the following:

e1_hidden = W11 * e1 + W12 * e2

e2_hidden = W21 * e1 + W22 * e2

But if I apply the values given in the example, where e1 = 1.5 and e2 = 0.5, I do not get e1_hidden = 0.7 and e2_hidden = 1.3.
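
For reference, here is a minimal sketch of the matrix calculation above, assuming the link weights from the book's picture are W11 = 2.0, W21 = 3.0, W12 = 1.0 and W22 = 4.0 (the weights are not restated in the text here, so treat them as assumptions):

    import numpy as np

    # Assumed link weights (taken from the book's picture, not stated above):
    # w11, w21 feed output node 1; w12, w22 feed output node 2.
    w11, w21 = 2.0, 3.0
    w12, w22 = 1.0, 4.0

    # Output-layer errors from the example.
    e_output = np.array([1.5, 0.5])

    # error_hidden = W^T_hidden_output . error_output
    wt_hidden_output = np.array([[w11, w12],
                                 [w21, w22]])

    e_hidden = wt_hidden_output @ e_output
    print(e_hidden)  # [3.5 6.5] -- not the 0.7 and 1.3 shown in the picture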

Where am I going wrong with my understanding or calculation? Any help is appreciated.

Upvotes: 0

Views: 228

Answers (1)

Thomas

Reputation: 111

You are describing error back propagation that simply multiplies the output errors by the link weights, whereas the picture splits each error in proportion to the link weights. Both approaches are described, for example, on this webpage:

[Image: screenshot of the referenced webpage]

In the picture, you can see that each error is split in proportion to the link weights: e1 = 1.5 is split into 0.6 and 0.9 according to the weights W11 = 2.0 and W21 = 3.0, i.e. e1 * W11 / (W11 + W21) = 1.5 * 2.0 / 5.0 = 0.6 and e1 * W21 / (W11 + W21) = 1.5 * 3.0 / 5.0 = 0.9. In the same way, e2 = 0.5 is split into 0.1 and 0.4 according to W12 = 1.0 and W22 = 4.0. (Note also that the weight subscripts in the picture are mislabelled: only W11 is labelled correctly, and all the others are denoted W12.)

These split-up errors are then recombined: the pieces arriving at each hidden node are summed to give that node's error, e.g.:

e1_hidden = 0.6 + 0.1 = 0.7, and likewise e2_hidden = 0.9 + 0.4 = 1.3.
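
A minimal sketch of the proportional split, under the same assumption about the weight values as in your question (W11 = 2.0, W21 = 3.0, W12 = 1.0, W22 = 4.0), reproduces the picture's numbers:

    # Assumed link weights, as in the question above.
    w11, w21 = 2.0, 3.0   # links into output node 1
    w12, w22 = 1.0, 4.0   # links into output node 2

    e1, e2 = 1.5, 0.5     # output-layer errors

    # Each output error is split in proportion to the link weights feeding
    # that output node; the pieces arriving at a hidden node are then summed.
    e1_hidden = w11 / (w11 + w21) * e1 + w12 / (w12 + w22) * e2
    e2_hidden = w21 / (w11 + w21) * e1 + w22 / (w12 + w22) * e2

    print(e1_hidden)  # 0.7  (= 0.6 + 0.1)
    print(e2_hidden)  # 1.3  (= 0.9 + 0.4)

If I recall the book correctly, it later drops the normalizing denominators to arrive at the pure matrix form error_hidden = W^T_hidden_output . error_output, which is why your matrix calculation gives different numbers than this picture.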

Upvotes: 1
