unleashed

Reputation: 923

How does a back-propagation training algorithm work?

I've been trying to learn how back-propagation works with neural networks, but have yet to find a good explanation from a less technical perspective.

How does back-propagation work? How does it learn from a provided training dataset? I will have to code this, but until then I need to gain a stronger understanding of it.

Upvotes: 12

Views: 21018

Answers (4)

Sufian Latif

Reputation: 13356

Back-propagation works with logic very similar to that of feed-forward. The difference is the direction of data flow. In the feed-forward step, you have the inputs and the output observed from them. You propagate the values forward to the neurons ahead.

In the back-propagation step, you cannot know the error of every neuron, only of those in the output layer. Calculating the errors of the output nodes is straightforward - you take the difference between the output of the neuron and the actual output for that instance in the training set. The neurons in the hidden layers must correct their errors from this, so you have to pass the error values back to them. From these values, the hidden neurons can update their weights and other parameters using the weighted sum of errors from the layer ahead.
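As a rough illustration of that idea, here is a minimal sketch (my own, not part of the answer) of one feed-forward pass and one back-propagation pass for a tiny two-layer network; the sigmoid activations and the made-up weight matrices W1 and W2 are assumptions for the example:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical tiny network: 2 inputs -> 3 hidden neurons -> 1 output neuron
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))

x, target = np.array([0.5, -1.0]), np.array([1.0])

# Feed-forward: propagate the input values toward the output
h = sigmoid(W1 @ x)              # hidden-layer activations
y = sigmoid(W2 @ h)              # output of the network

# Output-layer error: difference between produced and desired output,
# scaled by the sigmoid derivative y * (1 - y)
delta_out = (y - target) * y * (1 - y)

# Hidden-layer error: weighted sum of the errors from the layer ahead
delta_hidden = (W2.T @ delta_out) * h * (1 - h)

# Each layer updates its weights from its own error values
lr = 0.1
W2 -= lr * np.outer(delta_out, h)
W1 -= lr * np.outer(delta_hidden, x)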

A step-by-step demo of feed-forward and back-propagation steps can be found here.


Edit

If you're a beginner with neural networks, you can begin by learning about the perceptron, then advance to the NN, which is actually a multilayer perceptron. A minimal sketch of the perceptron learning rule follows below.
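For example, here is a minimal sketch of the classic perceptron learning rule (my own illustration, not from the answer); the AND dataset and the learning rate are made-up choices:

import numpy as np

# Tiny made-up dataset: learn the logical AND of two inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 0, 0, 1])

w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(20):                       # a few passes over the data
    for x, t in zip(X, targets):
        y = 1 if w @ x + b > 0 else 0     # threshold activation
        w += lr * (t - y) * x             # perceptron update rule
        b += lr * (t - y)

print([1 if w @ x + b > 0 else 0 for x in X])  # should print [0, 0, 0, 1]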

Upvotes: 23

Alex Punnen

Reputation: 6244

It is easy to understand if you look at the computation graph, which shows how the gradient of the cost (or loss) function with respect to each weight is calculated by the chain rule (which is basically what back-propagation is), and then at the mechanism for adjusting every weight in the network using gradient descent, where the gradient is the one calculated by back-propagation. That is, each weight is adjusted in proportion to how strongly it affects the final cost. It is too much to explain here, but here is the link to the chapter https://alexcpn.github.io/html/NN/ml/4_backpropogation/ from my book in the making https://alexcpn.github.io/html/NN/, which tries to explain this in a simple way.
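To make the chain-rule idea concrete, here is a minimal sketch (my own illustration, assuming a single made-up neuron with a squared-error cost) of how the gradient of the cost with respect to a weight decomposes, and how gradient descent then uses it:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical single neuron: z = w * x, y = sigmoid(z), cost C = (y - t)^2
x, t, w = 2.0, 1.0, 0.5

# Forward pass
z = w * x
y = sigmoid(z)
C = (y - t) ** 2

# Chain rule: dC/dw = dC/dy * dy/dz * dz/dw (this is what back-propagation computes)
dC_dy = 2 * (y - t)
dy_dz = y * (1 - y)
dz_dw = x
grad = dC_dy * dy_dz * dz_dw

# Gradient descent: adjust the weight in proportion to how strongly it affects the cost
lr = 0.1
w -= lr * grad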


Upvotes: 1

Novak

Reputation: 4779

High-level description of the backpropagation algorithm

Backpropagation is trying to do a gradient descent on the error surface of the neural network, adjusting the weights with dynamic programming techniques to keep the computations tractable.

I will try to explain, in high-level terms, all of the concepts just mentioned.

Error surface

If you have a neural network with, say, N neurons in the output layer, that means your output is really an N-dimensional vector, and that vector lives in an N-dimensional space (or on an N-dimensional surface.) So does the "correct" output that you're training against. So does the difference between your "correct" answer and the actual output.

That difference, with suitable conditioning (especially some consideration of absolute values), is the error vector, living on the error surface.

Gradient descent

With that concept, you can think of training the neural network as the process of adjusting the weights of your neurons so that the error function is small, ideally zero. Conceptually, you do this with calculus. If you only had one output and one weight, this would be simple -- take a few derivatives, which would tell you which "direction" to move, and make an adjustment in that direction.

But you don't have one neuron, you have N of them, and substantially more input weights.

The principle is the same, except instead of using calculus on lines looking for slopes that you can picture in your head, the equations become vector algebra expressions that you can't easily picture. The term gradient is the multi-dimensional analogue to slope on a line, and descent means you want to move down that error surface until the errors are small.
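As a small illustration (not from the answer), here is gradient descent on a made-up two-dimensional error surface, where the gradient generalizes the one-weight slope; the surface E and the step size are assumptions for the example:

import numpy as np

# A made-up error surface over two weights: E(w) = (w0 - 3)^2 + (w1 + 1)^2
def gradient(w):
    # Vector of partial derivatives of E with respect to each weight
    return np.array([2 * (w[0] - 3), 2 * (w[1] + 1)])

w = np.array([0.0, 0.0])   # start somewhere on the surface
lr = 0.1                   # step size

for _ in range(100):
    w -= lr * gradient(w)  # step "downhill", against the gradient

print(w)                   # converges toward the minimum at (3, -1)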

Dynamic programming

There's another problem, though -- if you have more than one layer, you can't easily see how changing the weights in some non-output layer affects the actual output.

Dynamic programming is a bookkeeping method to help track what's going on. At the very highest level, if you naively try to do all this vector calculus, you end up calculating some derivatives over and over again. The modern backpropagation algorithm avoids some of that, and it so happens that you update the output layer first, then the second to last layer, etc. Updates are propagating backwards from the output, hence the name.
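A rough sketch of that bookkeeping (my own illustration, assuming sigmoid layers and made-up weights): the output delta is computed once and then reused, layer by layer, moving backwards:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical three-layer network; one weight matrix per layer
rng = np.random.default_rng(1)
weights = [rng.normal(size=(4, 2)), rng.normal(size=(3, 4)), rng.normal(size=(1, 3))]
x, t = np.array([0.2, 0.7]), np.array([1.0])

# Forward pass, remembering every layer's activation for reuse later
activations = [x]
for W in weights:
    activations.append(sigmoid(W @ activations[-1]))

# Backward pass: compute the output-layer delta once, then reuse it to get
# each earlier layer's delta instead of re-deriving everything from scratch
deltas = [(activations[-1] - t) * activations[-1] * (1 - activations[-1])]
for W, a in zip(reversed(weights[1:]), reversed(activations[1:-1])):
    deltas.append((W.T @ deltas[-1]) * a * (1 - a))
deltas.reverse()

# Update the weights layer by layer using the cached deltas
lr = 0.5
for delta, a, W in zip(deltas, activations, weights):
    W -= lr * np.outer(delta, a)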

So, if you're lucky enough to have been exposed to gradient descent or vector calculus before, then hopefully that clicked.

The full derivation of backpropagation can be condensed into about a page of tight symbolic math, but it's hard to get the sense of the algorithm without a high-level description. (It's downright intimidating, in my opinion.) If you haven't got a good handle on vector calculus, then, sorry, the above probably wasn't helpful. But to get backpropagation to actually work, it's not necessary to understand the full derivation.


I found the following paper (by Rojas) very helpful when I was trying to understand this material, even though it's a big PDF of one chapter of his book.

http://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf

Upvotes: 13

James

Reputation: 8586

I'll try to explain without delving too much into code or math.

Basically, you compute the classification from the neural network and compare it to the known value. This gives you an error at the output node.

Now, from the output node, we have N incoming links from other nodes. We propagate the error to the last layer before the output node, then propagate it down to the next layer (when there is more than one uplink, you sum the errors), and then recursively propagate it back to the first layer.

To adjust the weights for training, for each node you basically do the following:

for each link in node.uplinks
  error = link.destination.error
  main = learningRate * error * node.output  // The amount of change is based on the error, the node's output, and the learning rate

  link.weight += main + alpha * link.momentum // adjust the weight by the current desired change plus a fraction (alpha) of the previous change (the "momentum")

  link.momentum = main // Momentum is the last change applied, remembered for the next update.

learningRate and alpha (the momentum coefficient) are parameters you can set to adjust how quickly it homes in on a solution vs. how (hopefully) accurately you solve it in the end.
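For what it's worth, here is a minimal runnable sketch of that update (my own illustration; the Node and Link classes are hypothetical stand-ins for whatever structures you use, mirroring the pseudocode's fields):

class Node:
    def __init__(self):
        self.uplinks = []   # links going toward the output layer
        self.output = 0.0   # activation computed during the forward pass
        self.error = 0.0    # error propagated back to this node

class Link:
    def __init__(self, source, destination, weight=0.1):
        self.source = source
        self.destination = destination  # the node closer to the output
        self.weight = weight
        self.momentum = 0.0             # last change applied, for the momentum term

def update_weights(node, learningRate=0.05, alpha=0.9):
    for link in node.uplinks:
        error = link.destination.error
        main = learningRate * error * node.output
        link.weight += main + alpha * link.momentum  # current change plus momentum
        link.momentum = main                         # remember for the next update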

Upvotes: 3
