user3396293

Reputation: 9

Backpropagation makes network worse

I am experimenting with neural networks. I have a network with 8 input neurons, 5 hidden neurons, and 2 output neurons. When I let the network learn with backpropagation, it sometimes produces a worse result between single iterations of training. What can be the cause? It should not be an implementation error, because I even tried the implementation from Introduction to Neural Networks for Java, and it does exactly the same.

Upvotes: 0

Views: 180

Answers (1)

lejlot

Reputation: 66805

Nothing is wrong. Backpropagation is just gradient-based optimization, and gradient methods give no guarantee of reducing the error in every iteration. (There is a guarantee that some sufficiently small step size/learning rate has this property, but in practice there is no way of finding it.) Furthermore, you are probably updating the weights after each sample, which makes your training stochastic and even more "unstable" in this respect, since you are not computing the true gradient over the whole training set.

However, if your method is not converging because of this, think about proper scaling of your data, reducing the learning rate, and possibly adding a momentum term. These are issues of gradient-based optimization in general, not of backpropagation as such.
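
To see the learning-rate effect in isolation, here is a minimal, self-contained sketch (a toy example of my own, not code from the question or the book): plain gradient descent on the one-dimensional error E(w) = w². A too-large learning rate makes the error grow between iterations, which is exactly the symptom described, while a smaller one converges. The step method also accepts a momentum coefficient so you can experiment with the momentum term mentioned above; all class and variable names are illustrative.

    public class GradientDemo {

        // One gradient-descent step on E(w) = w^2, optionally with momentum.
        // Returns the updated velocity; the weight is mutated in place.
        static double step(double[] w, double velocity,
                           double learningRate, double momentum) {
            double gradient = 2.0 * w[0];                  // dE/dw for E = w^2
            velocity = momentum * velocity - learningRate * gradient;
            w[0] += velocity;
            return velocity;
        }

        public static void main(String[] args) {
            for (double lr : new double[] {1.1, 0.1}) {    // 1.1 diverges, 0.1 converges
                double[] w = {1.0};                        // initial weight
                double velocity = 0.0;
                System.out.println("learning rate = " + lr);
                for (int i = 1; i <= 5; i++) {
                    velocity = step(w, velocity, lr, 0.0); // momentum 0.0 = plain GD
                    System.out.printf("  iter %d: error = %.4f%n", i, w[0] * w[0]);
                }
            }
        }
    }

As for scaling: the usual fix is to standardize each input feature to zero mean and unit variance before training. Unscaled inputs make the error surface badly conditioned, which forces an even smaller learning rate to keep the updates stable.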

Upvotes: 1
