Adam12344

Reputation: 1053

Neural Network Not Learning, Converging on one output

I am trying to program a neural network and I am now testing it. I have simplified it down to 2 training examples with 2 inputs and 1 output.

Input : Output
1,0   :   1  
1,1   :   0

I cycle through forward- and back-propagation 1,000 times, and the network's output always converges to 1 or 0, depending on where the randomly initialized weights start. No matter what input I feed in, the output is the same. It does not learn.

I'm not sure how to ask for help without overloading you with all of my code, so I will post what I am doing:

Create random initial weights
For i = 1 to 1000
  For j = 1 to samples in training set (2)
    Forward-prop: set activations (sigmoid function)
    Back-prop:
      delta = sum of (deltas in next layer * weights connecting this node to those next-layer nodes) * act * (1 - act)
      Weights = Weights + lambda(0.05) * delta * x(i)
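
In case the shape of the loop matters, here is a minimal runnable sketch in Python/NumPy of what I described above. The 2-2-1 layer sizes, the absence of bias terms, and all variable names are assumptions for illustration, not my actual code:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # assumed 2-2-1 architecture; my real layer sizes are not shown above
    W1 = rng.uniform(-1.0, 1.0, size=(2, 2))   # input -> hidden weights
    W2 = rng.uniform(-1.0, 1.0, size=(1, 2))   # hidden -> output weights

    X = np.array([[1.0, 0.0], [1.0, 1.0]])     # the two training inputs
    y = np.array([1.0, 0.0])                   # targets
    lam = 0.05                                 # learning rate (lambda above)

    for epoch in range(1000):
        for x, t in zip(X, y):
            a1 = sigmoid(W1 @ x)                 # forward-prop: hidden activations
            a2 = sigmoid(W2 @ a1)                # forward-prop: output activation
            d2 = (t - a2) * a2 * (1.0 - a2)      # output-layer delta
            d1 = (W2.T @ d2) * a1 * (1.0 - a1)   # hidden-layer deltas via back-prop
            W2 += lam * np.outer(d2, a1)         # Weights += lambda * delta * input to weight
            W1 += lam * np.outer(d1, x)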

Is there anything that I seem to be doing wrong? Is there some or all of the code that I should post? Any suggestions on what else I should test? I have been checking everything by hand in Excel, and everything seems to work the way I expect (forward-prop, delta calculations, etc.).

Upvotes: 1

Views: 1740

Answers (1)

Josh

Reputation: 63

If you are trying to train it to do XOR, then you should use all four training examples ((0,0 -> 0), etc.). Do not equate calculating the outputs of your network with backpropagation; backpropagation refers to calculating error values for hidden-layer neurons.

Backpropagation is an algorithm in itself; multilayer perceptrons use it to (loosely speaking) "infer" error values for hidden-layer neurons. For a single hidden-layer neuron, backpropagation works by summing each weight going forward from that neuron multiplied by the error value of the neuron it connects to.
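
In code terms, that sum looks roughly like the sketch below (all names are made up for illustration, not taken from the question's code):

    def hidden_delta(h, a_h, w_next, delta_next):
        # sum over each weight going forward from hidden neuron h,
        # multiplied by the error value (delta) of the neuron it feeds
        forward_error = sum(w_next[k][h] * delta_next[k]
                            for k in range(len(delta_next)))
        # scale by the sigmoid derivative at this neuron's activation
        return forward_error * a_h * (1.0 - a_h)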

Train your network on all four examples; it should not take more than 10,000 epochs to converge well, and 1,000 may be fine.
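
For reference, the full training set would look something like this in NumPy, matching the sketch in the question:

    import numpy as np

    # all four XOR input/output pairs
    X = np.array([[0.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 0.0],
                  [1.0, 1.0]])
    y = np.array([0.0, 1.0, 1.0, 0.0])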

Another note: regardless of the learning problem, neural networks (and pretty much every machine learning algorithm) generally perform better with more data (training examples).

Upvotes: 0
