basickarl

Reputation: 40444

Artificial Neural Network: Multi-Layer Perceptron ORDER/PROCESS

I am currently learning pattern recognition. I have a 7-year background in programming, so I think like a programmer.

The documentation on ANNs tells me nothing about the order in which everything is processed, or at least does not make it very clear. This is frustrating because I don't know how to code the formulas.

I found a nice GIF which I hope is correct. Can someone please give me a step-by-step walkthrough of backpropagation in an artificial neural network with, for example, 2 inputs, 1 hidden layer with 3 nodes, and 2 outputs, using the sigmoid?

Here is the GIF.

Upvotes: 1

Views: 154

Answers (2)

Martin Skalský

Reputation: 166

As Emile said, you go layer by layer from input to output, and then you propagate the error backwards, again layer by layer.

From what you have said, I expect you are trying to make an "object-oriented" implementation where every neuron is an object. But that is neither the fastest nor the easiest way. The usual implementation is done with matrix operations, where every layer is described by a single matrix (every row contains the weights of one neuron plus its threshold).

This MATLAB code should do the trick:

output_hidden = logsig( hidden_layer * [inputs ; 1] );

inputs is a column vector of inputs to the layer.

hidden_layer is the weight matrix, with one extra column holding the thresholds of the hidden layer's neurons.

output_hidden is again a column vector containing the outputs of all neurons in the layer, which can be used as the input to the next layer.

logsig is a function that applies the sigmoid transform to each element of the vector, one by one.

[inputs ; 1] creates a new vector with a 1 appended to the end of the column vector inputs; it is there because the thresholds need a "virtual input" to be multiplied with.
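Translated into Python/NumPy (a sketch only; the weights and inputs are made-up example values, and logsig, hidden_layer, inputs mirror the MATLAB names above), the same forward step for the asker's 2-input, 3-neuron hidden layer looks like this:

```python
import numpy as np

def logsig(x):
    # elementwise sigmoid, equivalent to MATLAB's logsig
    return 1.0 / (1.0 + np.exp(-x))

# hypothetical weights: each row holds one neuron's two input weights,
# and the last column holds that neuron's threshold
hidden_layer = np.array([[0.1, 0.2, 0.3],
                         [0.4, 0.5, 0.6],
                         [0.7, 0.8, 0.9]])

inputs = np.array([[0.5],
                   [0.25]])                  # column vector of inputs

# append the "virtual input" 1 so the last column acts as the threshold
augmented = np.vstack([inputs, [[1.0]]])

output_hidden = logsig(hidden_layer @ augmented)
print(output_hidden)                         # one value per hidden neuron
```

The matrix product `hidden_layer @ augmented` performs the weighted sum over all inputs for every neuron at once, which is exactly the point of the matrix formulation.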

If you think about it, you will see that the matrix multiplication does exactly the summation over all inputs multiplied by their weights, and you will also see that it does not matter in what order you process the neurons within a layer. To implement this in any other language, just find yourself a good linear-algebra library. Implementing backpropagation is a bit trickier, and you will need to do some matrix transpositions (i.e. flipping a matrix along its diagonal).
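To make the transposition remark concrete, here is a hedged NumPy sketch of one backpropagation step for the asker's 2-3-2 sigmoid network; the weight initialisation, learning rate, input, and target are all made up for illustration:

```python
import numpy as np

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# 2 inputs -> 3 hidden -> 2 outputs; the last column of each matrix is the threshold
W1 = rng.normal(size=(3, 3))   # hidden layer: 3 neurons x (2 inputs + 1)
W2 = rng.normal(size=(2, 4))   # output layer: 2 neurons x (3 hidden + 1)

x  = np.array([[0.5], [0.25]])  # input column vector
t  = np.array([[1.0], [0.0]])   # target output
lr = 0.5                        # learning rate

# forward pass, layer by layer
a0 = np.vstack([x, [[1.0]]])
h  = logsig(W1 @ a0)
a1 = np.vstack([h, [[1.0]]])
y  = logsig(W2 @ a1)

# backward pass: deltas flow from the output layer back to the hidden layer
delta2 = (y - t) * y * (1 - y)               # output delta for squared error
# transpose W2 (dropping its threshold column) to send the error backwards
delta1 = (W2[:, :3].T @ delta2) * h * (1 - h)

# gradient-descent weight updates
W2 -= lr * delta2 @ a1.T
W1 -= lr * delta1 @ a0.T
```

Note where the transpose appears: `W2[:, :3].T` routes each output neuron's error back to the hidden neurons along the same weights used in the forward pass.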

Upvotes: 1

Emile

Reputation: 2230

As you can see in the GIF, processing is done per layer. Since there are no connections within a layer, the processing order within a layer does not matter. Using the ANN (classifying) is done from the input layer through the hidden layers to the output layer. Training (using backpropagation) is done from the output layer back to the input layer.
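The order described above can be written down as a trivial sketch (the layer names are purely illustrative):

```python
layers = ["input", "hidden", "output"]

# classifying: visit layers from input to output
forward_order = list(layers)

# training (backpropagation): visit layers from output back to input
backward_order = list(reversed(layers))

print(forward_order)
print(backward_order)
```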

Upvotes: 0
