ale

Reputation: 11830

Equations for outputs of nodes in hidden and output layers of a Neural Network

Hey guys, I'm new to neural networks. I want to know how to come up with equations for the outputs of nodes in the hidden and output layers of a neural network. I would like to know the answer to the question below and how you arrived at it. I haven't been able to find any approachable reading material on this either.

Assume I have a binary classification problem. Assume that I have a multi-layer neural network with one hidden layer. Assume that I have a sigmoid activation function given by f(z) = 1/(1 + e^-z). Does anyone know how I find the equation for the output of the nodes in the hidden layer and the output of the nodes in the output layer?

Thanks guys, any help would be great.

Upvotes: 1

Views: 1666

Answers (1)

Throwback1986

Reputation: 6005

I reduced a three-layer NN to a set of equations (1 input node, 3 hidden nodes, 1 output node), and I ended up with those shown in the image. (Note: I'm assuming the image upload worked; images are blocked by my company's content filter.)

  1. I labeled the output of each node as o, subscripted as {layer,neuron}.
  2. The weights were labeled as w with subscripts indicating {to_layer,neuron} and superscripts indicating {from_layer,neuron}.
  3. The bias terms b were subscripted as {layer,neuron}.

As shown, the scaled NN input (Cet) was formulated as the output of the node on layer 1 (labeled as Eqn 3 in the pic). My sigmoidal activation function resembled yours (Eqn 4). From there, the output of layer 2, node 1 was computed (Eqn 5), then output of layer 2, node 2 (Eqn 6), then output of layer 2, node 3 (Eqn 7).

The output (BISt in my pic) was then computed as the weighted sum of the hidden layer activations - which was then passed through the activation function.
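The forward pass described above can be sketched in a few lines of Python. This is a minimal illustration, not the answerer's actual code: the weights, biases, and the input value 0.5 are made-up placeholders, and the layer sizes (1 input, 3 hidden, 1 output) follow the network described above.

```python
import math

def sigmoid(z):
    # Activation function: f(z) = 1 / (1 + e^-z)
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: each of the 3 nodes computes f(w * x + b)
    hidden = [sigmoid(w * x + b) for w, b in zip(w_hidden, b_hidden)]
    # Output layer: weighted sum of hidden activations plus bias,
    # then passed through the activation function
    z_out = sum(w * h for w, h in zip(w_out, hidden)) + b_out
    return sigmoid(z_out)

# Made-up weights and biases, purely for illustration
y = forward(x=0.5,
            w_hidden=[0.1, -0.4, 0.7], b_hidden=[0.0, 0.2, -0.1],
            w_out=[0.3, 0.5, -0.2], b_out=0.1)
print(y)  # a single value in (0, 1), usable as a binary class probability
```

Because the sigmoid squashes the output into (0, 1), the final value can be thresholded (e.g. at 0.5) for the binary classification case the question asks about.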

[Figure: the derived equations, labeled Eqn 3 through Eqn 7]

This strategy worked well for my application.

Upvotes: 1
