user8262209

Which layers in neural networks have weights/biases and which don't?

I've heard several different accounts of how weights and biases are set up in a neural network, and it's left me with a few questions:

Which layers use weights? (I've been told the input layer doesn't; are there others?)

Does each layer get a global bias (1 per layer)? Or does each individual neuron get its own bias?

Upvotes: 1

Views: 3785

Answers (1)

Joshua R.

Reputation: 2302

In common textbook networks like a multilayer perceptron, each hidden layer and the output layer (in a regressor) have weights; in a classifier, so does every layer up to the softmax-normalized output. The input layer does not. Every node in those layers has a single bias of its own, so biases are per-neuron rather than per-layer.
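
To make that concrete, here's a minimal sketch in PyTorch (my choice of framework; the layer sizes are invented for illustration). Only the Linear layers carry parameters, and each bias vector has one entry per neuron:

    import torch.nn as nn

    # The input "layer" is just the incoming tensor; it has no parameters.
    model = nn.Sequential(
        nn.Linear(4, 8),    # hidden layer: weight (8, 4), bias (8,)
        nn.ReLU(),          # activation: no weights, no biases
        nn.Linear(8, 3),    # output layer: weight (3, 8), bias (3,)
        nn.Softmax(dim=1),  # normalized output: no weights, no biases
    )

    for name, p in model.named_parameters():
        print(name, tuple(p.shape))
    # 0.weight (8, 4)
    # 0.bias (8,)      <- one bias per hidden neuron, not one per layer
    # 2.weight (3, 8)
    # 2.bias (3,)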

Here's a post that I find particularly helpful in explaining the conceptual function of this arrangement:

http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/

Essentially, the combination of weights and biases allows the network to form intermediate representations that are rotations, scalings, and distortions (the distortions thanks to the nonlinear activation functions) of the previous layer's representation, ultimately linearizing the relationship between input and output.

This arrangement can also be expressed by the simple linear-algebraic expression L2 = sigma(W L1 + B), where L1 and L2 are the activation vectors of two adjacent layers, W is a weight matrix, B is a bias vector, and sigma is an activation function applied elementwise. That compactness is part of what makes the arrangement mathematically and computationally appealing.
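
Here's a minimal sketch of that expression in NumPy (the library, the layer sizes, and the choice of sigmoid for sigma are mine, purely for illustration):

    import numpy as np

    def sigma(x):
        return 1.0 / (1.0 + np.exp(-x))   # sigmoid, applied elementwise

    rng = np.random.default_rng(0)
    L1 = rng.normal(size=3)       # activations of the previous layer (3 neurons)
    W = rng.normal(size=(5, 3))   # one row of weights per neuron in the next layer
    B = rng.normal(size=5)        # one bias per neuron in the next layer

    L2 = sigma(W @ L1 + B)        # activations of the next layer, shape (5,)
    print(L2)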

Upvotes: 1
