netflux

Reputation: 3838

Activation function when training a single layer perceptron

When training a multi-layer neural network, a differentiable activation function such as the sigmoid seems to be necessary so that errors can be backpropagated through the layers (a step function has zero gradient almost everywhere).

Is there any advantage to using a sigmoidal activation function when training a single layer perceptron, or is a simple step (heaviside) function sufficient (or even preferable)?

I'm slowly getting my head around neural networks but any help with this would be appreciated.

Upvotes: 2

Views: 2215

Answers (1)

Georg Schölly

Reputation: 126165

Yes, there is an advantage. With a sigmoid the output can be anything between 0 and 1, so instead of a hard YES or NO the neuron can also express a MAYBE. Even for a single-neuron model it's better to have a non-step activation function.

Whether you need it depends on how your output is read out: do you need binary (YES/NO) values, or also something in between?

I think you could also use a linear function if you don't want to use a sigmoidal one.
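To illustrate the difference, here is a minimal sketch (my own toy example, not from the question) comparing the two on a single neuron learning logical AND: the step version uses the classic perceptron learning rule and can only output hard 0/1 decisions, while the sigmoid version is trained with logistic-regression-style gradient descent and yields graded "maybe" values between 0 and 1.

```python
import math

def train_step(data, lr=0.1, epochs=50):
    """Classic perceptron learning rule with a Heaviside step activation."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            y = 1.0 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0.0
            err = target - y  # -1, 0, or +1; no gradient involved
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def train_sigmoid(data, lr=0.5, epochs=500):
    """Stochastic gradient descent with a sigmoid activation
    (cross-entropy loss, i.e. logistic regression on one neuron)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            y = 1.0 / (1.0 + math.exp(-z))
            grad = y - target  # d(loss)/dz for cross-entropy + sigmoid
            w[0] -= lr * grad * x[0]
            w[1] -= lr * grad * x[1]
            b -= lr * grad
    return w, b

# Linearly separable training data: logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b = train_step(data)
step_preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]

w, b = train_sigmoid(data)
sig_outputs = [1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
               for x, _ in data]

print(step_preds)                           # hard 0/1 decisions only
print([round(y, 2) for y in sig_outputs])   # graded confidences in (0, 1)
```

The step perceptron can only ever answer 0 or 1, whereas the sigmoid neuron's outputs read as confidences (e.g. a value near 0.9 for the (1, 1) input and near 0 for the others), which is exactly the "MAYBE" described above.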

Upvotes: 3
