Reputation: 702
Is it possible to get an output from a neural network that is arbitrarily large? I know that the activation function doesn't need to be a sigmoid, but whenever I try to use a linear one (i.e., the identity, effectively no activation function), my outputs rapidly drop to near zero and everything falls apart.
As an example, is it possible to have a network where the output is double the input, even if the output is a non-integer larger than 1?
Sorry if this is a duplicate question (it seems like it would be), but I couldn't find a thread that dealt with this exact problem. I will post code if needed, but there is a lot of it and this seems like a general problem.
Upvotes: 2
Views: 1135
Reputation: 66805
There is no limit on the output values, as long as you use an unbounded activation function in the output layer and you do not constrain your weights "too much" (regularization methods such as weight decay push your network toward small weights).
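As a quick illustration (a minimal sketch using NumPy, not the asker's actual code), a single linear unit trained with plain gradient descent learns to double its input and happily produces outputs well above 1, since nothing squashes them:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)   # inputs, many well above 1
t = 2.0 * x                        # targets: double the input

w, b = 0.0, 0.0                    # one linear unit: y = w*x + b
lr = 0.01
for _ in range(1000):
    y = w * x + b                       # identity output, unbounded
    grad_w = np.mean(2 * (y - t) * x)   # dMSE/dw
    grad_b = np.mean(2 * (y - t))       # dMSE/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)            # ~2.0 and ~0.0
print(w * 7.5 + b)     # ~15.0 -- no sigmoid, so the output isn't capped at 1
```

The same holds for a deep network: keep the output layer linear (hidden layers can still use bounded activations) and avoid heavy weight decay, and the outputs can grow as large as the task requires.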
Upvotes: 3