Reputation: 41
I'm using a neural network to solve a regression problem. I've scaled all the values to fall in the interval [0,1].
Therefore, all the training inputs and outputs are in [0,1].
However, when I run the network for some test examples, the values are going below 0. How can I get over this? I want all the values to be in [0,1].
Upvotes: 0
Views: 154
Reputation: 29081
If by "scaled all the values in [0,1]" you mean normalization of the dataset, then only the input vectors are guaranteed to be in [0,1]. The raw output of a neuron can take any value; it is the activation function that maps that output into [0,1] or [-1,1]. Since some of your outputs are below zero, your network's output layer is probably using the tansig function as its activation. Change it to the logsig function, which has the same S-shape but produces output in [0,1] instead of [-1,1].
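The difference between the two activations can be sketched in plain Python. (tansig and logsig are the MATLAB Neural Network Toolbox names; the underlying formulas are the standard hyperbolic tangent and logistic sigmoid, written out here as an illustration, not the questioner's actual code.)

```python
import math

def logsig(x):
    # Logistic sigmoid: 1 / (1 + e^-x), range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    # Hyperbolic tangent: range (-1, 1), so it can go negative
    return math.tanh(x)

# Same inputs, different output ranges:
for x in (-5.0, 0.0, 5.0):
    print(f"x={x:+.1f}  tansig={tansig(x):+.4f}  logsig={logsig(x):.4f}")
```

For a strongly negative input such as x = -5, tansig returns a value near -1 while logsig returns a value near 0 but still positive, which is why swapping the output activation keeps predictions inside [0,1].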
Upvotes: 1