Reputation: 9
I am wondering if a neural network is capable of regressing target values that are very close in value. For instance:
input [100 150 200 300]
output [0.99903 0.99890 0.99905 0.99895]
Or should the target data be preprocessed first?
Upvotes: 0
Views: 2066
Reputation: 3088
kwatford is right, normalize your data!
Theoretically, a neural network can learn such a target. But in practice we work on real computers with imprecise floating-point representations of real numbers. Now consider this: you train your neural network, and during training the current prediction is:
input [100 150 200 300]
output [0.99905 0.99890 0.99903 0.99895]
I just swapped the predictions for the inputs 100 and 200. Each of the two swapped outputs is off by only 2e-5, so the sum of squared errors is 2 * (2e-5)^2 = 8e-10. The values you then add to the weights of the neural network will be even smaller. With single-precision floating-point numbers, a value this small is already a problem. An example in GNU Octave demonstrates this:
single(0.99905)+single(1e-10)
ans = 0.99905
That means with most ANN implementations it is simply not feasible. So: normalize your data. :)
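The same effect can be reproduced in Python with NumPy (a minimal sketch; the literals mirror the Octave example above):

```python
import numpy as np

# Near 1.0, the gap between adjacent float32 values is about 6e-8,
# so adding 1e-10 to 0.99905 in single precision changes nothing.
x = np.float32(0.99905)
update = np.float32(1e-10)

print(np.float32(x + update) == x)  # True: the tiny update is swallowed
```

Any gradient update smaller than that gap is lost entirely, which is exactly why spreading the targets out via normalization helps.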
Upvotes: 0
Reputation: 23905
The three rules of input/output values for a neural network:
Try a few normalization schemes on the data and see how far apart the output points are afterwards. Don't forget to apply it to the inputs as well, of course.
PCA can be helpful as well if your data has several dimensions, but this data is one-dimensional.
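As a sketch of one such scheme (assuming NumPy; the arrays are the values from the question, and zero-mean/unit-variance scaling is just one option to try):

```python
import numpy as np

# Inputs and targets from the question.
x = np.array([100.0, 150.0, 200.0, 300.0])
y = np.array([0.99903, 0.99890, 0.99905, 0.99895])

# Standardize both: subtract the mean, divide by the standard deviation.
x_norm = (x - x.mean()) / x.std()
y_norm = (y - y.mean()) / y.std()

print(y_norm)  # targets now span roughly [-1.4, 1.2] instead of ~1.5e-4
# After training on (x_norm, y_norm), map predictions back with:
# y_pred = y_pred_norm * y.std() + y.mean()
```

After this scaling the target differences are on the order of 1 rather than 1e-4, so the resulting gradients are well within what single precision can represent.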
Upvotes: 3