Reputation: 667
I'm writing a program which uses a simple neural network perceptron algorithm, and I have a problem with floating-point arithmetic. When I do this
summ = x1*w1+x2*w2-t
summ gets the value of -2.77555756156e-17. This happens only when:
x1 = 1.0
w1 = 0.2
x2 = 0.0
w2 = 0.1
t = 0.2
If I do this in the Python console, everything is fine, but in the program it gives me that strange value. I actually print each value and they all look normal, yet summ still goes crazy.
Here is the full code: http://tinypaste.com/1b8f9dd6. The calculation is done at the top, in the first function.
By the way, I'm using Python 2.7. I am confused as hell.
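For reference, this is the isolated check I run at the console (just the calculation by itself, not the full program), and it prints exactly 0.0:

    x1, w1 = 1.0, 0.2
    x2, w2 = 0.0, 0.1
    t = 0.2
    summ = x1*w1 + x2*w2 - t
    print(summ)  # 0.0 at the console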
Upvotes: 0
Views: 253
Reputation: 86818
The number -2.77555756156e-17 is -0.0000000000000000277555756156, which as you can tell is approximately 0. Apparently you are expecting exactly zero, but floating-point arithmetic is not exact for most numbers. That's just how math in computers works.
Of course, if you type in 1.0 * 0.2 + 0.0 * 0.1 - 0.2, then you get the correct answer, but your program is actually computing 1 * 0.19999999999999998 + 1 * 0 - 0.20000000000000001.
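Here is a minimal sketch of how that can happen (the subtraction below is just a stand-in for whatever weight updates your training loop performs; it is not taken from your code):

    w1 = 0.3 - 0.1       # one ULP below 0.2: really 0.19999999999999998
    t = 0.2
    summ = 1.0 * w1 + 0.0 * 0.1 - t
    print(w1)            # 0.2  (Python 2.7's str() rounds to 12 significant digits)
    print(repr(w1))      # 0.19999999999999998
    print(summ)          # -2.77555756156e-17, exactly the value you reported

That also explains why printing each value looked normal: print uses str(), which rounds the stored 0.19999999999999998 back to 0.2.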
Upvotes: 3
Reputation: 613461
A Python program will not give different answers from those obtained at the console, provided that the same calculation is performed on the same input data.
I think, therefore, that if you check carefully you will see that the values used in the program differ slightly from those used at the console. With the numbers you give, the calculation will indeed return precisely zero, so I suspect the numbers used in the program are not quite what you believe them to be.
All the same, floating point is not exact; it is accurate only to finite precision. Once you start performing arithmetic, you should make your comparisons to a specified tolerance rather than testing for exact equality. To test for zero, check that the absolute value of the result is less than some small tolerance of your choosing.
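For instance, a tolerance test along these lines (the epsilon here is illustrative; choose one appropriate to the scale of your problem):

    EPSILON = 1e-9   # illustrative tolerance, not a universal constant

    def is_zero(x, eps=EPSILON):
        return abs(x) < eps

    summ = -2.77555756156e-17
    print(is_zero(summ))   # True: the result is zero to within tolerance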
Upvotes: 2
Reputation: 346486
From The Floating-Point Guide:
Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
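You can see this rounding directly by asking Python for the exact binary value stored for the literal 0.1; a quick illustration using the standard decimal module:

    from decimal import Decimal

    # Decimal(float) exposes the exact value 0.1 was rounded to on input
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625
    print(repr(0.1 + 0.2))   # 0.30000000000000004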
Upvotes: 3