Reputation: 11
I wrote a simple Python program that gives me the same value for machine epsilon as the NumPy command np.finfo(float).eps. The code is:
eps = 1
while eps + 1 != 1:
    eps /= 2
print(eps)
But I didn't want to stop there! I used smaller and smaller divisors for eps, for example:
eps = 1
while eps + 1 != 1:
    eps /= 1.1
print(eps)
With this, I got a value of 1.158287085355336e-16 for epsilon. I noticed that epsilon was converging to a number; my last attempt, with a divisor of 1.0000001, gave me the value 1.1102231190697707e-16.
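To make the experiment easier to repeat, here is a small sketch with a helper (probe is just a name I made up for illustration) that runs the same loop for several divisors:

def probe(divisor):
    eps = 1.0
    while eps + 1 != 1:
        eps /= divisor
    return eps

# As the divisor gets closer to 1, the results settle near 1.11e-16.
# The smallest divisor takes tens of seconds to finish.
for d in (2, 1.1, 1.01, 1.0000001):
    print(d, probe(d))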
Is this value closer to the real value of epsilon for my PC? I think I'm not considering something important and my line of thinking is wrong.
Thank you in advance for the help!
Upvotes: 1
Views: 3774
Reputation: 214
Your program is almost fine. It should be
eps = 1
while eps + 1 != 1:
    eps /= 2
print(2 * eps)
The result is 2.220446049250313e-16, which is the machine epsilon. You can check this with this piece of code:
import sys
sys.float_info.epsilon
The reason we should multiply by 2 in the last line is that the loop goes one division too far: it exits only after eps has been halved once more than needed.
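As a quick sanity check (a minimal sketch, assuming IEEE 754 binary64 floats, which is what CPython uses):

import sys

eps = 1
while eps + 1 != 1:
    eps /= 2
# The loop exits one halving too late, so doubling eps recovers machine epsilon.
assert 2 * eps == sys.float_info.epsilon  # 2.220446049250313e-16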
Upvotes: 1
Reputation: 349
The term "machine epsilon" is not consistently defined, so it's better to avoid that term and instead say what specifically you're talking about:
ulp(1), the magnitude of the least significant digit, or unit in the last place, of 1. This is the distance from 1 to the next larger floating-point number. More generally, ulp(x) is the distance from x to the next larger floating-point number in magnitude.
In general, for floating-point in radix β with p digits of precision (including the implicit leading digit), ulp(1) = β^(1 − p).
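For binary64 (radix 2, 53 digits of precision) that gives ulp(1) = 2^(1 − 53) = 2^(−52). A quick check, using math.ulp (available since Python 3.9):

import math
import sys

# ulp(1) in binary64: radix 2, 53 digits of precision, so 2**(1 - 53).
print(math.ulp(1.0))           # 2.220446049250313e-16
print(2.0 ** -52)              # the same value
print(sys.float_info.epsilon)  # also the same value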
The relative error bound, sometimes also called unit roundoff or u. A floating-point operation may round the outcome of a mathematical function such as x + y, giving fl(x + y) = (x + y)⋅(1 + δ) for some relative error δ.
For basic arithmetic operations in IEEE 754 (+, −, *, /, sqrt), the result of the floating-point operation is guaranteed to be correctly rounded, which in the default rounding mode means it yields the nearest floating-point number, or one of the two nearest such numbers if x + y lies exactly halfway between them.
In general, when floating-point arithmetic in radix β with p digits of precision is correctly rounded to nearest, δ is bounded in magnitude by the relative error bound (β/2)⋅β^(−p).
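For binary64 that bound is (2/2)⋅2^(−53) = 2^(−53) ≈ 1.11e-16, which is the number the question's divisors-close-to-1 experiments are homing in on. A short sketch:

# Relative error bound for binary64: radix 2, 53 digits of precision.
beta, p = 2, 53
u = (beta / 2) * beta ** -p
print(u)  # 1.1102230246251565e-16, i.e. 2**-53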
What the Python iteration while 1 + eps != 1: eps /= 2 computes, starting with eps = 1, is the relative error bound in binary64 floating-point, since that's the floating-point format essentially all Python implementations use.
If you had a version of Python that worked in a different radix, say b, you would instead want to use while 1 + eps != 1: eps /= b.
If you do eps /= 1.1 or eps /= 1.0001, you will get an approximation to the relative error bound erring on the larger side, with no particular significance to the result.
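A sketch of such a run (where the loop stops depends on where the divisor's powers happen to land around 2^(−53)):

eps = 1.0
while 1 + eps != 1:
    eps /= 1.0001
# The final eps lands within one factor-of-1.0001 step of 2**-53;
# the exact value carries no particular significance.
print(eps)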
Note that sys.float_info.epsilon is ulp(1), rather than the relative error bound. They are always related: ulp(1)/2 is the relative error bound in every floating-point format.
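A one-line confirmation of that relationship in binary64:

import sys

# ulp(1)/2 is the relative error bound: 2**-52 / 2 == 2**-53.
print(sys.float_info.epsilon / 2 == 2 ** -53)  # True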
Upvotes: 3
Reputation: 156
If you want the actual machine epsilon for a Python float on your PC, you can get it from the epsilon attribute of sys.float_info. By the way, on my x86-64 machine, numpy.finfo(float).eps gives me 2.220446049250313e-16, which is the expected machine epsilon for a 64-bit float.
Your intuition for trying to find the value eps such that 1 + eps != 1 is True is good, but machine epsilon is an upper bound on the relative error due to rounding in floating-point arithmetic. Not to mention, the inexact nature of floating-point arithmetic can be mystifying sometimes: note that 0.1 + 0.1 + 0.1 != 0.3 evaluates to True. Also, if I modify your code to
eps = 1
while eps + 9 != 9:
    eps = eps / 1.0000001
print(eps)
I get, after around half a minute,
8.881783669690459e-16
The relative error in this case is 8.881783669690459e-16 / 9 = 9.868648521878288e-17.
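That relative error is indeed below the bound (a minimal check, assuming IEEE 754 binary64):

import sys

rel_err = 8.881783669690459e-16 / 9
# The relative rounding error stays below machine epsilon, as expected.
print(rel_err)                           # 9.868648521878288e-17
print(rel_err < sys.float_info.epsilon)  # True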
Upvotes: 0