W.H

Reputation: 11

Finding the value of machine epsilon using Python

I wrote a simple program in Python that gives me the same value of machine epsilon as the NumPy command np.finfo(float).eps. The code is:

eps=1
while eps+1 != 1:
    eps /= 2
    print(eps)

But I didn't want to stop there! I divided eps by smaller and smaller numbers, for example:

eps=1
while eps+1 != 1:
    eps /= 1.1
    print(eps)

With this, I got a value of 1.158287085355336e-16 for epsilon. I noticed that epsilon was converging to a number; my last attempt, with a divisor of 1.0000001, gave me the value 1.1102231190697707e-16.
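
For example, a small script that repeats the experiment for several divisors and compares against the NumPy value (last_eps is just an illustrative helper; divisors closer to 1 need many more iterations):

import numpy as np

def last_eps(divisor):
    # Illustrative helper: run the loop and return the first
    # value of eps for which eps + 1 == 1.
    eps = 1.0
    while eps + 1 != 1:
        eps /= divisor
    return eps

for d in (2, 1.5, 1.1, 1.001):
    print(d, last_eps(d))
print("np.finfo(float).eps =", np.finfo(float).eps)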

Is this value closer to the real value of epsilon for my PC? I think I'm not considering something important and my line of thinking is wrong.

Thank you in advance for the help!

Upvotes: 1

Views: 3774

Answers (3)

Herman Jaramillo

Reputation: 214

Your program is almost fine.

It should be

eps = 1
while eps + 1 != 1:
    eps /= 2          # the final pass performs one halving too many
print(2 * eps)        # so undo the last halving to recover epsilon

The result is 2.220446049250313e-16, which is the machine epsilon. You can check it with this piece of code:

import sys
print(sys.float_info.epsilon)   # 2.220446049250313e-16

We multiply by 2 in the last line because the loop goes one division too far: it exits only after eps has already become too small to change 1.

Upvotes: 1

Floating Pundit

Reputation: 349

The term β€œmachine epsilon” is not consistently defined, so it's better to avoid the term and instead say specifically what you're talking about:

  1. ulp(1), the magnitude of the least significant digit, or Unit in the Last Place, of 1. This is the distance from 1 to the next larger floating-point number. More generally, ulp(π‘₯) is the distance from π‘₯ to the next larger floating-point number in magnitude.

    • In binary64 floating-point, with 53 bits of precision, ulp(1) is 2⁻⁡² β‰ˆ 2.220446049250313 × 10⁻¹⁢.
    • In decimal64 floating-point, with 16 digits of precision, ulp(1) is 10⁻¹⁡.

    In general, for floating-point in radix 𝛽 with 𝑝 digits of precision (including the implicit 1 digit), ulp(1) = 𝛽¹⁻ᡖ.

  2. The relative error bound, sometimes also called unit roundoff or u. A floating-point operation may round the outcome of a mathematical function such as π‘₯ + 𝑦, giving fl(π‘₯ + 𝑦) = (π‘₯ + 𝑦)β‹…(1 + 𝛿) for some relative error 𝛿.

    For basic arithmetic operations in IEEE 754 (+, βˆ’, *, /, sqrt), the result of computing the floating-point operation is guaranteed to be correctly rounded, which in the default rounding mode means it yields the nearest floating-point number, or one of the two nearest such numbers if π‘₯ + 𝑦 lies exactly halfway between them.

    • In binary64 floating-point, with 53 bits of precision, the relative error of an operation correctly rounded to nearest is at most 2⁻⁡³ β‰ˆ 1.1102230246251565 × 10⁻¹⁢.
    • In decimal64 floating-point, with 16 digits of precision, the relative error of an operation correctly rounded to nearest is at most 5 × 10⁻¹⁢.

    In general, when floating-point arithmetic in radix 𝛽 with 𝑝 digits of precision is correctly rounded to nearest, 𝛿 is bounded in magnitude by the relative error bound (𝛽/2)·𝛽⁻ᡖ, as the sketch below illustrates.
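
Here is a minimal sketch of that bound in Python, using the fractions module to compute the exact rounding error of a single addition:

from fractions import Fraction

x, y = 1.0, 2.0**-53
s = x + y                               # floating-point sum: rounds to 1.0
exact = Fraction(x) + Fraction(y)       # the exact mathematical sum
delta = (Fraction(s) - exact) / exact   # relative error of the rounded sum
print(float(abs(delta)))                # about 1.11e-16
print(abs(delta) <= Fraction(2)**-53)   # True: within the binary64 bound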

What the Python iteration while 1 + eps != 1: eps /= 2 computes, starting with eps = 1., is the relative error bound in binary64 floating-point, since that's the floating-point that essentially all Python implementations use.
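
Indeed, a quick check (assuming binary64 floats, which is what CPython's float is on essentially every platform):

eps = 1.0
while 1 + eps != 1:
    eps /= 2
print(eps == 2.0**-53)   # True: the loop lands exactly on the relative error bound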

If you had a version of Python that worked in a different radix, say b, you would instead want to use while 1 + eps != 1: eps /= b. If you do eps /= 1.1 or eps /= 1.0001, you will get an approximation to the relative error bound erring on the larger side, with no particular significance to the result.

Note that sys.float_info.epsilon is ulp(1), rather than the relative error bound. They are always related: ulp(1)/2 is the relative error bound in every floating-point format.
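
Both quantities are easy to inspect in Python (math.ulp requires Python 3.9 or later):

import math
import sys

print(math.ulp(1.0))             # 2.220446049250313e-16, i.e. ulp(1)
print(sys.float_info.epsilon)    # the same value
print(math.ulp(1.0) / 2)         # 1.1102230246251565e-16, the relative error bound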

Upvotes: 3

phetdam

Reputation: 156

If you want the actual machine epsilon for a Python float on your PC, you can get it from the epsilon attribute of sys.float_info. By the way, on my x86-64 machine, numpy.finfo(float).eps gives me 2.220446049250313e-16, which is the expected machine epsilon for a 64-bit float.

Your intuition of trying to find a value eps such that 1 + eps != 1 is True is good, but machine epsilon is an upper bound on the relative error due to rounding in floating-point arithmetic. The inexact nature of floating-point arithmetic can also be mystifying at times: note that 0.1 + 0.1 + 0.1 != 0.3 evaluates to True. Also, if I modify your code to

eps = 1
while eps + 9 != 9:
    eps = eps / 1.0000001
print(eps)

I get, after around half a minute,

8.881783669690459e-16

The relative error in this case is 8.881783669690459e-16 / 9 = 9.868648521878288e-17.
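
To make that comparison explicit (a sketch reusing the value from the run above):

import sys

rel_err = 8.881783669690459e-16 / 9
print(rel_err)                            # about 9.87e-17
print(rel_err < sys.float_info.epsilon)   # True: below machine epsilon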

Upvotes: 0
