Foitn

Reputation: 746

Float epsilon is different in c++ than c#

So, a small question: I've been looking into moving part of my C# code to C++ for performance reasons. When I look at `float.Epsilon` in C#, its value is different from the C++ value.

In C# the value, as described by Microsoft, is 1.401298E-45.

In C++ the value, as described by cppreference, is 1.19209e-07.

How can it be that the smallest possible value for a float/single can be different between these languages?

If I'm correct, the binary representations should be equal in number of bytes and maybe even in their bit patterns. Or am I looking at this the wrong way?

Hope someone can help me, thanks!

Upvotes: 3

Views: 1428

Answers (3)

Dave Doknjas

Reputation: 6542

From the link you referenced, you should use the value FLT_TRUE_MIN ("minimum positive value of float") if you want something similar to .NET Single.Epsilon ("smallest positive single value that is greater than zero").

Upvotes: 0

DownloadPizza

Reputation: 3466

C++

Returns the machine epsilon, that is, the difference between 1.0 and the next value representable by the floating-point type T. https://en.cppreference.com/w/cpp/types/numeric_limits/epsilon

C#

Represents the smallest positive Single value that is greater than zero. This field is constant. https://learn.microsoft.com/en-us/dotnet/api/system.single.epsilon?view=net-5.0

Conclusion

C#'s `Epsilon` is the next representable value after 0; C++'s `epsilon()` is the gap between 1 and the next representable value. Two completely different things.

Edit: The other answer is probably more correct

Upvotes: 2

Matthew Watson

Reputation: 109862

The second value you quoted is the machine epsilon for IEEE binary32 values.

The first value you quoted is NOT the machine epsilon. From the documentation you linked:

The value of the Epsilon property is not equivalent to machine epsilon, which represents the upper bound of the relative error due to rounding in floating-point arithmetic.

From the wiki Variant Definitions section for machine epsilon:

The IEEE standard does not define the terms machine epsilon and unit roundoff, so differing definitions of these terms are in use, which can cause some confusion.

...

The following different definition is much more widespread outside academia: Machine epsilon is defined as the difference between 1 and the next larger floating point number.

The C# documentation is using that variant definition.

So the answer is that you are comparing two different types of Epsilon.

Upvotes: 4
