Reputation: 4510
I'm trying to compute the cosine of 4203708359 radians in C#:
var x = (double)4203708359;
var c = Math.Cos(x);
(4203708359 can be exactly represented in double precision.)
I'm getting
c = -0.57977754519440394
Windows' calculator gives
c = -0.579777545198813380788467070278
PHP's cos(double) function (which internally just uses cos(double) from the C standard library) on Linux gives:
c = -0.57977754519881
C's cos(double) function in a simple C program compiled with Visual Studio 2017 gives:
c = -0.57977754519881342
Here is the definition of Math.Cos() in C#: https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Math.cs#L57-L58
It appears to be a built-in function. I haven't dug into the C# compiler yet to check what this effectively compiles to, but that's probably the next step.
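One quick way to confirm the "built-in" impression at runtime (just a reflection sketch; the attribute in the linked source already suggests it):

// (needs using System.Reflection)
MethodInfo cos = typeof(Math).GetMethod(nameof(Math.Cos));
Console.WriteLine(cos.GetMethodImplementationFlags());
// I'd expect this to report InternalCall, i.e. the body is provided by the
// runtime rather than by managed IL.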
In the meantime:
Why is the precision so poor in my C# example, and what can I do about it?
Is it simply that the cosine implementation in the C# compiler deals poorly with large integer inputs?
Edit 1: Wolfram Mathematica 11.0:
In[1] := N[Cos[4203708359], 50]
Out[1] := -0.57977754519881338078846707027800171954257546099993
Edit 2: I do need that level of precision, and I'm ready to go pretty far to obtain it. I'd be happy to use an arbitrary-precision library if a good one exists that supports cosine (my efforts haven't turned one up so far).
Edit 3: I posted the question on coreclr's issue tracker: https://github.com/dotnet/coreclr/issues/12737
Upvotes: 5
Views: 997
Reputation: 4510
Regarding this part of my question: "Why is the precision so poor in my C# example", the coreclr developers answered here: https://github.com/dotnet/coreclr/issues/12737
In a nutshell, .NET Framework 4.6.2 (x86 and x64) and .NET Core on x86 appear to use Intel's x87 FPU instructions (fcos or fsincos), which give inaccurate results, while .NET Core on x64 (as well as PHP, Visual Studio 2017 and gcc) uses more accurate, presumably SSE2-based implementations that give correctly rounded results.
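A quick way to see the two code paths side by side (assuming a runtime new enough to have RuntimeInformation; otherwise IntPtr.Size tells you the bitness) is to build the same snippet once for x86 and once for x64 and compare:

using System;
using System.Runtime.InteropServices;

class Program
{
    static void Main()
    {
        // Build once as x86 and once as x64; the printed cosine differs
        // depending on which implementation the runtime picks.
        // "R" round-trips all significant digits of the double.
        Console.WriteLine(RuntimeInformation.ProcessArchitecture);
        Console.WriteLine(Math.Cos(4203708359.0).ToString("R"));
    }
}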
Upvotes: 2
Reputation:
Presumably, the salts are stored along with each password. You could use the PHP code to calculate that cosine and store it with the password as well. I would then also add a password version number and default all of those older passwords to version 1. Then, in your C# code, you implement a new hashing algorithm for any new passwords and store those hashes as version 2. For any version 1 password, you don't have to calculate the cosine to authenticate: you simply use the value stored along with the password hash and the salt.
The programmer of that PHP code was probably trying to do a clever version of pepper. By storing that cosine (the pepper) along with the salt and the password hashes, you essentially turn that pepper into a second salt. So another, versionless way of doing this would be to use two salts in your C# hashing code: for new passwords you could leave the second salt blank or assign it some other way; for old passwords it would be that cosine, which is already calculated.
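In code, the version dispatch might look something like this (the names and delegate signatures are hypothetical, and the hash functions themselves are left as parameters since they aren't specified here):

using System;

class PasswordRecord
{
    public int Version;     // 1 = legacy scheme (cosine stored as a second salt), 2 = new scheme
    public string Salt;
    public string Cosine;   // version 1 only: the precomputed cos value from the PHP side
    public string Hash;
}

static class PasswordCheck
{
    public static bool Verify(PasswordRecord rec, string password,
        Func<string, string, string, string> legacyHash,  // (password, salt, cosine) -> hash
        Func<string, string, string> newHash)             // (password, salt) -> hash
    {
        // Version 1 rows reuse the stored cosine; version 2 rows use the new algorithm.
        return rec.Version == 1
            ? legacyHash(password, rec.Salt, rec.Cosine) == rec.Hash
            : newHash(password, rec.Salt) == rec.Hash;
    }
}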
Upvotes: 2
Reputation: 2243
I think I might know the answer. I'm pretty sure the sin/cos libraries don't take arbitrarily large numbers and compute the sin/cos of them directly - instead they reduce the argument down to a small range (between 0 and 2π?) and compute it there. After all, cos(x) = cos(x + 2π) = cos(x + 4π) = ...
The problem is: how is the program supposed to reduce your 10-digit number? Realistically, it should figure out how many times it needs to multiply 2π to get a value just below your number, then subtract that out. In your case, that multiplier is about 670 million.
So it's multiplying 2π by a 9-digit value, which means it effectively loses 9 digits' worth of significance from the math library's version of π.
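Just to put a rough number on it, here's an illustration of that naive reduction done entirely in doubles (not the library's actual algorithm, just a back-of-the-envelope sketch):

// pi is only good to ~16 significant digits, and the quotient below has
// ~9 digits, so the reduced angle keeps only ~6-7 trustworthy digits.
double x = 4203708359.0;
double twoPi = 2 * Math.PI;
double n = Math.Floor(x / twoPi);       // ~670 million whole periods
double reduced = x - n * twoPi;         // absolute error on the order of 1e-6 rad
Console.WriteLine(Math.Cos(reduced));   // so only the first ~6-7 digits can be trusted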
I ended up writing a little function to test what was going on:
private double reduceDown(double start)
{
    // Do the range reduction in decimal (28-29 significant digits) so that
    // multiplying pi by a ~9-digit quotient doesn't eat into the result.
    decimal startDec = (decimal)start;
    decimal pi = decimal.Parse("3.1415926535897932384626433832795");
    decimal tau = pi * 2;
    int num = (int)(startDec / tau);      // number of whole periods (fits in an int for this input)
    decimal x = startDec - (num * tau);   // remainder in [0, 2*pi)
    double retVal;
    double.TryParse(x.ToString(), out retVal);
    return retVal;
    //return start - (num * tau);
}
All this does is use the decimal data type as a way of reducing the value without losing digits of precision from pi - it still returns a double. When I call it with a modification of your code:
var x = (double)4203708359;
var c = Math.Cos(x);
double y = reduceDown(x);
double c2 = Math.Cos(y);
MessageBox.Show(c.ToString() + Environment.NewLine + c2);
return;
... sure enough, the second one is accurate.
So my advice is: if you really need radians that high, and you really need the accuracy, do something like the function above and reduce the number down on your end in a way that doesn't lose digits of precision.
Upvotes: 2