Dafydd Giddins

Reputation: 2316

Serializing a double with JsonSerializer results in JSON with lower precision in .NET 5.0

While upgrading from .NET Core 2.2 to .NET 5.0, we experienced failing tests that depend on consistent serialization of objects to generate a hash. I have tracked this down to the way the double type is serialized when passed to the JsonSerializer.Serialize method. During serialization we seem to be losing precision: the last digit of a double gets rounded away, but not for every number.

For example:

var d = 50.494329039350461;
var dAsText = System.Text.Json.JsonSerializer.Serialize(d);
// dAsText == "50.49432903935046"

When we deserialize, we get the original number back, but we need to act on the serialized data to generate our hash. Why has this behaviour changed between frameworks: is it a bug, or is it intended? Are there any settings we can change to restore the previous implementation (while remaining on .NET 5, of course)? The same behaviour can be seen with Newtonsoft Json (I have raised an issue with them as well).
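For example, the round trip itself is lossless (a quick check using the value above):

var d = 50.494329039350461;
var text = System.Text.Json.JsonSerializer.Serialize(d);               // "50.49432903935046"
var back = System.Text.Json.JsonSerializer.Deserialize<double>(text);
Console.WriteLine(back == d);                                          // True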

The behaviour seems consistent on Windows 10 x64 and in a Linux Docker image. This is running in Visual Studio 2019 (latest update).

The problem can also be seen with these numbers:

50.494328391915907
30.316339899700989
50.494128852095287
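A minimal loop over these values (the same Serialize call as above) shows the truncation for each:

foreach (var value in new[] { 50.494328391915907, 30.316339899700989, 50.494128852095287 })
{
    // Each printed string has fewer digits than the corresponding literal
    Console.WriteLine(System.Text.Json.JsonSerializer.Serialize(value));
}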

Upvotes: 2

Views: 2697

Answers (1)

Evk

Reputation: 101453

I think it's a consequence of the Floating-Point Parsing and Formatting improvements in .NET Core 3.0. There are many changes, so the whole article is worth reading, but in particular:

ToString(), ToString("G"), and ToString("R") will now return the shortest roundtrippable string

Not sure about System.Text.Json, but Newtonsoft Json uses the "R" specifier when writing a double (which makes sense). Using d.ToString("R") reproduces the issue.
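A minimal repro, independent of any JSON library:

var d = 50.494329039350461;
Console.WriteLine(d.ToString("R")); // "50.49432903935046" on .NET Core 3.0+ / .NET 5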

Note that your number, and the number it serializes to, are actually equal:

var d = 50.494329039350461d;
var d2 = 50.49432903935046d;
var sameThing = d == d2; // true
var sameBytes = BitConverter.GetBytes(d).SequenceEqual(BitConverter.GetBytes(d2)); // true (SequenceEqual needs System.Linq)

So the last "1" digit is effectively meaningless (due to double's limited precision). As I understand from the article, the shortest roundtrippable string is now returned, and d2's text is indeed the shorter one.
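You can also confirm the round-trip property directly (a quick check, assuming invariant culture):

var parsed = double.Parse("50.49432903935046", System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine(parsed == 50.494329039350461); // True: the shorter string still parses to the same double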

The article also says:

For ToString("R"), there is no mechanism to fallback to the old behavior. The previous behavior would first try “G15” and then using the internal buffer would see if it roundtrips; if that failed, it would instead return “G17”.

And indeed, using d.ToString("G17") returns the result you expect.
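For example:

var d = 50.494329039350461;
Console.WriteLine(d.ToString("G17")); // 50.494329039350461 (all 17 significant digits)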

So, this is intended behaviour (the previous behaviour was considered a "bug" and fixed), and as far as I can tell there are no settings to get back to it.
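If you really need the old "G17"-style text in the JSON output, one possible workaround is a custom converter. This is only a sketch, not anything the article suggests, and it relies on Utf8JsonWriter.WriteRawValue, which was only added in .NET 6, so it won't help on .NET 5 itself:

using System;
using System.Globalization;
using System.Text.Json;
using System.Text.Json.Serialization;

// Hypothetical converter: writes doubles using "G17" instead of the
// shortest roundtrippable form. Requires .NET 6+ for WriteRawValue.
public class G17DoubleConverter : JsonConverter<double>
{
    public override double Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        => reader.GetDouble();

    public override void Write(Utf8JsonWriter writer, double value, JsonSerializerOptions options)
        // Only valid for finite values; "G17" of NaN/Infinity is not legal JSON
        => writer.WriteRawValue(value.ToString("G17", CultureInfo.InvariantCulture));
}

Usage:

var options = new JsonSerializerOptions { Converters = { new G17DoubleConverter() } };
Console.WriteLine(JsonSerializer.Serialize(50.494329039350461, options)); // 50.494329039350461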

Upvotes: 1
