Joel Blum

Reputation: 7878

JavaScript Number Representation

It's a famous example that in JavaScript, logging 0.1 + 0.2 to the console yields

0.1 + 0.2 = 0.30000000000000004
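For example, any browser or Node.js console reproduces it:

    console.log(0.1 + 0.2);         // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3); // false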

The typical explanation is that this happens because of the way JavaScript represents numbers. I have two questions about that:

1) Why does JavaScript decide how to represent numbers? Isn't it the job of the "environment" (whatever compiles or runs the code, be it the browser or something else) to decide how it wants to represent numbers?

2) Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)? I mean, if this behavior isn't really good (and most would agree it isn't), why is it impossible to fix? (Douglas Crockford has pointed out other JavaScript flaws, for example the weird behavior of 'this', and they have been that way for 20 years.) What is preventing JavaScript from fixing these mistakes?

Upvotes: 0

Views: 688

Answers (1)

T.J. Crowder

Reputation: 1074208

Why does JavaScript decide how to represent numbers? Isn't it the job of the "environment"

That would be chaos. By having JavaScript define the behavior of its fundamental types, we can rely on them behaving in that way across environments.

Okay, "chaos" is rather strong. I believe C never defined what float and double actually were other than some range limits, and it would be fair to say that C was and arguably is wildly successful, "chaos" and all. Still, the modern trend is to nail things down a bit more.

Why is it impossible to fix this behavior to match most programming languages (Java, C++, etc.)?

This is the behavior of most modern programming languages. Most of them use IEEE-754 single-precision (often "float") and double-precision (often "double") floating point numbers; JavaScript's Number type is the IEEE-754 double-precision format.
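As a sketch of what that means in practice: the digits below are the double-precision approximations actually stored, so a Java or C++ double yields the same sum (nearlyEqual is just a hypothetical helper name for the usual tolerance-based comparison):

    // 0.1 and 0.2 have no exact binary representation; asking for
    // more digits reveals the approximations actually stored.
    console.log((0.1).toFixed(20));       // "0.10000000000000000555"
    console.log((0.2).toFixed(20));       // "0.20000000000000001110"
    console.log((0.1 + 0.2).toFixed(20)); // "0.30000000000000004441"

    // The usual remedy: compare with a tolerance instead of ===.
    const nearlyEqual = (a, b) => Math.abs(a - b) < Number.EPSILON;
    console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true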

Upvotes: 2
