Reputation: 1316
Do you have any examples where writing out the expression for a number leads to floating-point errors, while using scientific notation does not?
For example, I always give some examples where scientific notation is useful in engineering, such as specifying the Young's modulus of steel and the diameter of a bar:
E = 210e9; % Pa
d = 10e-3; % m
Some students, however, do not understand the E notation correctly, so many of them declare the variables as:
E = 210*10^9; % Pa
d = 10*10^-3; % m
Although I explain that the E notation is easier to read, faster to type, etc., some students still specify very large or very small values by multiplying by 10 raised to the nth power. I also try to explain that declaring variables this way involves evaluating a power and a multiplication, which may introduce floating-point arithmetic errors.
Do you know any examples where declaring variables this way, instead of using the E notation, would lead to an erroneous value?
Upvotes: 3
Views: 138
Reputation: 63
I guess this is a matter of personal preference; there is nothing wrong with writing the power of 10 explicitly. The slight difference shown by James Tursa is interesting, but in practice it will almost never change anything.
Personally, I am not a fan of the scientific notation, here are some arguments.
Objective arguments:
• The letter e is already used for a mathematical constant (the base of the natural logarithm), written upright, according to the standard ISO 80000-2:2019 Quantities and units - Part 2: Mathematics. Using e for something else can therefore lead to confusion. But this could be solved by using E.
• The scientific notation using e or E is not mentioned in the standard ISO 80000-2:2019 Quantities and units - Part 2: Mathematics.
Subjective arguments:
• I find it harder to read, because the e or E is attached to the number: 210e9 vs 210 * 10^9.
• If you need to multiply a variable by a power of 10 to change scale, you have no choice but to multiply by a power of 10; the e or E notation cannot be used there. So it makes sense to always use the same notation rather than switching between two notations depending on the context:
d = 20 * 10^-3; % m
d_mm = d * 10^3;
vs
d = 20e-3; % m
d_mm = d * 10^3;
• Finally, not related to MATLAB, but in Excel (I don't know about other spreadsheet software), if you type 20e9, Excel will display 2.00E+10 but the cell content will be replaced with 20000000000, which is very annoying to read and modify. Whereas if you enter = 20 * 10^9, Excel will display 2.00E+10 but the cell content will remain = 20 * 10^9, which is easy to read and modify.
The only advantage I see for the scientific notation is that it is compact, which can be useful in some cases when manipulating a spreadsheet.
Upvotes: 0
Reputation: 60434
I think the more convincing argument is that 210*10^9
is not portable. It works in MATLAB, but it won't work in a majority of programming languages.
In most languages that have inherited from C, ^
is the bitwise XOR operator (this is true in C, C++, Python, Java, and likely most other languages your students will run into in their careers).
In these languages, 210*10^9 evaluates to 2109, because * binds tighter than ^, so the expression computes (210*10) XOR 9, which is not the intended number.
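You can see this directly in Python, where ^ is the bitwise XOR operator, just as in C:

```python
# In Python, * binds tighter than ^, and ^ is bitwise XOR,
# so 210*10^9 means (210*10) ^ 9, not 210 times 10 to the 9th.
print(210 * 10 ^ 9)   # 2109, a silently and wildly wrong result
print(210 * 10**9)    # 210000000000, the intended value
```

The worst part is that no error is raised: the wrong expression runs fine and produces a plausible-looking integer.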
Here's how you can write this number in different languages:
• C: 210e9 or 210 * pow(10, 9) (or, on a POSIX system, 210 * pow10(9))
• C++: 210e9 or 210 * std::pow(10, 9)
• Go: 210e9 or 210 * math.Pow(10, 9)
• Java and JavaScript: 210e9 or 210 * Math.pow(10, 9)
• R: 210e9 or 210 * 10^9
• Python, Ruby, Fortran, and Perl: 210e9 or 210 * 10**9
• Kotlin: 210e9 or 210 * 10.pow(9)
• Swift: 210e9 or 210 * pow(10.0, 9.0)
Looking at that list, there is one easy way to write the number that works the same in all of those languages, and a hard way that is different in every one. Which would you prefer?
Note that there are a few languages where the E notation doesn't work: Wikipedia lists Simula, where you'd write 210&9, and Mathematica, where you'd write 210*^9. Mathematica is used extensively today, but it is more a symbolic math tool than a general programming environment. Simula is an outdated language; I don't think many people use it today.
Upvotes: 2
Reputation: 2636
A small example might help your students. E.g., 0.3 is a simple case that gave me the same one-bit difference in MATLAB, Python, Java, and C. Java and C are not directly part of this discussion, since there you have to call a raise-to-power function instead of using an operator, but I wanted to see whether their library code produced the same results, and it did. Regardless, here is the MATLAB demo:
>> x1 = 0.3;
>> x2 = 3e-1;
>> x3 = 3*10^(-1);
>> x1 == x2
ans =
logical
1
>> x2 == x3
ans =
logical
0
>> format hex
>> x1
x1 =
3fd3333333333333
>> x2
x2 =
3fd3333333333333
>> x3
x3 =
3fd3333333333334
>> double(sym('0.3')) % best possible result using symbolic engine and converting
ans =
3fd3333333333333
>> 0.3 == 300000000000000e-15 % a somewhat absurd case but still matches
ans =
logical
1
>> 0.3 == 0.0000000000000003e15 % another absurd case but still matches
ans =
logical
1
The syntax using the 3e-1 notation got the best result; the 3*10^(-1) notation produced a slightly less accurate result, off by one bit. I imagine most if not all modern languages will give you the IEEE double-precision bit pattern closest to the decimal string when using the e notation, whether the parsing happens at compile time as in Java or C, or dynamically as in MATLAB and Python. Library functions such as raise-to-power are in general not required by language specs to produce the closest possible floating-point bit pattern, so this alone is probably reason enough to steer your students toward the e notation. Combined with a subsequent multiplication by another value, this can easily produce differences, as shown in this case.
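For what it's worth, the same one-bit difference is reproducible in Python; here is a sketch using float.hex() to inspect the bit patterns, analogous to format hex in the MATLAB demo above:

```python
x1 = 0.3           # plain decimal literal
x2 = 3e-1          # e notation
x3 = 3 * 10**-1    # power-and-multiply

print(x1 == x2)    # True: both parse to the nearest double
print(x2 == x3)    # False: the product rounds to a different double

# Inspect the underlying bit patterns of the significand
print(x1.hex())    # 0x1.3333333333333p-2
print(x2.hex())    # 0x1.3333333333333p-2
print(x3.hex())    # 0x1.3333333333334p-2  (off by one ulp)
```

As in MATLAB, the literal and the e notation agree exactly, while evaluating the power and multiplying picks up a one-ulp rounding error.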
I would hesitate to call the 10^(etc.) notation "erroneous" as you suggest, however. That is probably too strong a word, since floating-point calculations in general can't be trusted in their trailing bits anyway. But since the e notation is very likely to give you the result most accurate to the intent, why lose accuracy (even if it is only a bit or so) with the 10^(etc.) notation if you don't have to?
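The claim that the e notation yields the double closest to the intended decimal can be checked directly; here is a sketch in Python using the decimal module to print the exact value stored in each double, playing the same role as the sym('0.3') check in the MATLAB demo (the variable names are just illustrative):

```python
from decimal import Decimal

exact   = Decimal('0.3')         # the intended decimal value, exactly
via_e   = Decimal(3e-1)          # exact expansion of the double parsed from e notation
via_pow = Decimal(3 * 10**-1)    # exact expansion of the power-and-multiply double

print(via_e)                     # 0.2999999999999999888977697537...
print(via_pow)                   # 0.3000000000000000444089209850...

# The e-notation double is strictly closer to the intended 0.3
assert abs(via_e - exact) < abs(via_pow - exact)
```

Decimal(x) for a float expands the stored binary value exactly, so this comparison shows, without any further rounding, that the e-notation literal is the best possible double for 0.3 while the evaluated expression is one ulp away.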
A possible way to handle this: Dock point(s) on assignments/tests when students do the 10^(etc.) syntax, but allow them to change the code and turn it in again to get the point(s) back. That forces them to think about it and correct their code and learn without being overly punitive.
Upvotes: 3