Josiah Yoder

Reputation: 3776

What is the minimum number of floating-point operations needed to get a one-cent error when computing monetary values with double?

A sequence of arithmetic operations (+, -, *, /, round) is performed only on monetary values of 1 trillion dollars or less (1e12 USD), rounded to the nearest penny. What is the minimum number of double-precision floating-point operations mirroring these operations needed to produce a rounding error of one penny or more in the result?

In practice, how many operations is it safe to perform when computing results with double-precision numbers before rounding?

This question is related to Why not use Double or Float to represent currency? but seeks a specific example of a problem with using double-precision floating point not currently found in any of the answers to that question.

Of course, double values MUST be rounded before comparisons such as ==, <, >, <=, >=, etc. And double values MUST be rounded for display. But this question asks how long you can keep double-precision values unrounded without risking a rounding error with realistic constraints on the sorts of calculations being performed.

This question is similar to the question Add a bunch of floating-point numbers with JavaScript, what is the error bound on the sum?, but is less constrained in that multiplication and division are allowed. Frankly, I may have constrained the question too little, because I'm really hoping for an example of a rounding error that could plausibly occur in ordinary business.


It has become clear in the extended discussion on the first answer that this question is ill-formulated because of the inclusion of "round" in the operations.

I feel the ability to occasionally round to the nearest cent is important, but I'm not sure how best to define that operation.

Similarly, I think rounding to the nearest dollar could be justified, e.g., in a tax environment where such rounding is (for who knows what reason) actually encouraged, though not required, by US tax law.

Yet I find the current first answer to be dissatisfying because it feels as if cent rounding followed by banker's rounding would still produce the correct result.

Upvotes: 1

Views: 385

Answers (3)

chux

Reputation: 154592

A sequence of arithmetic operations (+, -, *, /, round) is performed only on monetary values of 1 trillion dollars or less (1e12 USD), rounded to the nearest penny.

The premise has a problem: only $xxx,xxx,xxx,xxx,xxx.yy values where .yy is .00, .25, .50, or .75 can meet that requirement with double. All other values are not truly rounded to the nearest penny, just to something close. Let us assume money variables are always rounded to the nearest 0.01 as closely as they can be represented by a double.
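
For illustration, here is a small sketch (mine, not part of the original answer) using the 15-digit pattern above; near that magnitude consecutive doubles are 0.125 apart, so a .01 cent ending snaps to .00 while .25 is stored exactly:

#include <stdio.h>

int main(void) {
  /* Near 1e15, consecutive doubles are 2^-3 = 0.125 apart, so only
     cent endings that are multiples of a quarter are stored exactly. */
  double not_exact = 999999999999999.01;   /* snaps to ...999.00 */
  double exact     = 999999999999999.25;   /* exactly representable */
  printf("%.17f\n", not_exact);   /* 999999999999999.00000000000000000 */
  printf("%.17f\n", exact);       /* 999999999999999.25000000000000000 */
}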

With + or - of money in the $trillion range, using 1.0 as $1.00, the unit in the last place for 1.0e12 (1 trillion US $) is .0001220703125. Values that are meant to be exact to the penny could then be as much as 0.00006103515625 off, or about 1/164 of a cent. It is easy to reason that adding up about 164 such values could incur an off-by-1-cent error as compared to decimal math.
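
Those unit-in-the-last-place figures can be checked directly; this is a small sketch (mine, under the same 1.0-as-$1.00 convention), not code from the answer:

#include <math.h>
#include <stdio.h>

int main(void) {
  double trillion = 1.0e12;   /* $1,000,000,000,000.00 with 1.0 meaning $1.00 */
  double ulp = nextafter(trillion, INFINITY) - trillion;
  printf("ULP at 1e12                  : %.17f\n", ulp);        /* 0.0001220703125 */
  printf("Max rounding error (ULP / 2) : %.17f\n", ulp / 2);    /* 0.00006103515625 */
  printf("Half-ULP errors per cent     : %.2f\n", 0.01 / (ulp / 2));  /* about 164 */
}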

With * of money, it makes little sense to multiply two monetary values; rather, money is multiplied by a factor, say an interest rate. Given that an interest rate could be any double, a simple round_to_the_cent(money * rate) could readily be off by 1 cent as compared to the same money computed as a decimal.


Example: off by $0.01 with 1 multiply and 1 round

Consider a money calculation involving some M * rate whose product, on paper, is $xxxxxx.yy5, where M and rate are not exactly representable with a double. On paper it rounds to $xxxxxx.yy0 or $xxxxxx.yy0 + 0.01. With double, it is a coin flip whether it will match to the penny.

#include <stdio.h>

int main() {
  double money = 1000.05;
  double rate = 1.90; // 190 %
  double product = money * rate;
  printf("Decimal precise  : $1900.095\n");
  printf("Computer precise : $%.17f\n", product);
  printf("Decimal rounded  : $1900.10\n");  // Ties to even, or ties away
  printf("Computer rounded : $%.2f\n", product);
}

Output

Decimal precise  : $1900.095
Computer precise : $1900.09499999999979991
Decimal rounded  : $1900.10
Computer rounded : $1900.09

In any case, wait a few years. Supposedly C2x will provide decimal floating-point types.

Upvotes: 1

Arc

Reputation: 462

Just take a penny, divide it by two, round (which, by banker's rounding, gives you zero), then multiply by 2, that is,

round(0.01 / 2, 2) * 2 

where the second parameter to round says to round to two decimal places (whole pennies); the result is zero.
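
In C terms, a minimal sketch of this (mine; round_to_cents_half_even is a made-up helper built on rint, whose default rounding mode is round-to-nearest, ties-to-even):

#include <math.h>
#include <stdio.h>

/* Hypothetical helper: round to whole cents with banker's rounding,
   relying on rint's default round-to-nearest, ties-to-even mode. */
static double round_to_cents_half_even(double x) {
  return rint(x * 100.0) / 100.0;
}

int main(void) {
  double half_penny = 0.01 / 2;
  double rounded = round_to_cents_half_even(half_penny);
  printf("0.01 / 2 rounded to cents : %.2f\n", rounded);       // 0.00
  printf("... then multiplied by 2  : %.2f\n", rounded * 2);   // 0.00: the penny is gone
}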

Note that there have been some disasters (see also here) due to incorrect rounding, including the index crash of the Vancouver Stock Exchange.

Furthermore, note that sub-penny bookkeeping is required in some financial applications, for example in some stock exchanges down to $0.0001, as in this filing. There is some additional info in this Quantitative Finance question, and in this one on this site.

Upvotes: 1

Eric Postpischil

Reputation: 224576

At most three.

Presumably, IEEE-754 binary64, also known as “double precision,” is used.

.29 rounds to 0.289999999999999980015985556747182272374629974365234375. Multiplying by 50 produces 14.4999999999999982236431605997495353221893310546875, after which round produces 14. However, with real-number arithmetic, .29•50 would be 14.5 and would round to 15. (Recall the round function is specified to round half-way cases away from zero.)
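
Under the same binary64 assumption, a short sketch (mine, not the answer's) reproduces those three values:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = .29;                      /* nearest binary64 value is slightly below .29 */
    double y = x * 50;                   /* 14.4999..., not 14.5 */
    printf(".29      -> %.99g.\n", x);
    printf(".29 * 50 -> %.99g.\n", y);
    printf("round    -> %g.\n", round(y));   /* 14, whereas real-number 14.5 rounds to 15 */
}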

The preceding uses rounding to an integer. Here is an example using rounding to the nearest “cent,” that is, to two digits after the decimal point. On a C implementation using IEEE-754 binary64 semantics with round-to-nearest ties-to-even, this program:

#include <math.h>
#include <stdio.h>


int main(void)
{
    printf(".55 -> %.99g.\n", .55);
    printf(".55/2 -> %.99g.\n", .55/2);
    printf("Rounded to two digits after decimal point -> %.2f.\n", .55/2);
    printf("1.15 -> %.99g.\n", 1.15);
    printf("1.15/2 -> %.99g.\n", 1.15/2);
    printf("Rounded to two digits after decimal point -> %.2f.\n", 1.15/2);
}

produces this output:

.55 -> 0.5500000000000000444089209850062616169452667236328125.
.55/2 -> 0.27500000000000002220446049250313080847263336181640625.
Rounded to two digits after decimal point -> 0.28.
1.15 -> 1.149999999999999911182158029987476766109466552734375.
1.15/2 -> 0.5749999999999999555910790149937383830547332763671875.
Rounded to two digits after decimal point -> 0.57.

The real-number results of the divisions would be .275 and .575. Any ordinary tie-breaker rule for round-to-nearest would round these in the same direction (upward produces .28 and .58, downward produces .27 and .57, to-even produces .28 and .58). But the IEEE-754 binary64 computations round in different directions, one up and one down. Therefore one of the floating-point results does not match the desired real-number result regardless of which tie-breaker rule is chosen.

Upvotes: 3
