Shivan Dragon

Reputation: 15229

Java, BigDecimal: How do I unit-test for rounding errors?

I will give a simplified example of my actual situation:

Let's say I have to implement some code in Java for calculating the weighted arithmetic mean. I am given two arrays of floating-point values (expressed as doubles) of the same length, the first containing the values and the second containing their respective weights.

Let's also say I write an implementation that returns a floating-point value (also a double) representing the weighted arithmetic mean of the input values:

import java.math.BigDecimal;
import java.math.RoundingMode;

public static double calculateWeightedArithmeticMean(double[] values, 
        double[] weights) {

    if(values.length != weights.length) {
        throw new IllegalArgumentException();
    }

    if(values.length == 0) {
        return 0;
    }

    if(values.length == 1) {
        // A single value: its weight cancels out, so just round the value itself.
        return new BigDecimal(values[0]).setScale(1, RoundingMode.HALF_UP).
                doubleValue();
    }

    BigDecimal dividend = BigDecimal.ZERO;
    BigDecimal divisor = BigDecimal.ZERO;
    for(int i = 0; i < values.length; i++) {
        dividend = dividend.add(new BigDecimal(values[i]).
                multiply(new BigDecimal(weights[i])));
        divisor = divisor.add(new BigDecimal(weights[i]));
    }
    if(dividend.compareTo(BigDecimal.ZERO) == 0) {
        // A zero dividend means the mean is 0; this also avoids a 0/0
        // ArithmeticException when all the weights are zero as well.
        return 0d;
    }
    return dividend.divide(divisor, 1, RoundingMode.HALF_UP).doubleValue();
}

To test it, I pick a few inputs (say, 3 values + 3 weights), first calculate their weighted arithmetic mean by hand (using a calculator), and then write a unit test that checks that my code returns that value.
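For instance, a minimal JUnit 4 sketch of such a test (the inputs and the hand-calculated expected value are made up for illustration, and calculateWeightedArithmeticMean from above is assumed to be in scope):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class WeightedMeanTest {

    @Test
    public void threeValuesWithHandCalculatedMean() {
        double[] values = {2.0, 4.0, 6.0};
        double[] weights = {1.0, 2.0, 3.0};
        // Hand-calculated: (2*1 + 4*2 + 6*3) / (1 + 2 + 3) = 28 / 6 = 4.666...,
        // which the method rounds to one decimal: 4.7.
        assertEquals(4.7, calculateWeightedArithmeticMean(values, weights), 0.0001);
    }
}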

I believe such a test is not pertinent once the number of values grows substantially, due to rounding errors. The code I've implemented may work well for 3 values + 3 weights (for a given precision) because the rounding error stays below that precision, but it's quite possible that the accumulated rounding error exceeds the desired precision for 1000 values + 1000 weights.
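To illustrate why that worry is justified, here is a small self-contained sketch (not part of the original post) that sums 0.1 a million times, once in plain double arithmetic and once exactly with BigDecimal; the naive sum drifts away from the exact result by an amount that can easily exceed a desired precision:

import java.math.BigDecimal;

public class RoundingDriftDemo {

    public static void main(String[] args) {
        double naiveSum = 0d;
        BigDecimal exactSum = BigDecimal.ZERO;
        // 0.1 has no exact binary representation, so every addition in
        // plain double arithmetic contributes a tiny representation error.
        for (int i = 0; i < 1_000_000; i++) {
            naiveSum += 0.1;
            exactSum = exactSum.add(new BigDecimal("0.1"));
        }
        System.out.println("naive double sum: " + naiveSum);   // roughly 100000.000001...
        System.out.println("exact sum:        " + exactSum);   // exactly 100000.0
        System.out.println("drift:            "
                + new BigDecimal(naiveSum).subtract(exactSum).abs());
    }
}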

My question is: how do I write a pertinent unit test for the large-input case, where rounding errors may exceed the desired precision and calculating the expected result by hand is no longer feasible?

Upvotes: 3

Views: 3130

Answers (2)

user

Reputation: 745

Yes, you should. Testing should always involve boundary values.

You can provide an epsilon bound within which you assert that an answer is (approximately) correct.
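In JUnit that epsilon is the third argument of assertEquals; a minimal sketch (inputs and tolerance chosen for illustration, again assuming the method from the question is in scope):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class EpsilonBoundaryTest {

    @Test
    public void meanIsApproximatelyCorrect() {
        double[] values = {1.0, 2.0, 3.0};
        double[] weights = {1.0, 1.0, 2.0};
        // Hand-calculated: (1 + 2 + 6) / 4 = 2.25, rounded to one decimal: 2.3.
        // The epsilon makes the assertion tolerant of tiny floating-point
        // deviations instead of demanding bit-exact equality of doubles.
        assertEquals(2.3, calculateWeightedArithmeticMean(values, weights), 1e-9);
    }
}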

Upvotes: 1

Aaron Digulla

Reputation: 328724

When writing unit tests, you always have to give up somewhere. The trick is to give up when you're confident that you know enough :-)

In your case, a few simple test cases are:

  • Empty arrays
  • Create a second algorithm which uses precise arithmetic (like BigDecimal input arrays) to calculate error margins for selected inputs (see the sketch after this list)
  • Two arrays which are filled with the same values. That way, you know the result (it should be the same as the first pair alone).
    • Try to find a pair of numbers which cause large rounding errors (like 1/10, 0.1/1, 0.2/2, which all end up as 0.1, a value which can't be represented exactly as a double)
  • Create input arrays which contain random variances (i.e. ± 1% * rand()). These should even out as you grow the input arrays.
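A hedged sketch combining the second and last ideas above: compute the expected result with a straightforward high-precision reference implementation, feed both algorithms the same randomized (but reproducible) inputs, and assert agreement within a tolerance. The class and method names here are illustrative, and calculateWeightedArithmeticMean from the question is assumed to be in scope:

import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;
import java.math.MathContext;
import java.util.Random;

import org.junit.Test;

public class ReferenceComparisonTest {

    // Reference implementation: same formula, but the division keeps
    // 34 significant digits (DECIMAL128) instead of rounding to one decimal.
    private static double referenceMean(double[] values, double[] weights) {
        BigDecimal dividend = BigDecimal.ZERO;
        BigDecimal divisor = BigDecimal.ZERO;
        for (int i = 0; i < values.length; i++) {
            dividend = dividend.add(new BigDecimal(values[i])
                    .multiply(new BigDecimal(weights[i])));
            divisor = divisor.add(new BigDecimal(weights[i]));
        }
        return dividend.divide(divisor, MathContext.DECIMAL128).doubleValue();
    }

    @Test
    public void largeRandomInputStaysWithinTolerance() {
        Random rnd = new Random(42); // fixed seed keeps the test reproducible
        double[] values = new double[1000];
        double[] weights = new double[1000];
        for (int i = 0; i < values.length; i++) {
            values[i] = 100 * rnd.nextDouble();
            weights[i] = 1 + rnd.nextDouble(); // strictly positive weights
        }
        // The method under test rounds to one decimal, so the two results may
        // legitimately differ by up to 0.05; anything beyond that is an error.
        assertEquals(referenceMean(values, weights),
                calculateWeightedArithmeticMean(values, weights), 0.051);
    }
}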

When comparing the results, use assertEquals(double, double, double) where the first two are the values to compare and the last one is the precision (1e-3 for 3 digits after the decimal point).

And lastly, you need to use the algorithm and see how it behaves. When you find a problem, add a test case for that specific case.

Upvotes: 2
