王晓海

Reputation: 13

Matrix multiplication in C and MATLAB, different result

I am using a fourth-order Runge-Kutta (RK4) solver for the ODE x_dot = Ax + Bu in both MATLAB and C. A is 5x5, x is 5x1, B is 5x1, and u is 1x1 (the output of a sine function, 2500 points). The MATLAB and C outputs of RK4 are identical until the 45th iteration, but at the 45th iteration (of 2500), the output of A*x in the 2nd stage of RK4 differs. Here are the matrices, printed with 30 decimals; A and x are the same in MATLAB and C:

A = [0, 0.100000000000000005551115123126, 0, 0, 0;
-1705.367199390822406712686643004417, -13.764624913971095665488064696547, 245874.405372532171895727515220642090, 0.000000000000000000000000000000, 902078.458362009725533425807952880859;
0, 0, 0, 0.100000000000000005551115123126, 0;
2.811622989796986438193471258273, 0, -572.221510883482778808684088289738, -0.048911651728553134921284595293, 0;
0, 0, -0.100000000000000005551115123126, 0, 0]

x = [0.071662614269441649028635765717 ;
45.870073568955461951190955005586;
0.000002088948888569741376840423;
0.002299524406171214990085571728;
0.000098982102875767145086331744]

But the results of A*x are not the same: the second element in MATLAB is -663.792187417201375865261070430279, while in C it is -663.792187417201489552098792046309.

MATLAB
A*x = [ 4.587007356895546728026147320634
  -663.792187417201375865261070430279
  0.000229952440617121520692600622
  0.200180438762844026268084007825
  -0.000000208894888856974158859866];

C
A*x = [4.587007356895546728026147320634
 -663.792187417201489552098792046309
  0.000229952440617121520692600622
  0.200180438762844026268084007825
  -0.000000208894888856974158859866];

Though the difference is small, I need this result for a finite-difference computation, where the discrepancy becomes much more pronounced.

Does anyone know why?

Upvotes: 0

Views: 157

Answers (1)

Luis Colorado

Reputation: 12668

How many digits do you consider you need? The first 16 digits of each number are equal, which is approximately the amount of precision a double can represent and store internally. You cannot get more: even if you force your printing routines to print more digits, they will print rubbish. If you ask your printing routine for, say, 120 digits, it will print them, but everything beyond the stored precision is meaningless. Because the numbers are represented in base 2, you normally don't get zeros past the internal precision of the number, and printing implementations don't have to agree on the digits produced once there are no more bits represented in the number.

Suppose for a moment you have a hand calculator with only 10 digits of precision, and you are given numbers of 120 digits. You calculate and only get results with 10 digits... but you have been asked to print a report with 120-digit results. As the overall calculation cannot be done with more than 10 digits, what can you do? What's more, the number of base-10 digits in a 52-bit significand is not a whole number: a 52-bit significand holds about 15.65 decimal digits. You can pad with zeros (most probably incorrect), fill those places with rubbish (which will never affect the leading 10-digit result), or go to Radio Shack and buy a 120-digit calculator. Floating-point printing routines use a counter specifying how many times to loop and emit another digit; they stop when the counter reaches its limit, but they make no extra effort to check whether you have asked for more digits than the number actually holds. If you ask for 600 digits, you get 600 loop iterations, but the trailing digits will be fake.

You should expect a relative difference of about one part in 2^52 in a double, as that is the number of binary digits in the significand (approximately 2.220446049250313e-16). Multiply this by the value you output to estimate the size of the rounding error: 663.792187417201375865261070430279 times that gives about 1.4739147400737482e-13, which estimates where the last valid digit of the number lies. The actual error is probably considerably larger, due to the many multiplications and additions required to compute each element. Anyway, a resolution of 1.0e-13 is very good (a subatomic difference, if the values were lengths in meters).

EDIT

as an example, just consider the following program:

#include <stdio.h>

int main(void)
{
    printf("%.156f\n", 0.1);
    return 0;
}

If you run it, you'll get:

0.100000000000000005551115123125782702118158340454101562500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

which indeed is the exact decimal expansion of the internal base-2 floating-point representation of 0.1, the closest value the machine can store (0.1 happens to be a periodic number when represented in base 2). Its binary representation is:

0.0001100110011(0011)*

so it cannot be represented exactly with 52 bits: the pattern 0011 repeats indefinitely, and at some point you have to cut. The printf routine then keeps producing digits to the right until it reaches the representation above. (Every number with a finite number of digits in base 2 is representable with a finite number of digits in base 10, but the converse is not true, because all the prime factors of 2 divide 10 while not all the prime factors of 10 divide 2.)

If you divide the difference between 0.1 and 0.1000000000000000055511151231257827021181583404541015625 by 0.1, you get about 5.5511151231257827e-17, which is 1/2^54, i.e. a quarter of the 1/2^52 limit I showed above. The stored value is the closest number representable with a 52-bit significand to 0.1.

Upvotes: 1
