Reputation: 136
I'm making a C# version of the Burg algorithm using an approach presented by Koen Vos in "A Fast Implementation of Burg’s Method".
I used the GNU Octave arburg function to compare the results against. The results are almost identical when I use decimal for the internal variables in my C# code (they agree to about 0.0000000000001), but quite different when I use double (they only agree to about 0.01). As far as I know, GNU Octave uses 64-bit precision for floats, not 128-bit. Am I wrong?
/* Coefficients for comparison are taken from GNU Octave arburg()
* t = [0:2000];
* x = sin( 2 * pi() * t / (512 / 5.2));
* output_precision(16)
* [a, v, k] = arburg(x(1:512), 4)
*/
The C# code is more than 300 lines, so I think it's better not to post it here.
I think either GNU Octave uses 128-bit precision under the hood, or I have a mistake in my C# code and increasing the precision of the calculations somehow masks it.
So the question is: can floating-point data in GNU Octave (or MATLAB) be 128-bit internally?
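For anyone who wants to reproduce the comparison without the 300-line C# code: below is a minimal sketch of the classic Burg lattice recursion in Python (the textbook formulation, not Vos's fast variant). It is an illustrative reference implementation, not my actual C# code; the sign convention (a[0] = 1, minimizing forward plus backward prediction error) is intended to match arburg, but treat that as an assumption.

```python
import math

def burg(x, order):
    """Textbook Burg lattice recursion (sketch, not the fast Vos variant).

    Returns (a, e): AR coefficients [1, a1, ..., ap] and the final
    prediction-error power.
    """
    n = len(x)
    f = list(x)  # forward prediction errors
    b = list(x)  # backward prediction errors
    a = [1.0]
    e = sum(v * v for v in x) / n
    for m in range(1, order + 1):
        # Reflection coefficient minimizing forward + backward error power.
        num = sum(f[i] * b[i - 1] for i in range(m, n))
        den = sum(f[i] * f[i] + b[i - 1] * b[i - 1] for i in range(m, n))
        k = -2.0 * num / den
        # Levinson update of the AR polynomial.
        a = [1.0] + [a[i] + k * a[m - i] for i in range(1, m)] + [k]
        # Update error sequences in place, backwards so old b values survive.
        for i in range(n - 1, m - 1, -1):
            fi = f[i]
            f[i] = fi + k * b[i - 1]
            b[i] = b[i - 1] + k * fi
        e *= 1.0 - k * k
    return a, e

# The same test signal as in the Octave snippet above.
x = [math.sin(2 * math.pi * t / (512 / 5.2)) for t in range(512)]
a, e = burg(x, 4)
```

For a noiseless sinusoid an order-2 fit should come out very close to [1, -2*cos(w), 1], which is a handy sanity check before comparing against arburg's order-4 output.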
Upvotes: 3
Views: 3159
Reputation: 60660
Octave uses 64-bit floats by default, and there is no way to force it to use a higher precision. It knows only double (64-bit floats) and single (32-bit floats).
Intel (and compatible) processors can compute with 80-bit floats (long double in C), but they don't support 128-bit floats in hardware. Some software emulates 128-bit floats for improved precision, but Octave is not one of them (nor is MATLAB).
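You can see the same 64-bit behavior in any language whose default float is an IEEE-754 double (Python's float is the same 64-bit type as Octave's double). A quick sketch of the kind of accumulation error that a higher-precision type hides:

```python
import sys
from decimal import Decimal, getcontext

# A 64-bit IEEE-754 double has a 53-bit significand,
# i.e. about 15-16 reliable decimal digits.
print(sys.float_info.mant_dig)  # 53

# Summing a value that is not exactly representable accumulates
# rounding error; a recursion like Burg's method can amplify such
# errors further, which is why a bug may only show up at double
# precision and vanish with a 128-bit decimal type.
naive = sum(0.1 for _ in range(10**6))
getcontext().prec = 50
exact = Decimal("0.1") * 10**6
print(naive)                      # slightly off from 100000
print(abs(Decimal(naive) - exact))
```

This is only an illustration of double rounding behavior, not a diagnosis of the asker's specific C# bug.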
Upvotes: 5