Reputation: 436
I need to increase the computation speed of some MATLAB code. For this purpose I rewrote my program in C, using the Intel IPP library for vector operations. And here I hit a problem: after some step of the main computation loop, the MATLAB program and my C program take different paths through the algorithm. This happens because the computations are not exactly equal, and my program accumulates error compared with the MATLAB results. Because of this, my program does not compute the correct gradient, and the whole optimization algorithm performs poorly. So I gained computation speed but lost accuracy: at the 100th step, MATLAB computes an optimization error of 0.004 while the C program computes 0.05, and this is important in my task.
I checked which functions give me the error, and here is what I found: common operations (like ippsAdd_64f_A53, ippsSub_64f_A53, ippsMul_64f_A53, ippsDiv_64f_A53, and the usual C operators +, -, *, /) produce results equal to MATLAB's and the summed error is zero, but the math.h hyperbolic functions give a summed error over an array of 75699 elements of about -3e-13 to -5e-13. The Intel functions ippsCosh_64f_A53 and the others give a summed error of about -1e-14 to -5e-14.
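For reference, here is a minimal sketch of the kind of per-function check I ran (the inputs below are placeholders, not my real data, and I ignore the IPP status codes for brevity):

```c
#include <stdio.h>
#include <math.h>
#include <ipps.h>   /* ippsMalloc_64f, ippsFree */
#include <ippvm.h>  /* ippsCosh_64f_A53 */

int main(void)
{
    const int n = 75699;                    /* array size from my task */
    Ipp64f *src = ippsMalloc_64f(n);
    Ipp64f *dst = ippsMalloc_64f(n);
    double sum_diff = 0.0;

    for (int i = 0; i < n; ++i)             /* placeholder test inputs */
        src[i] = -1.0 + 2.0 * i / (n - 1);

    ippsCosh_64f_A53(src, dst, n);          /* IPP vector cosh */

    for (int i = 0; i < n; ++i)             /* compare with math.h cosh */
        sum_diff += cosh(src[i]) - dst[i];

    printf("summed difference, math.h vs IPP: %g\n", sum_diff);

    ippsFree(src);
    ippsFree(dst);
    return 0;
}
```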
Do you know of a library for computing hyperbolic and exponential functions with high precision? Or maybe there are some compiler settings in Visual Studio 2012 that could help me?
All computations are done in the Ipp64f data type (double) in VS 2012 with Intel Parallel Studio XE 2013 installed.
P.S.: The summed error was computed in MATLAB. I saved the arrays from my C program to a Level 4 MAT file and imported them into MATLAB, where I summed the difference between the MATLAB array and the imported array, like sum(M_cosh - C_cosh);
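In case it matters, here is a minimal sketch of the Level 4 MAT writing step (assuming a little-endian machine and a real n-by-1 double vector; the file and variable names are placeholders):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Write a real double vector as one variable in a Level 4 MAT file.
   The type code 0 in the header means little-endian IEEE doubles,
   full (non-sparse) matrix. */
static int write_mat4(const char *path, const char *name,
                      const double *data, int32_t n)
{
    int32_t header[5];
    FILE *f = fopen(path, "wb");
    if (!f) return -1;

    header[0] = 0;                          /* MOPT type code              */
    header[1] = n;                          /* mrows: store as n-by-1      */
    header[2] = 1;                          /* ncols                       */
    header[3] = 0;                          /* imagf: no imaginary part    */
    header[4] = (int32_t)strlen(name) + 1;  /* name length incl. the NUL   */

    fwrite(header, sizeof(int32_t), 5, f);
    fwrite(name, 1, (size_t)header[4], f);
    fwrite(data, sizeof(double), (size_t)n, f);
    return fclose(f);
}
```

After write_mat4("c_cosh.mat", "C_cosh", result, n), MATLAB can load c_cosh.mat and run sum(M_cosh - C_cosh).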
Upvotes: 1
Views: 1028
Reputation: 78316
Not an answer, more of an extended comment:
You write
I need to increase the computation speed of some MATLAB code
and ask
Do you know of a library for computing hyperbolic and exponential functions with high precision?
Yes, I know of several such libraries, but they implement floating-point numbers with more bits than are typically provided by current CPUs (mainly 32- and 64-bit), and they implement arithmetic on these numbers in software. For your purpose of increasing computation speed, such libraries are useless: their increased precision is explicitly bought at the cost of increased execution time. For many other users that's a reasonable trade-off.
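To make that concrete: GNU MPFR is one such library (my example; it computes correctly rounded results at any chosen precision, entirely in software). A sketch of evaluating cosh with a 113-bit significand, roughly quadruple precision:

```c
#include <stdio.h>
#include <mpfr.h>

int main(void)
{
    mpfr_t x, y;
    mpfr_init2(x, 113);             /* 113-bit significand, ~quad precision */
    mpfr_init2(y, 113);

    mpfr_set_d(x, 0.5, MPFR_RNDN);  /* x = 0.5 */
    mpfr_cosh(y, x, MPFR_RNDN);     /* y = cosh(x), correctly rounded */

    mpfr_printf("cosh(0.5) = %.30Rf\n", y);

    mpfr_clear(x);
    mpfr_clear(y);
    return 0;
}
```

Every operation here runs in software, which is exactly why the extra precision costs execution time.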
I don't know of any widely used or well-regarded libraries which implement precision-preserving algorithms on machine numbers. There isn't space here to go into any detail, but for an introduction to the problem you could do worse than start by reading about Kahan's summation algorithm.
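For concreteness, a minimal sketch of Kahan's compensated summation in plain C:

```c
#include <stdio.h>

/* Kahan compensated summation: the running compensation c captures
   the low-order bits that are lost when each addend is folded in. */
double kahan_sum(const double *a, int n)
{
    double sum = 0.0, c = 0.0;
    for (int i = 0; i < n; ++i) {
        double y = a[i] - c;   /* re-inject the previously lost part  */
        double t = sum + y;    /* big + small: low bits of y are lost */
        c = (t - sum) - y;     /* recover exactly what was lost       */
        sum = t;
    }
    return sum;
}

int main(void)
{
    double a[] = { 1.0, 1e-16, 1e-16, 1e-16 };
    double naive = 0.0;
    for (int i = 0; i < 4; ++i)
        naive += a[i];         /* each 1e-16 is rounded away */
    printf("naive: %.17g\nkahan: %.17g\n", naive, kahan_sum(a, 4));
    return 0;
}
```

The naive loop returns exactly 1, because each 1e-16 addend falls below half an ulp of the running sum; the compensated loop returns the correctly rounded result.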
The Mathworks are somewhat coy about revealing which algorithms Matlab implements. However, most of Matlab's computational kernels are written in C (or C++, I believe) and compiled into libraries; many of them are now multi-threaded too. If you are trying to write code that outperforms Matlab, you will have to write multi-threaded, high-performance numerical code.
It wouldn't surprise me at all to learn that the algorithms Matlab implements do have precision-preserving capabilities. The Mathworks are, after all, trying to offer the market a tool which will solve a wide range of problems without the user having to consider low-level issues such as whether or not machine precision is good enough for a particular combination of problem and dataset.
Finally: it doesn't surprise me that your first attempts were unsuccessful, though beating Matlab for speed is impressive. I look forward, sceptically, to being pleasantly surprised when you report success, with code of your own which outperforms Matlab in time and produces satisfactory results.
Upvotes: 1