Reputation: 22981
I would like to compute both the sine and cosine of a value together (for example, to create a rotation matrix). Of course I could compute them separately, one after another, like a = cos(x); b = sin(x);, but I wonder if there is a faster way when both values are needed.
Edit: To summarize the answers so far:
Vlad said that there is the asm instruction FSINCOS, which computes both of them (in almost the same time as a call to FSIN alone).
As Chi noticed, this optimization is sometimes already done by the compiler (when using optimization flags).
caf pointed out that the functions sincos and sincosf are probably available and can be called directly by just including math.h.
tanascius's approach of using a look-up table is controversial. (However, on my computer and in a benchmark scenario, it runs 3x faster than sincos, with almost the same accuracy for 32-bit floating-point values.)
Joel Goodwin linked to an interesting approach: an extremely fast approximation technique with quite good accuracy (for me, this is even faster than the table look-up).
Upvotes: 114
Views: 30860
Reputation: 23164
If you are willing to use a commercial product, and you are calculating a number of sin/cos values at the same time (so that you can use vectorized functions), you should check out Intel's Math Kernel Library. It has a sincos function (dead link).
According to that documentation, it averages 13.08 clocks/element on a Core 2 Duo in high-accuracy mode, which I think will be even faster than fsincos.
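For reference, a minimal sketch of what the vectorized call might look like; it assumes MKL's VML entry point vdSinCos from mkl.h (check your MKL version's documentation for the exact signature):

#include <mkl.h>

// Sketch: fill s[] and c[] with the sine and cosine of every element
// of a[] in a single vectorized call.
void sincos_batch(MKL_INT n, const double *a, double *s, double *c)
{
    vdSinCos(n, a, s, c);   // s[i] = sin(a[i]), c[i] = cos(a[i])
}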
Upvotes: 4
Reputation: 53944
When you need performance, you could use a precalculated sin/cos table (one table will do, stored as a dictionary or flat array). It depends on the accuracy you need (maybe the table would be too big), but it should be really fast. See the sketch below.
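A minimal sketch of the idea, with an illustrative table size (in C/C++ a flat array is the natural container; all names here are made up for illustration):

#include <cmath>   // M_PI assumed available (POSIX; define _USE_MATH_DEFINES on MSVC)

const int STEPS = 1440;                 // illustrative: quarter-degree resolution
double sin_table[STEPS];

void init_table() {
    for (int i = 0; i < STEPS; ++i)
        sin_table[i] = std::sin(2.0 * M_PI * i / STEPS);
}

// One table serves both functions, since cos(x) = sin(x + pi/2).
double fast_sin(double x) {
    int i = (int)(x / (2.0 * M_PI) * STEPS) % STEPS;
    if (i < 0) i += STEPS;              // handle negative angles
    return sin_table[i];                // nearest-entry lookup, no interpolation
}

double fast_cos(double x) {
    return fast_sin(x + M_PI / 2.0);
}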
Upvotes: 14
Reputation: 567
The MSVC compiler may use the (internal) SSE2 functions
___libm_sse2_sincos_ (for x86)
__libm_sse2_sincos_ (for x64)
in optimized builds if appropriate compiler flags are specified (at minimum /O2 /arch:SSE2 /fp:fast). The names of these functions seem to imply that they do not compute separate sin and cos, but both "in one step".
For example:
void sincos(double const x, double & s, double & c)
{
s = std::sin(x);
c = std::cos(x);
}
Assembly (for x86) with /fp:fast:
movsd xmm0, QWORD PTR _x$[esp-4]
call ___libm_sse2_sincos_
mov eax, DWORD PTR _s$[esp-4]
movsd QWORD PTR [eax], xmm0
mov eax, DWORD PTR _c$[esp-4]
shufpd xmm0, xmm0, 1
movsd QWORD PTR [eax], xmm0
ret 0
Assembly (for x86) without /fp:fast but with /fp:precise instead (which is the default) calls separate sin and cos:
movsd xmm0, QWORD PTR _x$[esp-4]
call __libm_sse2_sin_precise
mov eax, DWORD PTR _s$[esp-4]
movsd QWORD PTR [eax], xmm0
movsd xmm0, QWORD PTR _x$[esp-4]
call __libm_sse2_cos_precise
mov eax, DWORD PTR _c$[esp-4]
movsd QWORD PTR [eax], xmm0
ret 0
So /fp:fast is mandatory for the sincos optimization. But please note that ___libm_sse2_sincos_ is maybe not as precise as __libm_sse2_sin_precise and __libm_sse2_cos_precise, as the missing "precise" at the end of its name suggests.
On my "slightly" older system (Intel Core 2 Duo E6750) with the latest MSVC 2019 compiler and appropriate optimizations, my benchmark shows that the sincos call is about 2.4 times faster than separate sin and cos calls.
Upvotes: 0
Reputation: 5086
There is some very interesting stuff on this forum page, which focuses on finding good approximations that are fast: http://www.devmaster.net/forums/showthread.php?t=5784
Disclaimer: Not used any of this stuff myself.
Update 22 Feb 2018: the Wayback Machine is now the only way to visit the original page: https://web.archive.org/web/20130927121234/http://devmaster.net/posts/9648/fast-and-accurate-sine-cosine
Upvotes: 8
Reputation: 14579
You may want to have a look at http://gruntthepeon.free.fr/ssemath/, which offers an SSE vectorized implementation inspired by the CEPHES library. It has good accuracy (maximum deviation from sin/cos on the order of 5e-8) and speed (it slightly outperforms fsincos on a single-call basis, and is a clear winner when computing multiple values).
Upvotes: 2
Reputation: 11
An accurate yet fast approximation of the sin and cos functions simultaneously, in JavaScript, can be found here: http://danisraelmalta.github.io/Fmath/ (easily ported to C/C++)
Upvotes: 1
Reputation: 35584
Modern Intel/AMD processors have the FSINCOS instruction for calculating the sine and cosine of a value simultaneously. If you need strong optimization, perhaps you should use it.
Here is a small example: http://home.broadpark.no/~alein/fsincos.html
Here is another example (for MSVC): http://www.codeguru.com/forum/showthread.php?t=328669
Here is yet another example (with gcc): http://www.allegro.cc/forums/thread/588470
Hope one of them helps. (I haven't used this instruction myself, sorry.)
As it is supported at the processor level, I expect it to be much faster than table lookups.
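For illustration, a minimal sketch in the spirit of those examples, using GCC extended inline assembly on an x87-capable x86 target (the wrapper name is mine):

// fsincos replaces st(0) with sin(x) and pushes cos(x) on top:
// afterwards st(0) = cos(x) and st(1) = sin(x).
// "=t" binds to st(0), "=u" to st(1); "0" places x in st(0).
static void fsincos_wrapper(double x, double *s, double *c)
{
    __asm__ ("fsincos" : "=t" (*c), "=u" (*s) : "0" (x));
}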
Edit:
Wikipedia suggests that FSINCOS was added with the 387 processors, so you can hardly find a processor which doesn't support it.
Edit:
Intel's documentation states that FSINCOS is only about 5 times slower than FDIV (i.e., floating-point division).
Edit:
Please note that not all modern compilers optimize the calculation of sine and cosine into a call to FSINCOS. In particular, my VS 2008 didn't do it that way.
Edit:
The first example link is dead, but there is still a version at the Wayback Machine.
Upvotes: 54
Reputation: 862
There is a nice solution in the CEPHES library, which can be pretty fast and lets you add/remove accuracy quite flexibly for a bit more/less CPU time.
Remember that cos(x) and sin(x) are the real and imaginary parts of exp(ix). So we want to calculate exp(ix) to get both. We precalculate exp(iy) for some discrete values of y between 0 and 2pi. We shift x to the interval [0, 2pi). Then we select the y that is closest to x and write
exp(ix) = exp(iy + (ix - iy)) = exp(iy) exp(i(x - y)).
We get exp(iy) from the lookup table. And since |x-y| is small (at most half the distance between the y-values), the Taylor series will converge nicely in just a few terms, so we use that for exp(i(x-y)). And then we just need a complex multiplication to get exp(ix).
Another nice property of this is that you can vectorize it using SSE.
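A scalar sketch of the scheme described above (table size, term count, and all names are illustrative choices, not CEPHES code; M_PI is assumed available):

#include <cmath>
#include <cstddef>

const std::size_t N = 256;              // table resolution (illustrative)
double tab_cos[N], tab_sin[N];          // exp(iy) = cos(y) + i*sin(y)

void init_exp_table() {
    for (std::size_t k = 0; k < N; ++k) {
        double y = 2.0 * M_PI * k / N;
        tab_cos[k] = std::cos(y);
        tab_sin[k] = std::sin(y);
    }
}

void sincos_via_exp(double x, double &s, double &c) {
    const double two_pi = 2.0 * M_PI;
    double t = x - two_pi * std::floor(x / two_pi);   // shift x to [0, 2*pi)
    double kf = std::round(t * N / two_pi);           // nearest table angle y
    double d = t - two_pi * kf / N;                   // small residual x - y
    std::size_t k = (std::size_t)kf % N;              // k == N wraps to 0
    // two-term Taylor series for exp(i*d) = cos(d) + i*sin(d)
    double cd = 1.0 - d * d / 2.0;
    double sd = d * (1.0 - d * d / 6.0);
    // complex multiplication: exp(ix) = exp(iy) * exp(i*(x - y))
    c = tab_cos[k] * cd - tab_sin[k] * sd;
    s = tab_sin[k] * cd + tab_cos[k] * sd;
}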
Upvotes: 2
Reputation: 41
This article shows how to construct a parabolic algorithm that generates both the sine and the cosine:
DSP Trick: Simultaneous Parabolic Approximation of Sin and Cos
http://www.dspguru.com/dsp/tricks/parabolic-approximation-of-sin-and-cos
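For a flavour of the technique, here is a generic parabolic sine in the same spirit (the 4/pi and -4/pi^2 coefficients are the common textbook choice, not necessarily the article's; cosine falls out as a phase-shifted sine):

#include <cmath>

// The parabola B*x + C*x*|x| matches sin at 0, +/-pi/2 and +/-pi; valid on [-pi, pi].
static double parabolic_sin(double x) {
    const double B = 4.0 / M_PI;
    const double C = -4.0 / (M_PI * M_PI);
    return B * x + C * x * std::fabs(x);
}

void parabolic_sincos(double x, double &s, double &c) {
    x = std::remainder(x, 2.0 * M_PI);  // wrap x into [-pi, pi]
    s = parabolic_sin(x);
    double xc = x + M_PI / 2.0;         // cos(x) = sin(x + pi/2)
    if (xc > M_PI) xc -= 2.0 * M_PI;    // re-wrap the shifted angle
    c = parabolic_sin(xc);
}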
Upvotes: 3
Reputation: 1461
I have posted a solution involving inline ARM assembly capable of computing both the sine and cosine of two angles at a time here: Fast sine/cosine for ARMv7+NEON
Upvotes: 1
Reputation: 9962
Many C math libraries, as caf indicates, already have sincos(). The notable exception is MSVC.
And regarding look-up tables, Eric S. Raymond in The Art of Unix Programming (2004) (Chapter 12) explicitly says that this is a Bad Idea (at the present moment in time):
"Another example is precomputing small tables--for example, a table of sin(x) by degree for optimizing rotations in a 3D graphics engine will take 365 × 4 bytes on a modern machine. Before processors got enough faster than memory to demand caching, this was an obvious speed optimization. Nowadays it may be faster to recompute each time rather than pay for the percentage of additional cache misses caused by the table.
"But in the future, this might turn around again as caches grow larger. More generally, many optimizations are temporary and can easily turn into pessimizations as cost ratios change. The only way to know is to measure and see." (from the Art of Unix Programming)
But, judging from the discussion above, not everyone agrees.
Upvotes: 7
Reputation: 239011
If you use the GNU C library, then you can do:
#define _GNU_SOURCE
#include <math.h>
and you will get declarations of the sincos(), sincosf() and sincosl() functions, which calculate both values together - presumably in the fastest way for your target architecture.
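A minimal usage sketch (glibc assumed; recent g++ predefines _GNU_SOURCE, but being explicit does no harm):

#define _GNU_SOURCE
#include <math.h>
#include <stdio.h>

int main(void) {
    double s, c;
    sincos(0.5, &s, &c);    // one call fills both results
    printf("sin = %f, cos = %f\n", s, c);
    return 0;
}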
Upvotes: 12
Reputation: 300539
You could compute either one and then use the identity:
cos(x)^2 = 1 - sin(x)^2
(note that the square root recovers only the magnitude; the sign has to come from the quadrant of x), but as @tanascius says, a precomputed table is the way to go.
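If you do go the identity route, a sketch of the part that is easy to get wrong (the quadrant test; names are illustrative):

#include <cmath>

// sqrt gives only |cos(x)|; the sign comes from the quadrant of x.
void sincos_identity(double x, double &s, double &c) {
    s = std::sin(x);
    c = std::sqrt(1.0 - s * s);
    double t = std::fmod(x, 2.0 * M_PI);
    if (t < 0.0) t += 2.0 * M_PI;                      // reduce to [0, 2*pi)
    if (t > M_PI / 2.0 && t < 3.0 * M_PI / 2.0)        // quadrants II and III
        c = -c;
}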
Upvotes: 12
Reputation: 67838
Technically, you'd achieve this by using complex numbers and Euler's formula. Thus, something like (C++):
complex<double> res = exp(complex<double>(0, x));
// or equivalent
complex<double> res = polar<double>(1, x);
double sin_x = res.imag();
double cos_x = res.real();
should give you sine and cosine in one step. How this is done internally is a question of the compiler and library being used. It could well take longer to do it this way (because Euler's formula is mostly used to compute the complex exp using sin and cos, and not the other way round), but there might be some theoretical optimisation possible.
Edit
The headers in <complex> for GNU C++ 4.2 use explicit calculations of sin and cos inside polar, so it doesn't look too good for optimisations there unless the compiler does some magic (see the -ffast-math and -mfpmath switches mentioned in Chi's answer).
Upvotes: 14
Reputation: 23164
Modern x86 processors have a fsincos instruction which does exactly what you're asking: it calculates sin and cos at the same time. A good optimizing compiler should detect code that calculates sin and cos for the same value and use the fsincos instruction for it.
It took some twiddling of compiler flags for this to work, but:
$ gcc --version
i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5488)
Copyright (C) 2005 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ cat main.c
#include <math.h>
struct Sin_cos {double sin; double cos;};
struct Sin_cos fsincos(double val) {
struct Sin_cos r;
r.sin = sin(val);
r.cos = cos(val);
return r;
}
$ gcc -c -S -O3 -ffast-math -mfpmath=387 main.c -o main.s
$ cat main.s
.text
.align 4,0x90
.globl _fsincos
_fsincos:
pushl %ebp
movl %esp, %ebp
fldl 12(%ebp)
fsincos
movl 8(%ebp), %eax
fstpl 8(%eax)
fstpl (%eax)
leave
ret $4
.subsections_via_symbols
Tada, it uses the fsincos instruction!
Upvotes: 43
Reputation: 17314
For a creative approach, how about expanding the Taylor series? Since they have similar terms, you could do something like the following sketch (the original pseudocode made concrete, with a magnitude test on the latest term standing in for "not enough precision"):
#include <cmath>

void taylor_sincos(double x, double &sine, double &cosine) {
    double numerator = x;
    double denominator = 1.0;
    sine = x;           // first sine term:   x / 1!
    cosine = 1.0;       // first cosine term: 1
    double op = -1.0;   // sign alternates every pair of terms
    double fact = 1.0;
    while (std::fabs(numerator / denominator) > 1e-15) {
        fact += 1.0;
        denominator *= fact;
        numerator *= x;
        cosine += op * numerator / denominator;   // +/- x^(2k) / (2k)!
        fact += 1.0;
        denominator *= fact;
        numerator *= x;
        sine += op * numerator / denominator;     // +/- x^(2k+1) / (2k+1)!
        op = -op;
    }
}
This means you do something like this: starting at x for sine and 1 for cosine, follow the pattern: subtract x^2 / 2! from cosine, subtract x^3 / 3! from sine, add x^4 / 4! to cosine, add x^5 / 5! to sine, and so on.
I have no idea whether this would be performant. If you need less precision than the built-in sin() and cos() give you, it may be an option.
Upvotes: 2
Reputation: 78316
I don't believe that lookup tables are necessarily a good idea for this problem. Unless your accuracy requirements are very low, the table needs to be very large. And modern CPUs can do a lot of computation while a value is fetched from main memory. This is not one of those questions that can be properly answered by argument (not even mine): test, measure, and consider the data.
But I'd look to the fast implementations of SinCos that you find in libraries such as AMD's ACML and Intel's MKL.
Upvotes: 5
Reputation: 17132
Have you thought of declaring lookup tables for the two functions? You'd still have to "calculate" sin(x) and cos(x), but it'd be decidedly faster if you don't need a high degree of accuracy.
Upvotes: 0
Reputation: 8018
When performance is critical for this kind of thing, it is not unusual to introduce a lookup table.
Upvotes: 2