Reputation: 83
As I understand it, most modern compilers automatically use SIMD instructions for loops where appropriate, if I set the corresponding compiler flag. Since the compiler can only vectorize when it can prove that doing so will not change the semantics of the program, it will skip vectorization in cases where I actually know it would be safe, but the compiler for various reasons thinks it is not.
Are there explicit vectorization instructions that I can use in plain C++, without libraries, that let me process vectorized data myself instead of relying on the compiler? I imagine it would look something like this:
double* dest;
const double *src1, *src2;
// ...
for (std::size_t i = 0; i < n; i += vectorization_size / sizeof(double))
{
    vectorized_add(&dest[i], &src1[i], &src2[i]);
}
Upvotes: 8
Views: 2379
Reputation: 70506
TL;DR No guarantees, but KISS and you are likely to get highly optimized code. Measure and inspect the generated code before tinkering with it.
You can play with this on online compilers, e.g. gcc.godbolt will vectorize the following straightforward call to std::transform for gcc 5.2 with -O3:
#include <algorithm>

const int sz = 1024;

void f(double* src1, double* src2, double* dest)
{
    std::transform(src1 + 0, src1 + sz, src2, dest,
                   [](double lhs, double rhs) {
                       return lhs + rhs;
                   });
}
There was a similar Q&A earlier this week. The general theme seems to be that on modern processors and compilers, the more straightforward your code (plain algorithm calls), the more likely you'll get highly optimized (vectorized, unrolled) code.
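As a further sketch of "plain algorithm calls": the lambda above can be swapped for the standard std::plus functor from <functional>. The function name g here is just illustrative, and whether the generated code differs at all from the lambda version is exactly the kind of thing worth inspecting on godbolt before tinkering.
#include <algorithm>
#include <functional>

const int sz = 1024;

// Same element-wise addition as f above, but with std::plus<double>
// supplying the binary operation instead of a hand-written lambda.
void g(double* src1, double* src2, double* dest)
{
    std::transform(src1, src1 + sz, src2, dest, std::plus<double>());
}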
Upvotes: 2
Reputation: 41454
Plain C++? No. std::valarray can lead your compiler to the SIMD water, but it can't make it drink.
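For what it's worth, a minimal sketch of the std::valarray route (the function name is illustrative, and whether any SIMD actually comes out is entirely up to your compiler and flags):
#include <valarray>

// Element-wise addition: operator+ is overloaded for std::valarray,
// so the whole-array operation is expressed without an explicit loop.
std::valarray<double> add(const std::valarray<double>& src1,
                          const std::valarray<double>& src2)
{
    return src1 + src2;  // assumes src1.size() == src2.size()
}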
OpenMP is the least "library" library out there: it's more of a language extension than a library, and all major C++ compilers support it. While primarily and historically used for multicore parallelism, OpenMP 4.0 introduced SIMD-specific constructs which can at least urge your compiler to vectorize certain clearly-vectorizable procedures, even ones with apparently scalar subroutines. It can also help you identify aspects of your code which are preventing the compiler from vectorizing. (And besides... don't you want multicore parallelism too?)
double* dest;
const double *src1, *src2;
#pragma omp simd
for (int i = 0; i < n; i++)
{
    dest[i] = src1[i] + src2[i];
}
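And since the same pragma family covers threads as well, here is a sketch of combining multicore parallelism with SIMD, assuming OpenMP 4.0 or later, non-overlapping arrays, and OpenMP enabled at compile time (e.g. -fopenmp on gcc/clang); add_arrays is just an illustrative wrapper:
// "parallel for simd" distributes iterations across threads and asks the
// compiler to vectorize each thread's chunk of the loop.
void add_arrays(double* dest, const double* src1, const double* src2, int n)
{
    #pragma omp parallel for simd
    for (int i = 0; i < n; i++)
    {
        dest[i] = src1[i] + src2[i];
    }
}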
To go the last mile with reduced-precision operations, multilane aggregation, branch-free masking, etc. really requires an explicit connection to the underlying instruction set, and isn't possible with anything close to "plain C++". OpenMP can get you pretty far, though.
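For contrast, here is roughly what that explicit connection looks like with x86 SSE2 intrinsics; this is only a sketch (the function name is illustrative, and it is neither portable nor "plain C++"):
#include <emmintrin.h>  // SSE2 intrinsics: __m128d, _mm_loadu_pd, _mm_add_pd, _mm_storeu_pd
#include <cstddef>

void add_sse2(double* dest, const double* src1, const double* src2, std::size_t n)
{
    std::size_t i = 0;
    // Two doubles per iteration in a 128-bit register, unaligned loads/stores.
    for (; i + 2 <= n; i += 2)
    {
        __m128d a = _mm_loadu_pd(&src1[i]);
        __m128d b = _mm_loadu_pd(&src2[i]);
        _mm_storeu_pd(&dest[i], _mm_add_pd(a, b));
    }
    // Scalar tail for an odd element count.
    for (; i < n; ++i)
        dest[i] = src1[i] + src2[i];
}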
Upvotes: 4