Reputation: 78
I am attempting to use C++17 parallel algorithms with containers holding non-fundamental types, as in the minimal example below, compiled with GCC 9.2.1 and Intel TBB on Ubuntu 19.10. The sequential policy compiles fine, but compilation fails with par, since the lambda is expected to accept double as its second argument. The issue persists with icc 19.0.1.
My question is: is this code valid, or does the failure simply reflect the early development stage of the parallel implementation?
#include <numeric>
#include <algorithm>
#include <execution>
#include <vector>

struct Data {
    double radius;
};

int main() {
    double sum;
    std::vector<double> v1;
    std::vector<Data> v2;

    // ok
    sum = std::reduce(std::execution::par, v1.begin(), v1.end(), 0.0,
                      [](double sum, auto i) { return sum + i; });
    // ok
    sum = std::reduce(std::execution::seq, v2.begin(), v2.end(), 0.0,
                      [](double sum, const Data &i) { return sum + i.radius; });
    // compile error
    sum = std::reduce(std::execution::par, v2.begin(), v2.end(), 0.0,
                      [](double sum, const Data &i) { return sum + i.radius; });
}
Upvotes: 1
Views: 139
Reputation: 20969
The BinaryOp for std::reduce must be commutative and associative, so both of the following operations must be supported:

double + Data // your lambda supports only this
Data + double // this can be performed only by adding some conversions

(More precisely, the C++17 standard requires that binary_op(init, *first), binary_op(*first, init), binary_op(init, init) and binary_op(*first, *first) are all valid and convertible to the result type; the sequential seq overload happens to compile because it only ever invokes binary_op(accumulator, element).)

If you want to allow the conversion double -> Data, add a suitable constructor. For the conversion Data -> double, add a conversion operator:
struct Data {
    double radius;
    // double -> Data
    Data(double d) : radius(d) {}
    // Data -> double
    operator double() const {
        return radius;
    }
};
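With both conversions in place, all four argument combinations become well-formed and the parallel overload compiles. A minimal self-contained sketch (sum_radii is a hypothetical helper name, and the radius values are made up for illustration):

```cpp
#include <execution>
#include <numeric>
#include <vector>

struct Data {
    double radius;
    // double -> Data, so binary_op can receive a running double where a Data is expected
    Data(double d) : radius(d) {}
    // Data -> double, so a Data element can bind to the lambda's double parameter
    operator double() const { return radius; }
};

// Sums all radii; the original lambda now works with std::execution::par
double sum_radii(const std::vector<Data>& v) {
    return std::reduce(std::execution::par, v.begin(), v.end(), 0.0,
                       [](double sum, const Data& i) { return sum + i.radius; });
}
```

Note that on GCC this still needs Intel TBB available at link time (-ltbb), since libstdc++'s parallel algorithms are backed by it.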
Upvotes: 2