Reputation: 1281
I'm working on a problem, and here's a piece of code that's working significantly worse than the theoretical expectation:
inline bool check(const int index, const float* f) const {
    // Compare the stored point at `index` against the query point f,
    // one dimension at a time; bail out on the first mismatch.
    for (int j = 0; j < d; j++)
        if (points[index][j] != f[j])
            return false;
    return true;
}

bool func(int p_cap, const float* f) const {
    // Clamp the window [p_cap - error, p_cap + error] to [0, keyCount).
    int i = (p_cap - error > 0 ? p_cap - error : 0);
    int l = (p_cap + error + 1 < keyCount ? p_cap + error + 1 : keyCount);
    for (; i < l; i++)
        if (check(i, f))
            return true;
    return false;
}
These functions are inside a class, with both points and keyCount being members of the class. points is a keyCount*d 2-dimensional array, and d is a global variable. Any help on how I can optimise this code block? Thanks...
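For reference, the surrounding layout is roughly this (a minimal sketch; the class name, the error member, and the exact type of points are my assumptions, the rest is from the description above):

int d;  // number of dimensions (global, set elsewhere)

class PointSet {                 // hypothetical name
public:
    float** points;              // keyCount x d array of stored points
    int keyCount;                // number of points in this array
    int error;                   // search window half-width (error = 1 here)

    inline bool check(const int index, const float* f) const;  // as above
    bool func(int p_cap, const float* f) const;                // as above
};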
EDIT:
I'm using interpolation for searching in the array, and my data is distributed such that this is possible. The expectation is that searching for any point this way will be faster than a linear search. The reason I have this expectation is that error = 1, which means that for any query I look at a maximum of 3 different points in the points array. I have a million such arrays, each storing exactly 10 points. The expected cost of linear search is N/2 probes, which means I'd need to look through 5 points on average before I hit the query point. Thus, this "interpolation code" should run faster than the linear search, but that's not happening.
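For comparison, the linear-search baseline I have in mind looks like this (a sketch; funcLinear is a name I'm using here for illustration, reusing check() from above):

// Hypothetical linear-search counterpart: scan all keyCount points.
bool funcLinear(const float* f) const {
    for (int i = 0; i < keyCount; i++)
        if (check(i, f))   // same per-point comparison as func()
            return true;
    return false;
}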
Compilation command used:
g++ -O3 -march=native -DNDEBUG test.cpp -o test
Upvotes: 1
Views: 91
Reputation: 36
I'm not sure whether the compiler already does this, but there are two things you can try. You would need to check the compiled code to see whether it already applies these optimizations.
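For illustration, here is a minimal sketch of two micro-optimisations that fit this pattern (these are my assumptions, using the names points and d from the question): hoisting the row pointer out of the loop, and comparing the whole row as raw bytes with std::memcmp.

#include <cstring>  // std::memcmp

inline bool check(const int index, const float* f) const {
    // 1) Hoist the row pointer so points[index] is computed once,
    //    not re-derived on every loop iteration.
    const float* row = points[index];
    // 2) Compare all d floats in a single library call.
    //    Caveat: bytewise equality is not the same as operator== for
    //    floats (a NaN compares bytewise-equal to itself, and
    //    -0.0f is bytewise-different from +0.0f).
    return std::memcmp(row, f, d * sizeof(float)) == 0;
}

Whether this is actually faster depends on what the optimiser already generates, which is why checking the compiled output is worthwhile.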
Upvotes: 2