Reputation: 2533
I have an algorithmic test that requires implementing binary search within a time limit of 2 seconds.
First, I implemented a recursive version of binary search, but it took almost 3.6 seconds on some test cases. Then I changed it to an iterative version, but it still takes 2.6 seconds on the same test case. I suspect the while loop is the reason it takes so long.
My question is: what do I need to improve so that it runs in at most 2 seconds?
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int iterBinarySearch(vector<int> A, int low, int high, int key) {
    int mid;
    while (low <= high) {
        mid = low + ((high - low)/2);
        if (key < A[mid]) {
            high = mid - 1;
        } else if (key > A[mid]) {
            low = mid + 1;
        } else {
            return mid;
        }
    }
    return -1;
}

int main() {
    vector<int> dict;
    vector<int> keys;
    int dictSize;
    cin >> dictSize;
    while (dictSize--) {
        int val;
        cin >> val;
        dict.push_back(val);
    }
    int keysSize;
    cin >> keysSize;
    while (keysSize--) {
        int val;
        cin >> val;
        keys.push_back(val);
    }
    sort(dict.begin(), dict.end());
    int size = (int)dict.size() - 1;
    for (int i = 0; i < keys.size(); ++i) {
        if ((dict[0] > keys[i]) || (dict[size] < keys[i])) {
            cout << "-1" << ' ';
        } else {
            int res = iterBinarySearch(dict, 0, size, keys[i]);
            cout << res << ' ';
        }
    }
    return 0;
}
Upvotes: 0
Views: 475
Reputation: 25536
Passing the vector as const reference, as mentioned already, is a major point; using reserve is another one. Not allocating the keys vector at all can give you some further performance, too:
sort(dict.begin(), dict.end());

int keysSize;
cin >> keysSize;

// this is a constant loop bound, so move it out of the loop, too...
int size = (int)dict.size() - 1;

while (keysSize--)
{
    int val;
    cin >> val;
    if (val < dict[0] || val > dict[size])
    {
        cout << "-1" << ' ';
    }
    else
    {
        // note: val, not keys[i] -- the keys vector doesn't exist any more
        int res = iterBinarySearch(dict, 0, size, val);
        cout << res << ' ';
    }
}
return 0;
You can save one additional function call:
cout << "-1 ";
Sure, that won't gain you much, but it's so simple that I mention it anyway...
Just a side note: when dealing with values that cannot become negative by their nature (sizes, indices into arrays, etc.), I would prefer the unsigned counterparts of the signed data types (unsigned int
in your case). This will have no impact on performance at all, since on modern two's-complement architectures exactly the same operations are used (apart from some comparisons, perhaps); it just shows the intent and (part of) the valid range of the variable right away from the data type. One exception worth mentioning: imagine you need int64_t if signed, but could do with uint32_t, and you are on a 32-bit architecture, e.g. a microcontroller – then you really get some minimal performance gain...
Upvotes: 1
Reputation: 67822
Only two things are obviously wasteful:
int iterBinarySearch(vector<int> A, int low, int high, int key)
copies the vector (which could have 100,000 elements from your comment), whereas
int iterBinarySearch(const vector<int> &A, int low, int high, int key)
(or any of the other const-ref spellings) will search your original vector directly, with no copying
your initial push_back to the dict and key vectors is wasteful when you know the size in advance: because you didn't tell the vector how large it's going to be, it has to keep resizing and copying. Just add
cin >> dictSize;
dict.reserve(dictSize); // grow to the correct size just once
while (dictSize--) {
    int val;
    cin >> val;
    dict.push_back(val);
}
and the same for the keys.
Now, apart from these two things that jump out, you should ideally try to profile your code rather than just guessing where the slowness is.
Upvotes: 3
Reputation: 2908
1. The main problem is that you pass the dict argument by value.
Just pass it by const reference.
int iterBinarySearch(const vector<int> &A, int low, int high, int key) {
// your code
}
2. Also try to change this line
mid = low + ((high - low)/2);
to
mid = (low + high)/2;
NOTE: Make the second change only if your vector size is no bigger than INT_MAX / 2 — otherwise low + high can overflow.
Upvotes: 2