Reputation: 15
It seems to be a searching algorithm based on Mergesort. It is meant to be used on a sorted array of numbers. Is the Big-O complexity still O(n log n)?
public static boolean fastSearch(int[] data, int min, int max, int target)
{
    int N = data.length;
    boolean found = false;
    int midpoint = (min + max) / 2; // determine the midpoint
    if (data[midpoint] == target) {
        found = true;
    } else {
        if (data[midpoint] > target) {
            if (min <= midpoint - 1) {
                // Recursion on the left half of the array
                found = fastSearch(data, min, midpoint - 1, target);
            }
        } else {
            if (midpoint + 1 <= max) {
                // Recursion on the right half of the array
                found = fastSearch(data, midpoint + 1, max, target);
            }
        }
    }
    return found;
}
This is my own analysis; I just want to confirm whether I'm correct or not:
Each pass through the data doubles the size of the subsections. These need to be doubled repeatedly until the target is found. That takes log(n) doublings, and each pass over the data is proportional to the number of items. So, it is O(n log n).
Upvotes: 0
Views: 215
Reputation: 27535
This looks like a normal binary search algorithm to me, which is known to have O(log n) complexity (except that it returns whether the value was found, not its position). You basically "visit" at most log n entries of the array:

n/2/2/2/2/.../2 = 1

The number of /2 divisions would be log n. It's not a strict analysis; there are likely to be some off-by-one errors as well as no boundary-condition checking, but the final result would surely be O(log n).
Of course, we have to assume that the data array is sorted and that n = max - min (and min < max). Otherwise the result would be garbage.
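You can see the log n visit count empirically. Below is a minimal instrumented sketch of the same search (with a hypothetical `visits` counter and an added min > max guard, which the original code lacks); searching a sorted array of 1024 elements for an absent value touches only about log2(1024) = 10 entries:

```java
public class SearchCount {
    static int visits = 0; // counts how many array entries are examined

    static boolean fastSearch(int[] data, int min, int max, int target) {
        if (min > max) return false; // empty range: not found
        visits++;
        int midpoint = (min + max) / 2;
        if (data[midpoint] == target) return true;
        if (data[midpoint] > target)
            return fastSearch(data, min, midpoint - 1, target); // left half
        return fastSearch(data, midpoint + 1, max, target);     // right half
    }

    public static void main(String[] args) {
        int n = 1024;
        int[] data = new int[n];
        for (int i = 0; i < n; i++) data[i] = i; // sorted input

        // Worst case: target is absent, so the range shrinks all the way down.
        boolean found = fastSearch(data, 0, n - 1, -1);
        System.out.println(found + " visits=" + visits); // roughly log2(n) visits
    }
}
```

Each recursive call examines one midpoint and then discards half of the remaining range, which is exactly why the visit count grows logarithmically rather than linearly.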
BTW: f(n) = O(log n) means that log n (from some point on) is an upper bound of f(n), possibly up to a positive constant factor. f(n) = O(n log n) means the same for n log n. So yes, if it is bounded by log n, it is surely also bounded by n log n (since f(n) <= c1 * log n <= c2 * n * log n for all n greater than some N), but that's not the answer you are looking for.
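That containment can be checked numerically as well: for every n >= 2 we have log2(n) <= n * log2(n), so anything bounded above by a multiple of log n is also bounded by the same multiple of n log n. A trivial sketch:

```java
public class BoundCheck {
    public static void main(String[] args) {
        // For n >= 2, log2(n) >= 1, so multiplying it by n can only make it larger.
        for (int n = 2; n <= (1 << 20); n *= 2) {
            double logN = Math.log(n) / Math.log(2);
            if (logN > n * logN) {
                throw new AssertionError("bound violated at n=" + n);
            }
        }
        System.out.println("log n <= n log n held for all tested n");
    }
}
```

This is why "O(n log n)" is technically a true statement about binary search, just a much looser one than the tight O(log n) bound.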
Upvotes: 5