Reputation: 7844
I have the pseudocode below, which takes an unsorted array of length size and finds its range by finding the max and min values in the array. I'm just learning about the various ways of measuring time efficiency, but I think this code is Θ(n), since each additional element adds a fixed number of actions (2 more comparisons).
For example, ignoring the actual assignments to max and min (as the unsorted array is arbitrary and these assignments are unknown in advance), an array of length 2 would only require 5 actions in total (including the final range calculation). An array of length 4 uses only 9 actions in total, again including the final range calculation. An array of length 12 uses 25 actions.
This all points me to Θ(n), as it is a linear relationship. Is this correct?
Pseudocode:
// Traverse each element of the array, storing the max and min values
// Assuming int size exists that is size of array a[]
// Assuming array is a[]
min = a[0];
max = a[0];
for(i = 0; i < size; i++) {
    if(min > a[i]) { // If current min is greater than val,
        min = a[i];  // replace min with val
    }
    if(max < a[i]) { // If current max is smaller than val,
        max = a[i];  // replace max with val
    }
}
range = max - min; // range is largest value minus smallest
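For reference, here is a minimal runnable C version of the pseudocode with an illustrative counter (the names range_with_count and ops are mine, not part of the original) that tallies the two comparisons per element plus the final subtraction; for sizes 2, 4 and 12 it reproduces the 5, 9 and 25 counts described above.

#include <stdio.h>

/* Finds the range of an unsorted array and counts the "actions" the
 * question counts: two comparisons per element plus the final
 * subtraction. The counter is only for illustration. */
int range_with_count(const int a[], int size, int *ops) {
    int min = a[0];
    int max = a[0];
    *ops = 0;

    for (int i = 0; i < size; i++) {
        (*ops)++;              /* comparison against min */
        if (min > a[i]) {
            min = a[i];
        }
        (*ops)++;              /* comparison against max */
        if (max < a[i]) {
            max = a[i];
        }
    }

    (*ops)++;                  /* final range calculation */
    return max - min;
}

int main(void) {
    int a[12] = {5, 3, 9, 1, 7, 2, 8, 6, 4, 0, 11, 10};
    int ops;

    for (int size = 2; size <= 12; size += 2) {
        int r = range_with_count(a, size, &ops);
        printf("size=%2d  range=%2d  ops=%d\n", size, r, ops);
    }
    return 0;
}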
Upvotes: 2
Views: 128
Reputation: 5674
Yes, this would be Θ(n). Your reasoning is a little skewed though.
You have to look at every item in your loop so you're bounded above by a linear function. Conversely, you are also bounded below by a linear function (the same one in fact), because you can't avoid looking at every element.
O(n) only requires that you bound above; Ω(n) requires that you bound below. Θ(n) says you're bounded on both sides.
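For completeness, these are the standard definitions behind those statements (not specific to this code):

f(n) = O(g(n))  iff  there are constants c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0
f(n) = Ω(g(n))  iff  there are constants c > 0 and n0 such that f(n) >= c*g(n) for all n >= n0
f(n) = Θ(g(n))  iff  f(n) = O(g(n)) and f(n) = Ω(g(n))

Here f(n) is the number of operations the loop performs, which grows linearly in size, and g(n) = n, so both bounds hold and the algorithm is Θ(n).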
Upvotes: 3
Reputation: 1926
Yeah, this surely is an O(n) algorithm. I don't think you really need to drill down and count the exact number of comparisons to reach a conclusion about the complexity of the algorithm. Just look at how the number of comparisons changes as the size of the input increases: for O(n) the comparisons should increase linearly with the input, for O(n^2) they increase by a multiple of n (quadratically), and so on.
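If you want a quick numerical check of that rule of thumb, here is a toy sketch of my own (not from the answer above) that simulates the comparison count of a single loop, 2 per element like the range scan, versus a doubly nested loop, as n doubles: the first column doubles each time, the second quadruples.

#include <stdio.h>

/* Comparisons made by the linear min/max scan: 2 per element. */
long count_linear(int n) {
    long comps = 0;
    for (int i = 0; i < n; i++)
        comps += 2;                    /* one min check + one max check */
    return comps;
}

/* Comparisons made by a deliberately naive quadratic scan that
 * looks at every pair of indices. */
long count_quadratic(int n) {
    long comps = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            comps++;
    return comps;
}

int main(void) {
    for (int n = 1000; n <= 4000; n *= 2) {
        printf("n=%5d  linear=%8ld  quadratic=%10ld\n",
               n, count_linear(n), count_quadratic(n));
    }
    return 0;
}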
Upvotes: 1
Reputation: 10026
Let size be n; then it's clear to see that you always have 2n comparisons, plus the single assignment at the end, for a baseline of 2n + 1 operations in this algorithm.
In the worst-case scenario, you have at most 2n additional assignments, thus at most 2n + 1 + 2n = 4n + 1 operations, which is O(n).
In the best-case scenario, you have 0 additional assignments, thus 2n + 1 + 0 = 2n + 1 operations, which is Ω(n).
Therefore, both the best and worst cases perform in linear time. Hence, Θ(n).
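To make the best/worst-case split concrete, here is a small sketch of my own (the inputs are made up for illustration) that counts only the min/max assignments: an all-equal array triggers none, while a strictly decreasing array triggers one on almost every iteration. Either way the total stays well within the 2n bound used above, so the overall work remains linear.

#include <stdio.h>

/* Counts how many times min or max is reassigned while scanning a[]. */
int count_assignments(const int a[], int size) {
    int min = a[0], max = a[0];
    int assigns = 0;

    for (int i = 0; i < size; i++) {
        if (min > a[i]) { min = a[i]; assigns++; }
        if (max < a[i]) { max = a[i]; assigns++; }
    }
    return assigns;
}

int main(void) {
    enum { N = 8 };
    int all_equal[N]  = {4, 4, 4, 4, 4, 4, 4, 4};   /* best case: no updates   */
    int decreasing[N] = {8, 7, 6, 5, 4, 3, 2, 1};   /* min updated every step  */

    printf("all equal:  %d assignments\n", count_assignments(all_equal, N));
    printf("decreasing: %d assignments\n", count_assignments(decreasing, N));
    return 0;
}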
Upvotes: 2
Reputation: 688
You're right. It's O(n).
An easy way to tell in simple code (like the one above) is to see how many for() loops are nested, if any. Every "normal" loop (from i = 0 to n) contributes a factor of n; nested loops multiply those factors, while loops written one after another only add. [Edit2]: That is, if you have code like this:
array a[n];                     //Array with n elements.
for(int i = 0; i < n; ++i){     //Happens n times.
    for(int j = 0; j < n; ++j){ //Happens n*n times.
        //something             //Happens n*n times.
    }
}
//Overall complexity is O(n^2)
Whereas
array a[n];                 //Array with n elements.
for(int i = 0; i < n; ++i){ //Happens n times.
    //something             //Happens n times.
}
for(int j = 0; j < n; ++j){ //Happens n times.
    //something             //Happens n times.
}
//Overall complexity is O(2n) = O(n)
This is pretty rudimentary, but useful if someone has not taken an algorithms course.
The work inside your for() loop doesn't affect the complexity here, as long as each iteration takes constant time (which the comparisons and assignments above do).
[Edit]: This assumes that size actually means the size of array a.
Upvotes: 3