Ariel

Reputation: 99

Why isn't every algorithm O(1)?

Suppose we have a random array of n integers and we need to calculate its sum.

Claim: The best algorithm does that in O(n)

But I claim we can do this in O(1). Why?

We know for sure that n is bounded (since it's an int and int is finite), which means I can sum all elements in fewer than 2,147,483,647 steps?
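
A minimal sketch of the summation in question (Java assumed here, since 2,147,483,647 is Integer.MAX_VALUE):

    // Sums an array of n ints: the loop body runs exactly n times,
    // so the number of steps grows with the size of the input.
    static long sum(int[] values) {
        long total = 0;            // long, so the running total does not overflow
        for (int v : values) {
            total += v;
        }
        return total;
    }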

Upvotes: 8

Views: 678

Answers (5)

dreamcrash

Reputation: 51483

Why isn't every algorithm O(1)?

TL;DR: Because Big O notation is used to characterize an algorithm with regard to how it behaves as its input grows.

Informally, you can think of it as a framework that humans invented to classify algorithms. If such a framework yielded O(1) for every algorithm, it would defeat its own purpose in the first place, i.e., distinguishing classes of algorithms.

More detailed answer

Let us start by clarifying what Big O notation is in the current context. From (source) one can read:

Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. (..) In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.

The following statement is not accurate:

But I claim we can do this in O(1). Why? We know for sure that n is bounded (since it's an int and int is finite), which means I can sum all elements in fewer than 2,147,483,647 steps?

One cannot simply say "O(2,147,483,647)" and claim "O(1)", since Big O notation does not represent a single function but rather a set of functions with a certain asymptotic upper bound; as one can read from the source:

Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.

Informally, in the time-complexity and space-complexity theories of computer science, one can think of Big O notation as a categorization of algorithms by a certain worst-case scenario concerning time or space, respectively. For instance, O(n):

An algorithm is said to take linear time/space, or O(n) time/space, if its time/space complexity is O(n). Informally, this means that the running time/space increases at most linearly with the size of the input (source).

So the complexity is O(n), because as the input grows the cost grows linearly and not constantly.
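
As a small illustration of that statement (Java assumed, the helper name is hypothetical), counting the basic operations makes the linear growth explicit: an array twice as large requires twice as many additions.

    // Counts the additions performed by the summation loop.
    // The count always equals values.length, so doubling the input
    // doubles the work; that proportionality is what O(n) captures.
    static long countAdditions(int[] values) {
        long additions = 0;
        long total = 0;
        for (int v : values) {
            total += v;
            additions++;
        }
        return additions;
    }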

Upvotes: 8

lopho

Reputation: 177

Integers are countably infinite by definition. As such, you cannot prove termination on the basis of n. If you redefine integers as a bounded interval of countable numbers, you can claim O(1) if and only if n is such a bounded integer.

IMO: the useful part of O-notation is the information about time complexity in relation to the input. In a scenario where my input is bounded, I just focus on the behavior within those bounds, and that is O(n) in this case. You can claim O(1), but doing so strips the statement of information.

Upvotes: 1

eerorika

Reputation: 238401

Why isn't every algorithm O(1)?

Because we choose to ignore the limitations of concrete computers (i.e. we assume resources to be unlimited) when we analyse the complexity of algorithms.

Asymptotic complexity gives us useful information about how the complexity grows. Merely concluding that there is a constant limit because of the hardware and ignoring how the limit is reached doesn't give us valuable information.


Besides, the limit that you perceive is much, much higher in reality. Computers aren't limited to representing at most 2'147'483'647 integer values. Using complex data structures, computers can represent arbitrarily large numbers - until memory runs out... but memory can be streamed from the disk. And there are data centers that easily provide hundreds of terabytes of storage.

Although, to be fair: if we allow arbitrary-length integers, then the complexity of the sum is worse than linear, because even a single addition has linear complexity.
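
A rough sketch of that point using java.math.BigInteger (chosen here purely for illustration): an addition has to touch every digit of its operands, so even a single add is linear in the length of the numbers involved.

    import java.math.BigInteger;

    // Sums n arbitrary-length integers. Each add() is itself linear in the
    // number of digits of its operands, so the whole sum is worse than
    // linear in the total size of the input.
    static BigInteger bigSum(BigInteger[] values) {
        BigInteger total = BigInteger.ZERO;
        for (BigInteger v : values) {
            total = total.add(v);
        }
        return total;
    }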


Once we take concrete hardware into consideration, it makes more sense to choose to use concrete units in the analysis: How many seconds does the program take for some concrete input on this particular hardware? And the way to find the answer is not math, but concrete measurement.
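
A measurement sketch along those lines might look as follows (Java assumed; the input and the machine are whatever you actually care about):

    // Measures the wall-clock time of the summation for one concrete input
    // on this particular machine. No asymptotics involved: the answer is
    // a number of nanoseconds, valid only for this input and this hardware.
    static void measure(int[] values) {
        long start = System.nanoTime();
        long total = 0;
        for (int v : values) {
            total += v;
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("n=" + values.length + " sum=" + total + " took " + elapsed + " ns");
    }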

Upvotes: 15

user1196549

Reputation:

A theory where all algorithms have complexity O(1) would be of little use; you have to acknowledge that.

In complexity theory, N is an unbounded variable, so that you obtain a non-trivial asymptotic bound.

For practical algorithms, the asymptotic complexity is useful in that it does model the exact complexity for moderate values of N (well below the largest int).


In some pathological cases (such as the most sophisticated matrix multiplication algorithms), N must exceed the capacity of an int before the asymptotic behavior becomes beneficial.


Side remark:

I recall a paper claiming O(1) time for a certain operation. That operation involved a histogram of pixel values, the range of which is indeed constant. But this was a misleading trick, because the hidden constant was proportional to 256 for an 8-bit image and to 65536 for a 16-bit one, making the algorithm dead slow. Claiming O(H), where H is the number of bins, would have been more informative and more honest.
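
A hedged sketch of that histogram case (Java assumed, the image layout is simplified): the "constant" amount of work is proportional to the number of bins, which is exactly what stating O(H) would have exposed.

    // Histogram of 8-bit pixel values (0..255). The bin count is constant,
    // but allocating and zeroing the bins already costs O(bins) work:
    // 256 steps for an 8-bit image, 65536 for a 16-bit one. Hiding that
    // behind O(1) is what makes the claim misleading.
    static int[] histogram(int[] pixels) {
        final int bins = 256;          // 65536 for a 16-bit image
        int[] counts = new int[bins];
        for (int p : pixels) {
            counts[p]++;
        }
        return counts;
    }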

Upvotes: 5

463035818_is_not_an_ai

Reputation: 122820

You choose what N is and which runtime dependency you are interested in. You can say:

"Well computers have limited ressources, so I simply consider those limits as constant factors and now all my algorithms are O(1)"

However, this view is of limited use, because we want to use complexity to classify algorithms to see which performs better and which worse, and a classification that puts all algorithms in the same bucket does not help with that.

Upvotes: 2
