user3175173

Reputation: 103

Input unit in time complexity for algorithm

In most materials available online, given an input size "n" you can work out the big-O of an algorithm. However, some lectures online look at the number of "bits" of the input instead. What does it mean to use "bits" as the input when finding Big-O, Omega, and Theta for algorithms?

Upvotes: 2

Views: 1200

Answers (5)

Łukasz Kidziński

Reputation: 1623

You can treat Big-O notation simply as a language for expressing complexity. Normally you first decide which parameters influence the algorithm, then you estimate that influence, and finally you express it in the common language of Big-O.

Bits as the input: In theory, if the integers in your input vary significantly in size, you want to show how that size influences your algorithm.

Number of instances as the input: On the other hand, ignoring the number of bits is actually well justified. If we are dealing with integers, for instance, then most operations take at most a few processor cycles, regardless of how big the integer is, as long as it fits in 32 or 64 bits. It is less precise and more abstract but, in most cases, this is what you actually want.

It all depends on where the complexity of an algorithm comes from and what you want to emphasize.

An example where it really matters:

Imagine a balanced BST. It is well known that search takes O(log n). If the keys are very long integers (k bits each) then, comparing them bit by bit, you get an O(k log n) algorithm. However, using a neat data structure you can try to achieve O(log kn) [by cutting the integers into pieces and using something like a trie].
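The cost of a single long-integer comparison can be sketched like this (the function names and the binary-search framing are illustrative, not part of the answer):

```python
def compare_kbit(a: int, b: int, k: int) -> int:
    """Compare two k-bit integers bit by bit, most significant bit first.
    Returns -1, 0, or 1; the worst case inspects all k bits -> O(k)."""
    for i in range(k - 1, -1, -1):
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        if abit != bbit:
            return -1 if abit < bbit else 1
    return 0

def search_sorted_kbit(sorted_keys, target, k):
    """Binary search over n sorted k-bit keys using the O(k) comparison,
    so the total worst-case cost is O(k log n)."""
    lo, hi = 0, len(sorted_keys) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        c = compare_kbit(sorted_keys[mid], target, k)
        if c == 0:
            return mid
        if c < 0:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

With fixed-width machine integers the O(k) factor collapses into a constant, which is exactly why it is usually ignored.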

Upvotes: 4

invalid_id

Reputation: 804

Sometimes the input size (on disk) is more limiting than the operations performed on the data. In that case you have to use the size in bits rather than, say, the length of an array.

Upvotes: 0

Paul Hankin

Reputation: 58339

There's a huge amount of confusion about big-O. Big-O is used to describe mathematical functions. That's all. The definition of the statement "f(n) = O(g(n))" is that there is a constant k such that for all sufficiently large n, f(n) <= k*g(n).

When you have an algorithm, you can write down functions that describe some properties about it. For example:

  • let f(n) be the worst-case number of comparisons used in a sort algorithm, if you have an input list of length n
  • let f(n) be the number of "basic operations" (which you have to define) to compute the product of two numbers, each represented as an array of bits.

Once you've got these functions, you can say whether they're O(n) or O(n^2) or whatever.
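As a concrete illustration of the first bullet, here is a sketch that counts worst-case comparisons for one particular sort (insertion sort; the counter is illustrative, not from the answer):

```python
def insertion_sort_comparisons(items):
    """Insertion sort that also counts element comparisons.
    Returns (sorted_list, number_of_comparisons)."""
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1                      # one comparison of a[j-1] vs a[j]
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]   # out of order: swap and keep going
                j -= 1
            else:
                break
    return a, comparisons

# Worst case (reverse-sorted input of length n): f(n) = n*(n-1)/2 comparisons,
# and that function f satisfies f(n) = O(n^2).
for n in (4, 8, 16):
    _, f_n = insertion_sort_comparisons(list(range(n, 0, -1)))
    assert f_n == n * (n - 1) // 2
```

Only once such a function f is pinned down does a statement like "f(n) = O(n^2)" have a precise meaning.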

The confusion comes about because people use shorthand to describe complexities. For example, they might say "this algorithm has runtime complexity O(n)". Here it's clear enough that "n" is some measure of the input, but there may not be a unique interpretation: in some contexts it's one of the inputs of the algorithm, in others it may be the size of the input, which may be the number of elements in a list or the size of the input in bits. (It's also implicit what "runtime" means here.) One confusing example is matrix operations, since an n-by-n matrix already suggests that "n" is the length of the matrix's side, rather than the size of the matrix.

There's no simple solution in general: if you don't understand what something means in the context it's used, then you have to either ask the person who wrote it, or figure out what makes the most sense. Just remember that big-O describes functions, not algorithms, and in contexts where it's apparently describing an algorithm, you have to figure out what the function that's actually being described is.

Upvotes: 2

Gari BN

Reputation: 1683

It is a matter of perspective:

You can refer to an array of n numbers as n. In this case you usually ignore the size of the elements in the array. That makes sense because usually, even if you sort long numbers, each of them is bounded by 64 bits, which is a constant. However, if the size of the numbers in the array is not constant, then operations like comparing or copying numbers depend on their length, and their length is the number of bits (to compare or copy numbers of m bits takes O(m)).

I will give you a better example (than an array of numbers): what is the complexity of computing and printing the output of the factorial function, n!?

Most people will say O(n), as there are n multiplications.

However, if you examine this more closely, the complexity must be Ω(n log n) (Omega, a lower bound). Why? Because just writing down n! requires Θ(n log n) bits, since log(n!) = Θ(n log n).
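A quick sanity check of that bit count (a sketch using Python's arbitrary-precision integers; the bounds come from the standard inequalities (n/2)^(n/2) <= n! <= n^n):

```python
import math

# Writing down n! takes Theta(n log n) bits, since log2(n!) = Theta(n log2 n).
# So merely printing n! already costs at least on the order of n log n steps.
for n in (10, 100, 1000):
    bits = math.factorial(n).bit_length()   # exact size of n! in bits
    upper = n * math.log2(n)                # from n! <= n^n
    lower = (n / 2) * math.log2(n / 2)      # from n! >= (n/2)^(n/2)
    assert lower <= bits <= upper + 1
```

The exact bit count sits well within those two Θ(n log n) envelopes, confirming the output-size argument.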

But give it another look: n is the input to the function, and the length of n (n written in bits) is log n. So with regard to the user's input, the complexity is indeed O(n).

I hope you get the point: it only depends on your perspective, and in most cases it does not matter, because complexity is measured with respect to the length of the input, so you will get the same result.

See my answer to a related question: https://cs.stackexchange.com/a/19130/12193

Upvotes: 0

amit

Reputation: 178491

In complexity theory, the complexity of an algorithm is calculated as a function of the size of the input.

In many cases, such as sorting arrays, we refer to the number of elements as n. Since we are usually dealing with fixed-size integers, this gives an input of size CONST*n. When we later calculate complexity in big-O notation, this CONST factor can be ignored, and we refer only to n, since constants are not significant in big-O notation.
For example, for a sorting solution:

O(CONST*n*log(CONST*n)) = O(CONST*n*(log(CONST)+log(n))) =
O(CONST*n*log(n) + CONST*n*log(CONST)) = O(CONST*n*log(n)) = O(nlogn)

However, in some cases the complexity is a function of a number whose representation in the input contains far fewer bits than the number itself.

Have a look at the subset sum problem.
In this problem you are given a set (array) of integers and a number W, and you need to check whether there is a subset that sums to W. There is a dynamic programming solution that solves this problem in O(n*W) (n is the number of elements in the set).
However, this solution is considered only pseudo-polynomial, since the input needed to represent W has size log(W). So if you denote the number of bits used in the input to represent W by x, you get a complexity of O(n*2^x), which is not polynomial in the size of the input.
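A sketch of that dynamic programming solution (variable names are illustrative):

```python
def subset_sum(nums, W):
    """Dynamic programming for subset sum: reachable[w] is True when some
    subset of the numbers seen so far sums to w. Time O(n*W), space O(W)."""
    reachable = [False] * (W + 1)
    reachable[0] = True                  # the empty subset sums to 0
    for x in nums:
        # Iterate downward so each number is used at most once.
        for w in range(W, x - 1, -1):
            if reachable[w - x]:
                reachable[w] = True
    return reachable[W]
```

Note that W itself drives the running time: adding one bit to W's representation roughly doubles the work, which is exactly the pseudo-polynomial behavior described above.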

Upvotes: 2
