Reputation: 19
Since time complexity is a function of the input size, assume the input is a single integer. Since the memory allocated to every integer is fixed (4 bytes), shouldn't the time complexity be constant, regardless of whether the input integer is 1 or 234345?
I know that the input size is the number of bits used to represent the input (which is roughly log2(n)), but then why do we say the size allocated to an integer data type is 4 bytes (or 2 bytes)? For example, the decimal number 100 can be represented in binary as 1100100, which is only 7 bits (about log2(100)), so why do we allocate 32 bits and waste the remaining 25?
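To make the question concrete, here is a minimal sketch (in Python, just for illustration) contrasting the minimal bit-length of a value with the fixed 32-bit slot a typical int type reserves; the 32-bit width is an assumption about the machine's int size:

    # Minimal bits needed to write n in binary vs. a fixed 32-bit allocation.
    def minimal_bits(n: int) -> int:
        """Bits actually required to represent n, i.e. floor(log2(n)) + 1 for n > 0."""
        return n.bit_length()

    for n in (1, 100, 234345):
        print(f"n = {n:>7}: needs {minimal_bits(n):2d} bits, "
              f"but a fixed-width int reserves 32 bits regardless")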
Upvotes: 1
Views: 2087
Reputation: 28279
When talking about complexity, we usually assume the input is an array of numbers, each of constant size. The size of the input is then proportional to the number of elements in the array.
However, when this is not the case, you have to agree on what you mean by "input size". There are several possibilities, the number of bits being an important one. Another definition that is sometimes used is the sum of the input values (or the single input value itself); this is also called the "unary representation".
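As a rough sketch of how the two conventions differ for a single integer input n (the function names here are just for illustration, not standard terminology):

    def size_in_bits(n: int) -> int:
        # standard binary encoding: the size grows like log2(n)
        return n.bit_length()

    def size_in_unary(n: int) -> int:
        # unary encoding: the size equals the value itself
        return n

    n = 100
    print(size_in_bits(n))   # 7   -> an algorithm doing n steps is exponential in this size
    print(size_in_unary(n))  # 100 -> the same algorithm is linear in this size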
If you have one input and it has constant size, complexity is meaningless. Whether that size is 8 bits or 32 bits, there is no complexity to speak of here at all.
Upvotes: 1