Reputation: 1437
I have the following problem:
Under what circumstances can multiplication be regarded as a unit time operation?
But I thought multiplication is always considered to be taking unit time. Was I wrong?
Upvotes: 3
Views: 1679
Reputation:
The expression "unit time" is a little ambiguous (and AFAIK not much used).
True unit time is achieved when the multiply is performed in a single clock cycle. This rarely occurs on modern processors.
If the execution time of the multiply does not depend on the particular values of the operands, we can say that it is performed in constant time.
When the operand length is bounded, so that the time never exceeds a given duration, we also say that an operation is performed in constant time.
This constant duration can be used as the timing unit of the running time, so that you count in "multiplies" instead of seconds (ops, flops).
Lastly, you can evaluate the performance of an algorithm in terms of the number of multiplies it performs, independently of the time they take.
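To make that last point concrete, here is a minimal Python sketch (the function name and inputs are illustrative, not from the question) that reports an algorithm's cost as a count of multiplies rather than in seconds:

```python
def dot_product(a, b):
    """Naive dot product that reports its cost in multiplies."""
    total = 0
    multiplies = 0
    for x, y in zip(a, b):
        total += x * y
        multiplies += 1   # one multiply per element pair
    return total, multiplies

# For length-n inputs the cost is exactly n multiplies,
# regardless of how long each multiply takes on the hardware.
value, cost = dot_product([1, 2, 3], [4, 5, 6])
print(value, cost)
```

Measured this way, two implementations can be compared by multiply count alone, independently of clock speed.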
Upvotes: 1
Reputation: 19168
Only in cases where you're performing operations on two numbers of a fixed-width numeric type (emphasis here, not going into the binary detail) can you simply assume that the operation takes constant time.
It's not defined as unit time but, more strictly, as a constant time interval that doesn't change as the numbers grow (though in reality the hardware can take subtly more time on larger operands). These costs are generally considered trivial, unless the numbers being multiplied can be arbitrarily large, like BigInteger in Java, etc.
But as soon as we move to multiplying n-digit binary strings, the complexity grows: the naive schoolbook method is O(n^2).
To improve on this, we can use a divide-and-conquer multiplication known as Karatsuba's algorithm, with complexity O(n^1.59), which replaces four recursive multiplications with three at the cost of a few extra additions.
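A minimal Python sketch of the idea (splitting by decimal digits for readability; real implementations split by machine words):

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers."""
    # Base case: single-digit operands are multiplied directly.
    if x < 10 or y < 10:
        return x * y
    # Split each number around the middle digit position m.
    n = max(len(str(x)), len(str(y)))
    m = n // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    # Three recursive multiplications instead of four:
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # (low_x + high_x)(low_y + high_y) - z2 - z0 = high_x*low_y + low_x*high_y
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z2 - z0
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678))  # → 7006652, same as 1234 * 5678
```

The recurrence is T(n) = 3T(n/2) + O(n), which solves to O(n^log2(3)) ≈ O(n^1.59).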
I hope I haven't misjudged the question; if so, please alert me so that I can remove this answer. If I understood it correctly, the other answer posted here seems incomplete.
Upvotes: 5
Reputation: 1851
It depends on what N is. If N is the number of bits in an arbitrarily large number, then as the number of bits increases, it takes longer to compute the product. However, in most programming languages and applications, numbers are capped at some reasonable width (usually 32 or 64 bits). In hardware, such numbers are multiplied in a fixed number of steps that does not depend on the values of the operands.
When the number of bits is fixed, like 32, it doesn't make sense to talk about asymptotic complexity, and you can treat multiplication as an O(1) operation in terms of whatever algorithm you're looking at. When N can become arbitrarily large, as with Java's BigInteger class, then the cost of multiplication depends on the size of those numbers, as does the memory required to store them.
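A quick illustration in Python, whose built-in int is arbitrary precision in the same spirit as Java's BigInteger (the choice of language here is mine, not the answer's): the product of two n-bit numbers needs about 2n bits, so both the work and the storage grow with the operands.

```python
n = 100_000
a = (1 << n) - 1          # an n-bit number: 100,000 one-bits
b = (1 << n) - 1
product = a * b           # cost grows with n; this is not O(1)
# (2**n - 1)**2 = 2**(2n) - 2**(n+1) + 1, which needs 2n bits.
print(product.bit_length())  # → 200000
```

With a fixed 64-bit type this doubling is impossible, which is exactly why fixed-width multiplication can be treated as constant time.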
Upvotes: 8