Reputation: 1337
I was wondering if the length of a double variable has an impact on multiplication time. For testing purposes, I wrote the following small piece of code:
#include <iostream>
#include <time.h>

int main() {
    double x = 123.456;
    double a = 1.23456;
    double b = 1.234;
    double d = 1.0;

    // experiment 1
    clock_t start = clock();
    for( unsigned long long int i = 0; i < 10000000; ++i ) {
        x *= a;
    }
    clock_t end = clock();
    std::cout << "123.456*1.23456 takes " << (double)(end-start)/CLOCKS_PER_SEC << " secs" << std::endl;

    // experiment 2
    start = clock();
    for( unsigned long long int i = 0; i < 10000000; ++i ) {
        x *= b;
    }
    end = clock();
    std::cout << "123.456*1.234 takes " << (double)(end-start)/CLOCKS_PER_SEC << " secs" << std::endl;

    // experiment 3
    start = clock();
    for( unsigned long long int i = 0; i < 10000000; ++i ) {
        x *= d;
    }
    end = clock();
    std::cout << "123.456*1.0 takes " << (double)(end-start)/CLOCKS_PER_SEC << " secs" << std::endl;

    return 0;
}
I compiled it using VS2008, 64-bit, in release mode, without optimization and without debug information. The result was not surprising: all three multiplications take essentially the same time, differing by just a few milliseconds. My question is: why is that so? If I make a mistake and multiply a number by 1.0 instead of 1, and I do not use any compiler optimization, then my multiplication will take much longer than multiplying that number by 1! When humans multiply, the shorter the numbers, the faster we arrive at the result. How does a computer multiply so that it does not matter how long the two numbers are?
Apart from that, I decided to check whether debugging influences runtime speed. In this case, it does not: whether I compile with the /DEBUG option or without it, the multiplication always takes exactly the same amount of time.
With optimization enabled (/O2), the same multiplication takes only a thousandth of a second. What does optimization do in this case? How can such a compact piece of code that multiplies two doubles in C++ be optimized?
I would be grateful for any explanation of what happens during double multiplication in C++.
Upvotes: 2
Views: 1085
Reputation: 30035
When humans multiply, the shorter the numbers, the faster we arrive at the result. How does a computer multiply so that it does not matter how long the two numbers are?
Computers used to work the same way humans do: calculate one digit at a time and add up the partial results to get the answer. However, the amount of hardware that can be packed into a single chip has advanced to the point where it is possible to dedicate a circuit to each digit, so all the digits are calculated at the same time, in parallel. Of course it is all done in binary, but the principle is the same.
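Here is a minimal C++ sketch of that digit-at-a-time (shift-and-add) idea for unsigned integers; the function name and types are just for illustration, and real hardware forms all of these partial products at once rather than looping over them:

#include <cstdint>
#include <iostream>

// Shift-and-add multiplication: look at one bit of b at a time and add a
// correspondingly shifted copy of a. A hardware multiplier generates all
// of these partial products simultaneously instead of iterating.
uint64_t shift_and_add_multiply(uint64_t a, uint64_t b) {
    uint64_t result = 0;
    while (b != 0) {
        if (b & 1)       // current bit of b is set: add the shifted copy of a
            result += a;
        a <<= 1;         // next partial product is a shifted left by one bit
        b >>= 1;         // move on to the next bit of b
    }
    return result;
}

int main() {
    std::cout << shift_and_add_multiply(123, 456) << std::endl; // prints 56088
    return 0;
}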
Upvotes: 1
Reputation: 21086
Floats (single precision) are 32 bits and doubles are 64 bits.
http://en.wikipedia.org/wiki/IEEE_754-2008
On an Intel/AMD processor, the FPU (x87) or the SIMD unit (SSEx) performs the multiplication in a constant number of cycles. The speed is determined by the instruction's latency, its throughput, and the underlying micro-operations, not by the values of the operands.
http://www.agner.org/optimize/instruction_tables.pdf
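To make the latency/throughput distinction concrete, here is a small C++11 sketch (std::chrono; the factor, iteration count, and four-way unrolling are arbitrary choices, and the exact cycle counts are in the tables linked above). Compiled with optimization, the chain of dependent multiplications is limited by the multiplier's latency, while independent multiplications overlap in the pipeline and approach its throughput; in both cases the timing does not depend on the operand values:

#include <chrono>
#include <iostream>

// Every multiplication depends on the previous result, so this loop is
// limited by the multiplier's latency (several cycles per multiply).
double dependent_chain(double a, unsigned long long n) {
    double x = 1.0;
    for (unsigned long long i = 0; i < n; ++i)
        x *= a;
    return x;
}

// Four independent chains can overlap in the pipeline, so this loop
// approaches the multiplier's throughput (roughly one multiply per cycle
// on typical x86 cores), even though each multiply has the same latency.
double independent_chains(double a, unsigned long long n) {
    double x0 = 1.0, x1 = 1.0, x2 = 1.0, x3 = 1.0;
    for (unsigned long long i = 0; i < n; i += 4) {
        x0 *= a; x1 *= a; x2 *= a; x3 *= a;
    }
    return x0 + x1 + x2 + x3;
}

int main() {
    const unsigned long long n = 100000000ULL;   // 100M multiplications each
    const double a = 1.0000000001;               // close to 1 so values stay finite

    auto t0 = std::chrono::steady_clock::now();
    double r1 = dependent_chain(a, n);
    auto t1 = std::chrono::steady_clock::now();
    double r2 = independent_chains(a, n);
    auto t2 = std::chrono::steady_clock::now();

    // The results are printed so the optimizer cannot remove the loops.
    std::cout << "dependent:   " << std::chrono::duration<double>(t1 - t0).count()
              << " s (result " << r1 << ")\n"
              << "independent: " << std::chrono::duration<double>(t2 - t1).count()
              << " s (result " << r2 << ")\n";
    return 0;
}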
Upvotes: 2
Reputation: 96266
With optimization enabled (/O2), the same multiplication takes only a thousandth of a second. What does optimization do in this case?
Since you never use your result (x), it's completely valid to eliminate all the multiplications. Try displaying the results.
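For instance, here is a minimal variant of the question's first experiment where the only change is that x is printed afterwards. Making the result observable prevents the optimizer from throwing the loop away as dead code (the value itself overflows to infinity, but that does not matter for the measurement):

#include <iostream>
#include <time.h>

int main() {
    double x = 123.456;
    double a = 1.23456;

    clock_t start = clock();
    for (unsigned long long int i = 0; i < 10000000; ++i) {
        x *= a;
    }
    clock_t end = clock();

    // Printing x makes the result of the loop observable, so an optimizing
    // compiler can no longer eliminate the multiplications as dead code.
    std::cout << "x = " << x
              << ", took " << (double)(end - start) / CLOCKS_PER_SEC
              << " secs" << std::endl;
    return 0;
}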
Also note that you're doing 10M multiplications; a modern processor gives you at least 1G clock cycles per second, and in this case it is executing a very tight loop.
Upvotes: 1
Reputation: 8180
The variables always have the same length; only the values differ. In other words, the operations to perform at the hardware level are the very same, so they take the same time. For example, with an integer, multiplying by 0 (zero out the result, i.e. transfer 0s into the destination register) takes the same time as multiplying by 1 (copy the operand into the destination register).
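A quick way to see the first point (a trivial sketch; the variable names are arbitrary): the number of digits you write in the source has no effect on the stored size of a double, which is always the same.

#include <iostream>

int main() {
    double short_looking = 1.0;
    double long_looking  = 123.456789012345;

    // Both objects occupy the same number of bytes (8 on typical platforms):
    // the number of digits in the literal does not change the stored format.
    std::cout << sizeof(short_looking) << " " << sizeof(long_looking) << std::endl;
    return 0;
}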
Upvotes: 2