Reputation: 381
Can anyone tell me how the time taken by GMP differs when I use operands of different sizes? For example, the code below:
#include <stdio.h>
#include <gmp.h>
#include <stdlib.h>
#include <time.h>
#define REPEAT 10000
void full_mult(mpz_t r, mpz_t a, mpz_t b)
{
    mpz_t temp;
    mpz_init(temp);
    mpz_mul(r, a, b);
    mpz_add(temp, a, b);
    mpz_sub(a, a, b);
    mpz_mul(temp, temp, a);
    /* the above code 10 more times */
}
void half_mult(mpz_t r, mpz_t a, mpz_t b)
{
    mpz_t temp;
    mpz_init(temp);
    mpz_mul(r, a, b);
    mpz_add(temp, a, b);
    mpz_sub(a, a, b);
    mpz_mul(temp, temp, a);
    /* the above code ten more times */
}
int main(void)
{
    long int i;
    clock_t start, end;
    gmp_randstate_t state;
    gmp_randinit_mt(state);
    mpz_t a[REPEAT];
    mpz_t b[REPEAT];
    mpz_t a1[REPEAT];
    mpz_t b1[REPEAT];
    mpz_t r[REPEAT];
    mpz_t r1[REPEAT];
    for (i = 0; i < REPEAT; i++)
    {
        mpz_init(a[i]);  mpz_init(b[i]);
        mpz_init(a1[i]); mpz_init(b1[i]);
        mpz_init(r[i]);  mpz_init(r1[i]);
    }
    for (i = 0; i < REPEAT; i++)
    {
        mpz_urandomb(a[i], state, 128);
        mpz_urandomb(b[i], state, 128);
    }
    start = clock();
    for (i = 0; i < REPEAT; i++)
        half_mult(r[i], a[i], b[i]);
    end = clock();
    printf("Number of seconds: %f\n", (end - start) / (double)CLOCKS_PER_SEC);
    for (i = 0; i < REPEAT; i++)
    {
        mpz_urandomb(a1[i], state, 256);
        mpz_urandomb(b1[i], state, 256);
    }
    start = clock();
    for (i = 0; i < REPEAT; i++)
        full_mult(r1[i], a1[i], b1[i]);
    end = clock();
    printf("Number of seconds: %f\n", (end - start) / (double)CLOCKS_PER_SEC);
    return 0;
}
As you can see, I am trying to measure the timing of operations on two sizes of integers: one with 256 bits and one with 128 bits. But I did not get any conclusive results from this code: sometimes the 128-bit operations take longer, sometimes the 256-bit operations do.
Upvotes: 1
Views: 316
Reputation: 7825
Try a larger sample. I changed REPEAT to 10000000 and changed these declarations:
mpz_t a[REPEAT];
mpz_t b[REPEAT];
mpz_t a1[REPEAT];
mpz_t b1[REPEAT];
mpz_t r[REPEAT];
mpz_t r1[REPEAT];
to
static mpz_t a[REPEAT];
static mpz_t b[REPEAT];
static mpz_t a1[REPEAT];
static mpz_t b1[REPEAT];
static mpz_t r[REPEAT];
static mpz_t r1[REPEAT];
So after 3 runs I got:
$ gcc -O2 gmp_bench.c -lgmp
$ time ./a.out
Number of seconds: 12.689352
Number of seconds: 18.295134
./a.out 34.54s user 1.27s system 99% cpu 35.820 total
$ time ./a.out
Number of seconds: 12.647052
Number of seconds: 17.918326
./a.out 34.08s user 1.35s system 99% cpu 35.426 total
$ time ./a.out
Number of seconds: 12.647854
Number of seconds: 18.106714
./a.out 34.29s user 1.28s system 99% cpu 35.581 total
$
By monitoring the execution I noticed that the allocated memory grows constantly, so allocation overhead may cost more than the multiplication algorithm itself.
Upvotes: 0
Reputation: 6667
According to section 15.1 of the GMP documentation, the library uses different multiplication algorithms for different operand sizes. Look at the thresholds table:
| Algorithm | Threshold |
|-----------|----------------------|
| Basecase | (none) |
| Karatsuba | MUL_TOOM22_THRESHOLD |
| Toom-3 | MUL_TOOM33_THRESHOLD |
| Toom-4 | MUL_TOOM44_THRESHOLD |
| Toom-6.5 | MUL_TOOM6H_THRESHOLD |
| Toom-8.5 | MUL_TOOM8H_THRESHOLD |
| FFT | MUL_FFT_THRESHOLD |
So, since the algorithms differ, the timings may differ too.
Upvotes: 1