Reputation: 3130
I am trying to measure the CPU time of the following code:

struct timespec time1, time2, temp_time;
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time1);
long diff = 0;
for(int y=0; y<n; y++) {
    for(int x=0; x<n; x++) {
        float v = 0.0f;
        for(int i=0; i<n; i++)
            v += a[y * n + i] * b[i * n + x];
        c[y * n + x] = v;
    }
}
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time2);
temp_time.tv_sec = time2.tv_sec - time1.tv_sec;
temp_time.tv_nsec = time2.tv_nsec - time1.tv_nsec;
diff = temp_time.tv_sec * 1000000000 + temp_time.tv_nsec;
printf("finished calculations using CPU in %ld ms \n", (double) diff/1000000);
But the time value is negative when I increase the value of n. The code prints a correct value for n = 500, but it prints a negative value for n = 700. Any help would be appreciated.
Here is the full code structure:
void run(float A[], float B[], float C[], int nelements){
    struct timespec time1, time2, temp_time;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time1);
    long diff = 0;
    for(int y=0; y<nelements; y++) {
        for(int x=0; x<nelements; x++) {
            float v = 0.0f;
            for(int i=0; i<nelements; i++)
                v += A[y * nelements + i] * B[i * nelements + x];
            C[y * nelements + x] = v;
        }
    }
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time2);
    temp_time.tv_sec = time2.tv_sec - time1.tv_sec;
    temp_time.tv_nsec = time2.tv_nsec - time1.tv_nsec;
    diff = temp_time.tv_sec * 1000000000 + temp_time.tv_nsec;
    printf("finished calculations using CPU in %ld ms \n", (double) diff/1000000);
}
This function above is called from a different file as follows:
SIZE = 500;
a = (float*)malloc(SIZE * SIZE * sizeof(float));
b = (float*)malloc(SIZE * SIZE * sizeof(float));
c = (float*)malloc(SIZE * SIZE * sizeof(float));
//initialize a &b
run(&a[SIZE],&b[SIZE],&c[SIZE],SIZE);
Upvotes: 0
Views: 2121
Reputation: 22318
The 'tv_nsec' field should never reach 10^9 (1000000000), for obvious reasons:
if (time1.tv_nsec < time2.tv_nsec)
{
    int adj = (time2.tv_nsec - time1.tv_nsec) / (1000000000) + 1;
    time2.tv_nsec -= (1000000000) * adj;
    time2.tv_sec += adj;
}
if (time1.tv_nsec - time2.tv_nsec > (1000000000))
{
    int adj = (time1.tv_nsec - time2.tv_nsec) / (1000000000);
    time2.tv_nsec += (1000000000) * adj;
    time2.tv_sec -= adj;
}
temp_time.tv_sec = time1.tv_sec - time2.tv_sec;
temp_time.tv_nsec = time1.tv_nsec - time2.tv_nsec;
diff = temp_time.tv_sec * (1000000000) + temp_time.tv_nsec;
This code could be simplified, as it makes no assumptions about the sign of the 'tv_sec' field. Most Linux sys headers (and glibc?) provide macros to handle this sort of timespec arithmetic correctly, don't they?
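For comparison, here is a minimal sketch of such a subtraction helper (the name timespec_diff is mine, not from any system header). It assumes the end time is not earlier than the start time and handles the nanosecond borrow in a single step:

#include <time.h>

/* Hypothetical helper: computes end - start, keeping tv_nsec in [0, 1e9).
   Assumes end is not earlier than start. */
static struct timespec timespec_diff(struct timespec start, struct timespec end)
{
    struct timespec d;
    d.tv_sec  = end.tv_sec  - start.tv_sec;
    d.tv_nsec = end.tv_nsec - start.tv_nsec;
    if (d.tv_nsec < 0) {      /* borrow one second from tv_sec */
        d.tv_sec  -= 1;
        d.tv_nsec += 1000000000L;
    }
    return d;
}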
Upvotes: 0
Reputation: 104514
Look at your print statement:
printf("finished calculations using CPU in %ld ms \n", (double) diff/1000000);
The second parameter you pass is a double, but you are printing out this floating point value as a long (%ld). I suspect that's half your problem.
This may generate better results:
printf("finished calculations using CPU in %f ms \n", diff/1000000.0);
I also agree with keety: you likely should be using unsigned types. Or you could possibly avoid the overflow issues altogether by staying in millisecond units instead of nanoseconds. That is why I stick with 64-bit unsigned integers and just stay in the millisecond realm.
unsigned long long diffMilliseconds;
diffMilliseconds = (time2.tv_sec * 1000LL + time2.tv_nsec/1000000) - (time1.tv_sec * 1000LL + time1.tv_nsec/1000000);
printf("finished calculations using CPU in %llu ms \n", diffMilliseconds);
Upvotes: 0
Reputation: 8660
One possible cause of the problem is that the printf format specifier is for a long signed integer value (%ld), but the parameter has type double. To fix the problem, it is necessary to change %ld to %lf in the format string.
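Applied to the question's code, the corrected statement would read:

printf("finished calculations using CPU in %lf ms \n", (double) diff/1000000);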
Upvotes: 0