cgyed

Reputation: 1

nanosleep() waits for at least 15 ms?

To get accurate timing in an emulator, I need to wait for around 50 microseconds. When I use nanosleep(), for example with a duration of 10 microseconds, I measure a delay of about 15 milliseconds! I am using MinGW on Windows with the following example code:

#include <stddef.h>
#include <time.h>
#include <stdio.h>

int main()
{
    struct timespec startx, endx;
    struct timespec req={0};
    
    req.tv_sec=0;
    req.tv_nsec=10000; // 10_000 ns = 10 microseconds
    
    for (int i=0; i<10; i++)
    {
        clock_gettime(CLOCK_MONOTONIC, &startx);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &endx);
        
        // if the sleep crossed a one-second boundary, add that second back in microseconds
        if (endx.tv_sec-startx.tv_sec)
            printf("%li microsecs\n",((endx.tv_nsec-startx.tv_nsec)/1000)+1000000);
        else
            printf("%li microsecs\n",(endx.tv_nsec-startx.tv_nsec)/1000);
    }
    
    return 0;
}

And the result is:

12528 microsecs
19495 microsecs
14890 microsecs
14229 microsecs
14657 microsecs
14824 microsecs
14724 microsecs
21074 microsecs
13697 microsecs
13893 microsecs

I guess I'm wrong somewhere... if someone has an idea, I'd appreciate it. I could also avoid the problem with a busy-wait ("do nothing while (end - start) < 50 µs"), as sketched below.
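
For reference, here is roughly the busy-wait I have in mind (just a sketch; spin_wait_us is a name I made up, and it burns a full CPU core while spinning):

#include <time.h>

/* Spin until at least 'us' microseconds have elapsed on the
   monotonic clock. Accurate, but wastes CPU while waiting. */
static void spin_wait_us(long us)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000000L
             + (now.tv_nsec - start.tv_nsec) / 1000 < us);
}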

Thanks.

Upvotes: 0

Views: 1657

Answers (1)

user3629249

Reputation: 16540

When I run the posted code through a compiler (gcc), the compiler outputs the following:

gcc -ggdb3 -Wall -Wextra -Wconversion -pedantic -std=gnu11 -c "untitled1.c" -o "untitled1.o" 

untitled1.c: In function ‘main’:

untitled1.c:16:57: error: ‘err’ undeclared (first use in this function)
   16 |         if (nanosleep(&req, NULL)) printf("err : %i\n", err);
      |                                                         ^~~
untitled1.c:16:57: note: each undeclared identifier is reported only once for each function it appears in

untitled1.c:18:18: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘__syscall_slong_t’ {aka ‘long int’} [-Wformat=]
   18 |         printf("%d microsecs %d\n",(endx.tv_nsec-startx.tv_nsec)/1000, endx.tv_sec-startx.tv_sec);
      |                 ~^                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                  |                                              |
      |                  int                                            __syscall_slong_t {aka long int}
      |                 %ld

untitled1.c:18:31: warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘__time_t’ {aka ‘long int’} [-Wformat=]
   18 |         printf("%d microsecs %d\n",(endx.tv_nsec-startx.tv_nsec)/1000, endx.tv_sec-startx.tv_sec);
      |                              ~^                                        ~~~~~~~~~~~~~~~~~~~~~~~~~
      |                               |                                                   |
      |                               int                                                 __time_t {aka long int}
      |                              %ld
Compilation failed.

When compiling, always enable the warnings, then fix those warnings.

To start, I suggest casting the results of the timespec arithmetic to long (or using the %ld format specifier) so that they match the printf() formats.
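
For example, the elapsed-time computation and printout could look something like this (a sketch against the variables in the posted code):

/* Fold the seconds and nanoseconds into one value and print it
   with a format specifier that matches the (long) argument type. */
long elapsed_us = (long)(endx.tv_sec - startx.tv_sec) * 1000000L
                + (long)(endx.tv_nsec - startx.tv_nsec) / 1000;
printf("%ld microsecs\n", elapsed_us);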

As to your question:

Context switches take time, the CPU(s) are probably busy with other tasks, and the task scheduler only examines the list of ready-to-run tasks periodically, so your request will not be serviced immediately. On Windows in particular, the default timer/scheduler tick is about 15.6 ms (64 Hz), which matches the roughly 15 ms minimum delay you are measuring.
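
You can shrink (but not eliminate) that tick with the Windows multimedia timer API. A minimal sketch, assuming MinGW-w64 and linking with -lwinmm; whether nanosleep() honors the reduced tick is something to verify on your setup (Sleep() does):

#include <stdio.h>
#include <time.h>
#include <windows.h>            /* timeBeginPeriod(), timeEndPeriod() */

int main(void)
{
    struct timespec req = { 0, 1000000 };   /* ask for a 1 ms sleep */
    struct timespec a, b;

    timeBeginPeriod(1);         /* request ~1 ms timer granularity */

    clock_gettime(CLOCK_MONOTONIC, &a);
    nanosleep(&req, NULL);
    clock_gettime(CLOCK_MONOTONIC, &b);

    printf("%ld microsecs\n",
           (long)(b.tv_sec - a.tv_sec) * 1000000L
           + (long)(b.tv_nsec - a.tv_nsec) / 1000);

    timeEndPeriod(1);           /* restore the default granularity */
    return 0;
}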

To obtain more accurate timing, do not depend on the OS. Rather, use one of the hardware timers to produce an interrupt event when you want the elapsed time to expire, then use an interrupt handler to service that event from the hardware timer. (A normal user-mode program cannot hook hardware timer interrupts directly, so in practice this means a kernel driver, the reduced-tick approach above, or a busy-wait.)

Upvotes: 1
