Yang Song

Reputation: 11

how a cpu bound process switches between kernel and user mode

Recently I've been quite confused about how the Linux kernel handles signals.

I know the kernel calls a user-registered signal handler when a process returns from kernel mode (just before it resumes running in user mode). The most common triggers are a syscall return or a kernel scheduling event. So I wrote a CPU-bound program that makes almost no syscalls, and ran it on a machine with very low load.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

#define USECREQ 10
#define LOOPS 1000

void event_handler(int signum)
{
    static unsigned long cnt = 0;
    static struct timeval tsFirst;
    if (cnt == 0)
    {
        gettimeofday(&tsFirst, 0);
    }
    cnt++;
    if (cnt >= LOOPS)
    {
        struct timeval tsNow;
        struct timeval diff;
        setitimer(ITIMER_REAL, NULL, NULL);
        gettimeofday(&tsNow, 0);
        timersub(&tsNow, &tsFirst, &diff);
        unsigned long long udiff = (diff.tv_sec * 1000000) + diff.tv_usec;
        double delta = (double)(udiff / cnt) / 1000000;
        int hz = (unsigned)(1.0 / delta);
        printf("kernel timer interrupt frequency is approx. %d Hz", hz);
        if (hz >= (int)(1.0 / ((double)(USECREQ) / 1000000)))
        {
            printf(" or higher");
        }
        printf("\n");
        exit(0);
    }
}

int main(int argc, char **argv)
{
    struct sigaction sa;
    struct itimerval timer;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = &event_handler;
    sigaction(SIGALRM, &sa, NULL);
    timer.it_value.tv_sec = 0;
    timer.it_value.tv_usec = USECREQ;
    timer.it_interval.tv_sec = 0;
    timer.it_interval.tv_usec = USECREQ;
    setitimer(ITIMER_REAL, &timer, NULL);
    while (1) ;
}

Since the machine has very low load (24 cores, no other tasks), I assume there are not many scheduling events.

I thought the only kernel-to-user-mode transitions would come from the clock interrupt, which fires roughly every 4 ms or 10 ms. That would mean the kernel cannot call event_handler more than 1 s / 4 ms = 250 times per second. But surprisingly, the reported frequency is about 100000 Hz and the program finishes in only about 0.011 s (see the output below). Does this mean there were about 111111 kernel scheduling events?

// output
[root@my-centos ~]# time ./a.out
kernel timer interrupt frequency is approx. 111111 Hz or higher

real    0m0.011s
user    0m0.009s
sys     0m0.001s

What exactly is the moment at which the kernel delivers (as opposed to generates) a signal to a process?

Upvotes: 0

Views: 109

Answers (1)

MankPan

Reputation: 124

Firstly, this line is wrong, because you cast to double only after the division, which is therefore performed in integer arithmetic:

double delta = (double)(udiff / cnt) / 1000000;

It should be:

double delta = ((double)udiff / cnt) / 1000000;

With that fix, the value of delta will be very close to USECREQ / 1000000, i.e. approximately 0.000010 seconds. BUT this line is also wrong for your purpose: by dividing udiff by cnt you are computing the AVERAGE time (in seconds) between the SIGALRM signals the process received, whereas you need the accumulated time.

To know how many interrupts were delivered per second, the line should be:

double delta = (double)udiff / 1000000;

It will be around 100 Hz.

Upvotes: 0
