ajishalfred

Reputation: 277

How to generate a delay

I'm new to kernel programming and I'm trying to understand some basics of the OS. I am trying to generate a delay using a technique which I've implemented successfully on a 20 MHz microcontroller. I know this is a totally different environment, as I'm using Linux CentOS on my 2 GHz Core 2 Duo processor. I've tried the following code, but I'm not getting a delay.

#include <linux/kernel.h>
#include <linux/module.h>

int init_module (void)
{
        unsigned long int i, j, k, l;

        for (l = 0; l < 100; l ++)
        {
                for (i = 0; i < 10000; i ++)
                {
                        for ( j = 0; j < 10000; j ++)
                        {
                                for ( k = 0; k < 10000; k ++);
                        }
                }
        }

        printk ("\nhello\n");

        return 0;
}

void cleanup_module (void)
{
        printk ("bye");
}

When I run dmesg after inserting the module, as quickly as possible for me, the string "hello" is already there. If my calculation is right, the above code should give me at least a 10 second delay. Why is it not working? Is there anything related to threading? How could a 2 GHz processor execute the above code instantly, without any noticeable delay?

Upvotes: 1

Views: 822

Answers (2)

Hasturkun

Reputation: 36402

The compiler is optimizing your loop away since it has no side effects.
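If you just want to keep a busy loop like that from being elided, one rough sketch (the busy_spin() wrapper here is only illustrative) is to make the counters volatile, so every iteration is a visible side effect the compiler must keep; note that it still burns CPU for the whole duration:

/* illustrative only: volatile counters stop the dead-code elimination */
static void busy_spin(void)
{
        volatile unsigned long i, j;

        for (i = 0; i < 10000; i++)
                for (j = 0; j < 100000; j++)
                        ;       /* every access to i and j must happen */
}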

To actually get a 10 second (non-busy) delay, you can do something like this:

#include <linux/jiffies.h> /* jiffies, time_before() */
#include <linux/sched.h>   /* schedule() */
//...

unsigned long to = jiffies + (10 * HZ); /* current time + 10 seconds */

while (time_before(jiffies, to))
{
    schedule(); /* yield the processor to other tasks while we wait */
}

or better yet:

#include <linux/delay.h>
//...

msleep(10 * 1000);

For short delays you may use mdelay(), ndelay(), and udelay(). These busy-wait for the requested number of milliseconds, nanoseconds, or microseconds, so they are meant for delays too short to be worth sleeping for.
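For example (the intervals below are arbitrary, purely to show the unit each helper takes):

#include <linux/delay.h>
//...

ndelay(100);    /* busy-wait for about 100 nanoseconds */
udelay(50);     /* busy-wait for about 50 microseconds */
mdelay(2);      /* busy-wait for about 2 milliseconds  */
msleep(20);     /* sleeps instead of spinning, for ~20 ms */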

I suggest you read Linux Device Drivers, 3rd edition, chapter 7.3, which deals with delays, for more information.

Upvotes: 2

CLo

Reputation: 3730

To answer the question directly: it's likely that your compiler sees that these loops don't do anything and "optimizes" them away.

As for the technique itself, it looks like you're trying to use all of the processor to create a delay. While this may work, an OS should be designed to maximize processor time, and this just wastes it.

I understand it's experimental, but consider this a heads-up.

Upvotes: 1
