I came across the following code in the kernel:
/*
* Have the 32 bit jiffies value wrap 5 minutes after boot
* so jiffies wrap bugs show up earlier.
*/
#define INITIAL_JIFFIES ((unsigned long)(unsigned int) (-300*HZ))
static inline u32 cstamp_delta(unsigned long cstamp)
{
	return (cstamp - INITIAL_JIFFIES) * 100UL / HZ;
}
where the cstamp value is in jiffies.
This code is from net/ipv4/devinet.c, where per-interface IP addresses are implemented (among other things).
I see that the INITIAL_JIFFIES macro takes the value of 5 minutes (300 seconds), converts it to jiffies (-300*HZ), and that the typecast ensures correct value wrapping. But why is it explicitly set to a negative value (-300*HZ)? Also, I'm not sure what units cstamp_delta() returns.
why is it explicitly set to a negative value (-300*HZ)?
It's not a negative value. The value is cast to unsigned int, so it's positive, and more precisely exactly UINT_MAX + 1 - 300*HZ, which means jiffies starts 5 minutes before the 32-bit counter wraps around. This is, as the comment states, to detect errors in parts of the code that incorrectly handle jiffies as a 32-bit value.
I'm not sure what units cstamp_delta() returns?
Well, cstamp - INITIAL_JIFFIES just calculates the total number of ticks since kernel boot. Dividing this value by HZ would give the total number of seconds since boot. Since the value is multiplied by 100 first, though, the final result is the total number of hundredths of a second elapsed since boot.
Since u32 is used as the return type, the value returned by this function will of course wrap around relatively "soon", after about 1 year and 4 months of uptime (2^32/100/60/60/24/365).