Reputation: 11986
In section 7.23.2.2, paragraphs 2 and 3 of the C99 standard:
2 The difftime function computes the difference between two calendar times: time1 - time0.

3 The difftime function returns the difference expressed in seconds as a double.
The C99 standard does not appear to define a minimum granularity for the difftime function, only the unit in which the difference is expressed (seconds). Are there any implicit guarantees about the granularity of the difftime function?
I can reason that the minimum possible granularity on any given implementation depends on the granularity of that implementation's underlying representation of the time_t data type (since that is what is passed to difftime) -- but from my understanding of the text, difftime isn't required to return the difference at that minimum granularity, and could in fact return it at a much coarser one.
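For example, here is a minimal sketch of what I mean (it assumes nothing beyond standard C, and only observes the clock exposed through time(), which is of course itself implementation-dependent):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t start = time(NULL);
        time_t now;

        /* Spin until the value reported by time() changes at all. */
        do {
            now = time(NULL);
        } while (difftime(now, start) == 0.0);

        /* On a typical implementation where time_t counts whole seconds, this
           prints 1.000000 (or more), never a fractional step, even though
           difftime returns a double. */
        printf("smallest observed step: %f seconds\n", difftime(now, start));
        return 0;
    }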
Upvotes: 2
Views: 630
Reputation: 16243
There are no such guarantees made by the C standard. time_t is an arithmetic type capable of representing the implementation's best approximation of a calendar time (with implementation-defined range and precision). difftime computes the difference between two such time_t values and returns the result in seconds. That's about all the standard guarantees. Specifically, it does not say how granular or precise the value returned by difftime is (but it can reasonably be assumed to have the same granularity as the time_t type on the same implementation).
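For reference, the declaration in <time.h> is simply:

    double difftime(time_t time1, time_t time0);

The double return type only describes how the difference is expressed, not how precisely it was measured.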
The POSIX standard gives one further guarantee, by specifying that time_t holds the number of seconds since the epoch (as either an integer or a real-floating type) -- i.e. it fixes the unit. Again, though, it does not say anything about the granularity.
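As an illustration, here is a sketch assuming a POSIX system with an integral time_t (the two timestamp values are made up for the example):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* Hypothetical timestamps, in seconds since the epoch (POSIX semantics). */
        time_t t0 = (time_t)1000000000;
        time_t t1 = (time_t)1000000075;

        /* With an integral time_t this is effectively (double)(t1 - t0): the
           unit is fixed at seconds, but the granularity is still whatever
           time_t itself can represent -- here, whole seconds. Prints 75.000000. */
        printf("%f\n", difftime(t1, t0));
        return 0;
    }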
The main reason I can think of for neither standard specifying the granularity is that it is hardware dependent, and both standards specify the behavior of software (potentially running on a wide variety of hardware). They don't control the system clock, so they can't make any assumptions about its granularity.
Upvotes: 3