Duane Royed Dsilva

Reputation: 101

If sizeof(long) is 4 bytes on my machine, can I infer that sizeof(long long) will be 8 bytes?

printf("Size of long int: %zu\n", sizeof(long));
printf("Size of long long int: %zu\n", sizeof(long long));

When run on my 64-bit machine, the output is:

Size of long int: 8
Size of long long int: 8

Upvotes: 2

Views: 269

Answers (1)

Marco Bonelli

Reputation: 69367

It depends.

Standard-wise, in general you cannot infer anything about the size of one type from the size of another, beyond a lower bound (which depends on the value of CHAR_BIT). See below for a more detailed answer.

Implementation-wise, if the compiler is standard-compliant and CHAR_BIT is 8 (by far the most common case), then sizeof(long) must be at least 4 and sizeof(long long) must be at least 8. Leaving aside esoteric architectures where strange padding bits appear (for example the Itanium processor with its "Not a Thing" reserved bit), it is reasonable to assume sizeof(long long) == 8 under those circumstances.

In the case of GCC for Intel x86 and ARM, for example, this assumption holds true as far as I know, so the answer to your question would be "yes". However, I would strongly advise against basing any design choice on such an assumption when it can easily be tested at compile time, like this:


Standard-wise, more precisely: the C standard does not dictate exactly what the size of an integer type must be. It only defines a minimum range of values the type must be capable of representing. For C99 those ranges are specified in ISO/IEC 9899:1999 §5.2.4.2 "Numerical limits" (page 21 here).

The only implicit requirement on the size of signed integer types is in §6.2.6.2 "Integer types":

  1. For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter). If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N−1), so that objects of that type shall be capable of representing values from 0 to 2^N − 1 using a pure binary representation; this shall be known as the value representation. The values of any padding bits are unspecified.

  2. For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N).

The first point implicitly poses a lower bound on the number of bits of an integer type: since the standard defines the minimum range of values each type must be capable of representing, and also states that N value bits represent values from 0 to 2^N − 1 in pure binary, each integer type must have at least enough value bits to cover its minimum range.

However, given the possibility of padding bits, there is no guaranteed upper bound on the size of integer types. Therefore, in general, not only is there no guarantee that sizeof(long) == 4 implies sizeof(long long) == 8, there is not even a guarantee that sizeof(long long) >= sizeof(long) in the first place. The same applies to the other integer types.

For what concerns the standard, a compliant implementation could even have:

  • CHAR_BIT == 8
  • sizeof int == 10, with 31 bits for value, 1 for sign and 48 for padding
  • sizeof long == 8, with 63 bits for value, 1 for sign and 0 for padding.

Upvotes: 3