Reputation: 2534
While reading someone's code, I found that the author bothered to write an explicit type cast:
#define ULONG_MAX ((unsigned long int) ~(unsigned long int) 0)
When I write this code instead:
#include <stdio.h>
int main(void)
{
    unsigned long int max;
    max = ~(unsigned long int)0;
    printf("%lx", max);
    return 0;
}
it works just as well. Is the explicit cast just meaningless coding style?
Upvotes: 0
Views: 149
Reputation: 78923
The code you read is very bad, for several reasons.
First of all, user code should never define ULONG_MAX. This is a reserved identifier and must be provided by the compiler implementation.
That definition is not suitable for use in a preprocessor #if. The _MAX macros for the basic integer types must be usable there.
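To illustrate (BAD_ULONG_MAX is a hypothetical name; the exact diagnostic varies by compiler):
#define BAD_ULONG_MAX ((unsigned long int) ~(unsigned long int) 0)

/* A test such as

       #if BAD_ULONG_MAX > 0xFFFFFFFF

   does not compile: in a #if expression every remaining identifier is
   replaced by 0, so the compiler sees

       ((0 0 0) ~(0 0 0) 0) > 0xFFFFFFFF

   which is a syntax error rather than a value. */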
(unsigned long)0 is just crap. Everybody should just use 0UL, unless you know that you have a compiler that is not compliant with the recent C standards in that respect. (I don't know of any.)
Even ~0UL should not be used for that value, since unsigned long may (theoretically) have padding bits. -1UL is more appropriate, because it doesn't deal with the bit pattern of the value; it uses the guaranteed arithmetic properties of unsigned integer types. -1, converted to an unsigned type, is always the maximum value of that type. So ~ may only be used in a context where you are absolutely certain that unsigned long has no padding bits, and even then it makes no sense; -1 serves better.
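A minimal check of that conversion rule (guaranteed by the standard, independent of any bit pattern):
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* converting -1 to an unsigned type always yields that type's
       maximum value, padding bits or not */
    unsigned long max = -1;
    printf("%d\n", max == ULONG_MAX);  /* prints 1 */
    return 0;
}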
"recasting" an expression that is known to be unsigned long
is just superfluous, as you observed. I can't imagine any compiler that bugs on that.
Recasting an expression may make sense when it is used in the preprocessor, but only under very restricted circumstances, and such expressions are interpreted differently there.
#if ((uintmax_t)-1UL) == SOMETHING
..
#endif
Here the value on the left evaluates to UINTMAX_MAX in the preprocessor, and also in later compiler phases provided unsigned long is as wide as uintmax_t. So
#define UINTMAX_MAX ((uintmax_t)-1UL)
would be an appropriate definition for a compiler implementation.
To see the value for the preprocessor, observe that there (uintmax_t) is not a cast: uintmax_t is just an unknown identifier token inside (), and it evaluates to 0. The minus sign is then interpreted as binary minus, so we have 0-1UL, which is unsigned and thus the maximum value of the type. But that trick only works if the cast contains a single identifier token, not three as in your example, and only if the integer constant is preceded by a - or + sign.
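A self-contained sketch of both readings, assuming a platform where unsigned long and uintmax_t are both 64 bits wide (as on typical 64-bit Linux); MY_UINTMAX_MAX is a hypothetical stand-in for the implementation's macro:
#include <stdint.h>
#include <stdio.h>

#define MY_UINTMAX_MAX ((uintmax_t)-1UL)

/* preprocessor reading: uintmax_t becomes 0, so this is ((0)-1UL),
   i.e. 0-1UL, unsigned arithmetic, hence the maximum value */
#if MY_UINTMAX_MAX == 0xFFFFFFFFFFFFFFFF
/* taken on the assumed 64-bit platform */
#endif

int main(void)
{
    /* compiler reading: a real cast of -1UL to uintmax_t */
    printf("%ju\n", MY_UINTMAX_MAX);  /* 18446744073709551615 here */
    return 0;
}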
Upvotes: 3
Reputation: 63471
They are trying to ensure that the type of the value 0 is unsigned long. When you assign zero to a variable, it gets converted to the appropriate type. In this case, if 0 doesn't happen to be an unsigned long, then the ~ operator will be applied to whatever other type it happens to be, and the result of that will be converted.
This would be a problem if the compiler decided that 0 is a short or a char.
However, the type after the ~ operator should remain the same. So they are being overly cautious with the outer cast, but perhaps the inner cast is justified.
They could of course have specified the correct zero type to begin with by writing ~0UL.
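A small sketch comparing the spellings; on common two's complement targets all three print the same value, since ~0 and -1 coincide there:
#include <stdio.h>

int main(void)
{
    unsigned long a = ~0;                 /* ~ applied to int, then converted */
    unsigned long b = ~(unsigned long)0;  /* the cast forces ~ onto unsigned long */
    unsigned long c = ~0UL;               /* same as b, without a cast */
    printf("%lx %lx %lx\n", a, b, c);     /* identical on two's complement */
    return 0;
}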
Upvotes: 0