Kenneth

Reputation: 1167

Nitpicking booleans in C

I was reading comp.lang.c's description of boolean values, pre-C99. It mentions that some people prefer to define their own boolean values as:

#define TRUE (1==1)
#define FALSE (!TRUE)

However, the standard defines the equality operator to always return a signed int with a value of 1 when two values compare equal (C11 - 6.5.9), and the logical not operator shall return an int with a value of 0 if its operand compares unequal to 0 (C11 - 6.5.3.3).

If this is the case and the above definitions use literals, won't the evaluation happen at compile time, making the resulting definitions equivalent to:

#define TRUE (1)
#define FALSE (0)

And a follow-up question. Is there any case where it makes sense to define the true and false labels to anything other than 1 and 0, respectively?

And pardon that I reference C11 when my question concerns C89, but I only have the C11 standard at hand.

Upvotes: 2

Views: 113

Answers (1)

user743382

Reputation:

(1==1) and (!TRUE) are useful definitions on some compilers (I don't have a concrete example off the top of my head) that track whether an integer came from a boolean comparison. This enables them to warn for

if (i)

while at the same time not warning for

if (i != 0)

and also not warning for

j = i != 0;
if (j)

even though in all three cases, the conditional is a non-constant int.

This way, no warning would be generated for int b = TRUE;...if (b), since b would be considered a truth-integer.

You can make a legitimate argument that such warnings are useless, but others can make an equally legitimate argument that they have value. The warning would produce many false positives in common code, but code written to avoid it may end up more readable.

At the same time, such definitions are harmless for other compilers that do not track this, since they just see constant expressions that evaluate to 1 and 0.

Upvotes: 3
