Reputation: 3256
For the last day I have had a nasty bug in my code which, after some searching, appears to be related to comparisons between char values and hex constants. My compiler is gcc 4.4.1 running on Windows. I replicated the problem in the simple code below:
char c1 = 0xFF; char c2 = 0xFE;
if(c1 == 0xFF && c2 == 0xFE)
{
//do something
}
Surprisingly, the code above does not enter the if block. I have absolutely no idea why and would really appreciate some help on this. It is so absurd that the cause must be (as always) a huge mistake on my part that I totally overlooked.
If I replace the above with unsigned chars it works, but only in some cases, and I am struggling to find out what is going on. In addition, if I cast the hex values to char in the comparison, it enters the if block correctly, like so:
if(c1 == (char)0xFF && c2 == (char)0xFE)
{
//do something
}
What does that mean? Why does it happen? Isn't the raw hex value interpreted as a char by default? For the curious, the point in my code where I first noticed it is a comparison of the first 2 bytes of a stream with the above hex values and their reverse in order to identify the Byte Order Mark.
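For reference, a stripped-down sketch of that check looks roughly like this (simplified; the file name and stream handling are only illustrative, not my actual code):
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("input.txt", "rb");   /* placeholder file name */
    if (f == NULL)
        return 1;

    /* Read the first two bytes into plain char -- this is where it misbehaves. */
    char b0 = (char)fgetc(f);
    char b1 = (char)fgetc(f);

    if (b0 == 0xFF && b1 == 0xFE)
        printf("UTF-16 little-endian BOM\n");
    else if (b0 == 0xFE && b1 == 0xFF)
        printf("UTF-16 big-endian BOM\n");
    else
        printf("no UTF-16 BOM\n");

    fclose(f);
    return 0;
}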
Any help is appreciated.
Upvotes: 13
Views: 33794
Reputation: 1717
When comparing a char to a hex constant you must be careful; see:
Using the == operator to compare a char to 0x80 always results in false?
To be sure, I would recommend writing the constant as a character literal with a hex escape:
if(c1 == '\xFF' && c2 == '\xFE')
{
// do something
}
Avoid the cast; it's unnecessary and isn't type safe.
The '\xFF' form tells the compiler that the constant is a char value rather than an int, which will solve your issue.
The clang compiler will also warn you about this:
comparison of constant 128 with expression of type 'char' is always false [-Werror,-Wtautological-constant-out-of-range-compare]
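As a sketch, a program along these lines should reproduce that diagnostic on a target where plain char is signed (the file name is just an example):
/* warn.c -- compile with clang; recent versions emit
   -Wtautological-constant-out-of-range-compare for the first comparison
   when plain char is signed. */
#include <stdio.h>

int main(void)
{
    char c1 = 0xFF;

    if (c1 == 0xFF)          /* compares -1 with 255: always false, clang warns */
        printf("matched int constant\n");

    if (c1 == '\xFF')        /* the char-constant form compares equal here */
        printf("matched char constant\n");

    return 0;
}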
Upvotes: 9
Reputation: 9
I solved a similar problem by casting my variables to UINT16 before the comparison (that type is efficient on my compiler). In your case, casting c1 and c2 to unsigned char makes the comparison work:
char c1 = 0xFF; char c2 = 0xFE;
if((unsigned char)c1 == 0xFF && (unsigned char)c2 == 0xFE)
{
//do something
}
Upvotes: 0
Reputation: 881373
The literal 0xff is not a char, it's an int (signed). When you shoehorn that into a char variable it will fit okay, but, depending on whether your char type is signed or unsigned, this affects how it's promoted in expressions (see below).
In an expression like if (c1 == 0xff), the c1 variable will be promoted to an int, since that's what 0xff is. And what it's promoted to depends on whether it's signed or not.
Bottom line, you can do one of two things.
Ensure that you use unsigned char so that it promotes to the value the literal actually has. By that I mean an unsigned char holding 0xff would become (for a 4-byte int) 0x000000ff (so it's still 255 and matches the literal), but a signed one would sign-extend to 0xffffffff (so it's -1 and does not match).
Or shoehorn the literal into the same type as the variable, which you're already doing with (char)0xff.
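As a rough illustration of that promotion (assuming a typical two's-complement target with a 4-byte int, such as gcc on Windows):
#include <stdio.h>

int main(void)
{
    signed char   sc = 0xFF;   /* stored as -1 on common implementations */
    unsigned char uc = 0xFF;   /* stored as 255 */

    /* Both operands are promoted to int before the comparison (and the printf call). */
    printf("signed   char promotes to 0x%08x (%d)\n", (unsigned int)sc, (int)sc);
    printf("unsigned char promotes to 0x%08x (%d)\n", (unsigned int)uc, (int)uc);

    printf("sc == 0xFF : %s\n", sc == 0xFF ? "true" : "false");   /* false */
    printf("uc == 0xFF : %s\n", uc == 0xFF ? "true" : "false");   /* true  */

    return 0;
}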
Upvotes: 4
Reputation: 59997
When plain char is signed, the char holding 0xFE will be converted to a negative integer (-2). The constants in the expression are positive integers, so the comparison fails.
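A tiny example of what that looks like (assuming plain char is signed, as with this gcc setup):
#include <stdio.h>

int main(void)
{
    char c2 = 0xFE;
    /* Prints "c2 = -2, 0xFE = 254" when plain char is signed. */
    printf("c2 = %d, 0xFE = %d\n", c2, 0xFE);
    return 0;
}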
Upvotes: 0
Reputation: 753675
Plain char can be signed or unsigned. If the type is unsigned, then all works as you'd expect.
If the type is signed, then assigning 0xFF to c1 means that the value will be promoted to -1 when the comparison is executed, but the 0xFF is a regular positive integer, so the comparison -1 == 0xFF fails.
Note that the types char, signed char and unsigned char are distinct, but two of them have the same representation (and one of the two is char).
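A small sketch to check which of the two cases a given compiler is in:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    char c1 = 0xFF;

    /* CHAR_MIN is 0 when plain char is unsigned and negative when it is signed. */
    printf("plain char is %s\n", CHAR_MIN < 0 ? "signed" : "unsigned");

    /* Signed: compares -1 == 255 and fails.  Unsigned: compares 255 == 255 and succeeds. */
    printf("c1 == 0xFF is %s\n", c1 == 0xFF ? "true" : "false");

    return 0;
}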
Upvotes: 13