Reputation: 11
I am relatively new to C code and am trying to interpret the outcome of these lines of C that are doing some casting.
ASS_CODE is a character string of up to 4 characters that may or may not be numeric, ASCII_ZERO is a constant defined as 48, and ASCII_ONE is a constant defined as 49. I'm guessing that this is either trying to force some sort of true or false outcome based on whether ASS_CODE is numeric or not, or alternatively testing whether the string is '0' or '1', but I am not entirely sure.
if ((int)(*ASS_CODE) == ASCII_ZERO)
{
    calc_SDIS_EL13();
}
else if ((int)(*ASS_CODE) == ASCII_ONE)
{
    do_gn11();
}
else
{
    ........
}
}
Upvotes: 0
Views: 90
Reputation: 153457
If char, as in *ASS_CODE, is a signed char or an unsigned char narrower than int (one of these two is almost always the case), it will be promoted to int. The integer constant 48 is of type int, and so the cast (int) is unnecessary.
On rare platforms, char, as in *ASS_CODE, is an unsigned char as wide as int/unsigned. There, *ASS_CODE is promoted to unsigned as part of the pending compare operation. Comparing an unsigned to the integer constant 48 (which is an int) may warn about comparing int to unsigned. The cast in this case will quiet the warning.
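A minimal sketch of the common case (the string contents here are stand-ins, not from the question): where char is narrower than int, *ASS_CODE is promoted to int before == is evaluated either way, so the cast changes nothing.

#include <stdio.h>

#define ASCII_ZERO 48

int main(void)
{
    const char *ASS_CODE = "0123";                 /* stand-in for the real string */

    /* char is promoted to int before == is evaluated, cast or no cast */
    int with_cast    = ((int)(*ASS_CODE) == ASCII_ZERO);
    int without_cast = (*ASS_CODE == ASCII_ZERO);

    printf("%d %d\n", with_cast, without_cast);    /* prints: 1 1 */
    return 0;
}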
Upvotes: 3
Reputation: 180201
I am [...] trying to interpret the outcome of these lines of C that are doing some casting.
[...] ASS_CODE is a character string of up to 4 characters that may or may not be numeric, ASCII_ZERO is a constant defined as 48
[...]
if ((int)(*ASS_CODE) == ASCII_ZERO)
If ASS_CODE has type char * or type char[n] for some n, then *ASS_CODE has type char, and evaluates to the same thing as ASS_CODE[0] -- i.e., the first character of the string. The literal 48 to which macro ASCII_ZERO expands represents a value of type int; specifically, it is the value of the '0' character in ASCII-compatible character encodings.
Cast operators have higher precedence than the == operator, so in evaluating the condition, the value of *ASS_CODE (a char) is first converted to type int and then compared to the right-hand operand (already an int). The condition evaluates to 1 (true) when the first character of ASS_CODE is the digit '0' (assuming an ASCII-compatible encoding). It is conceivable that ASCII_ZERO is used instead of a character literal ('0') because a comparison specifically to the ASCII sense of 0 is desired even when a different character encoding is in use. In any event, that is the effect that will be produced.
I speculate that the reason for the cast was to make both operands have the same type, or maybe just to emphasize that the comparison was being performed on int values. It has no effect whatever on the semantics of the program, however, because the operands of the == operator are subject to "the usual arithmetic conversions" before the operation is evaluated anyway, and in this case, that would involve converting the left-hand operand to int even without the explicit cast.
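A small sketch to make that concrete (it assumes an ASCII-compatible encoding and a 4-character array named ASS_CODE, as in the question):

#include <stdio.h>

#define ASCII_ZERO 48

int main(void)
{
    char ASS_CODE[4] = "0AB";                        /* first character is '0' */

    printf("%d\n", *ASS_CODE == ASS_CODE[0]);        /* 1: same element */
    printf("%d\n", (int)(*ASS_CODE) == ASCII_ZERO);  /* 1: the condition from the question */
    printf("%d\n", *ASS_CODE == '0');                /* 1: equivalent on ASCII, no cast needed */
    return 0;
}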
Upvotes: 0
Reputation: 41625
char ASS_CODE[4] = { ... };
#define ASCII_ZERO 48
if ((int)(*ASS_CODE) == ASCII_ZERO)
Apparently, this is code for a non-ASCII execution character set, or some developer who is very careful. Otherwise the ASCII_ZERO constant would not be necessary.
My first thought was that the type char was equal to unsigned char on that platform, and that the cast would be needed to suppress a compiler warning. But no matter how I define the constant, GCC doesn't emit any warning. I tried this:
enum { ASCII_ZERO = 48 };

void x() {
    char ASS_CODE[4] = {0};
    if (ASS_CODE[0] == ASCII_ZERO) {} else {}
}
The next idea is that the code must conform to some industry standard, like MISRA. Such a standard might require that the left-hand side and the right-hand side of every binary operator have the same type. And since 48 has type int, the character must be cast to exactly that type.
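A sketch of that reading (this is speculation about a coding-standard rule, not something the question confirms, and the helper function name is made up):

#define ASCII_ZERO 48

/* Hypothetical helper: under a rule that both operands of == must have the
   same type, the char operand is cast to int to match the int constant 48. */
int first_char_is_ascii_zero(const char *code)
{
    return (int)(*code) == ASCII_ZERO;
}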
Upvotes: 0