Reputation: 708
I am programming an Atmel SAMD20 in C. I came upon an error that I have now fixed, but I'm not quite sure why it happened in the first place. Can someone point it out to me? (It's probably far too obvious, and I'm going to facepalm later.)
An array of sensors is generating uint16_t data, which I converted to uint8_t to send over I2C. So, this is how I originally wrote it:
for (i = 0; i < SENSBUS1_COUNT; ++i)
{
write_buffer[ (i*2) ] = (uint8_t) sample_sensbus1[i] & 0xff;
write_buffer[(i*2)+1] = (uint8_t) sample_sensbus1[i] >> 8;
}
Here, write_buffer is uint8_t and sample_sensbus1 is uint16_t.
This, for some reason, ends up messing up the most significant byte (in most cases the most significant byte should just be 1, i.e. the value is around 0x100). This, on the other hand, works fine and is exactly what it should be:
for (i = 0; i < SENSBUS1_COUNT; ++i)
{
write_buffer[ (i*2) ] = sample_sensbus1[i] & 0xff;
write_buffer[(i*2)+1] = sample_sensbus1[i] >> 8;
}
Clearly, the implicit cast is smarter than I am.
What is going on?
Upvotes: 3
Views: 181
Reputation: 754910
Your cast converts the uint16_t to uint8_t before it does the shift or mask. It is treated as though you wrote:
write_buffer[ (i*2) ] = ((uint8_t)sample_sensbus1[i]) & 0xff;
write_buffer[(i*2)+1] = ((uint8_t)sample_sensbus1[i]) >> 8;
You might need:
write_buffer[ (i*2) ] = (uint8_t)(sample_sensbus1[i] & 0xff);
write_buffer[(i*2)+1] = (uint8_t)(sample_sensbus1[i] >> 8);
In practice, the uncast version is OK too. Remember, a cast tells the compiler "I know more about this than you do; do as I say". That's dangerous if you don't know more than the compiler. Avoid casts whenever you can.
You might also note that shifting (left or right) by the size of the type in bits (or more) is undefined behaviour. However, ((uint8_t)sample_sensbus1[i]) >> 8 is not undefined behaviour, because the 'usual arithmetic conversions' mean that the result of (uint8_t)sample_sensbus1[i] is converted to int before the shift occurs, and an int cannot be 8 bits wide (it must be at least 16 bits to satisfy the standard), so the shift is not too big.
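To see the difference concretely, here is a minimal, self-contained sketch (an added illustration, not from the original answer; it assumes a hosted environment with printf and uses an arbitrary sample value of 0x0123):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t sample = 0x0123;   /* arbitrary 16-bit sensor value */

    /* Cast first: the value is truncated to 0x23 before the shift,
       so the "high byte" comes out as 0. */
    uint8_t hi_wrong = (uint8_t) sample >> 8;

    /* Shift first: the full 16-bit value is shifted, then narrowed,
       so the high byte (0x01) survives. */
    uint8_t hi_right = (uint8_t) (sample >> 8);

    printf("cast first: 0x%02x, shift first: 0x%02x\n",
           (unsigned) hi_wrong, (unsigned) hi_right);
    return 0;
}
Compiled and run on a hosted system, this prints cast first: 0x00, shift first: 0x01.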
Upvotes: 4
Reputation: 18381
The problem is in this expression:
(uint8_t) sample_sensbus1[i] >> 8;
It is doing the following sequence:
1. Cast sample_sensbus1[i] to uint8_t, effectively truncating it to the 8 least significant bits. This is where you are losing your data.
2. Promote the result to int as a part of the usual arithmetic conversions, making an int with only the 8 lower bits set.
3. Shift the int right 8 bits, effectively making the whole expression zero.
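For concreteness, the same sequence traced with an arbitrary example value of 0x0123 (an added illustration, not part of the original answer; assumes <stdint.h> is included):
uint16_t x = 0x0123;
uint8_t  t = (uint8_t) x;   /* step 1: cast truncates to 0x23                */
int      p = t;             /* step 2: promoted to int, value is still 0x23  */
int      r = p >> 8;        /* step 3: shifted right 8 bits, result is 0     */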
Upvotes: 3
Reputation: 85887
Casting is a unary prefix operator and as such has very high precedence.
(uint8_t) sample_sensbus1[i] & 0xff
parses as
((uint8_t)sample_sensbus1[i]) & 0xff
In this case & 0xff is redundant. But:
(uint8_t) sample_sensbus1[i] >> 8
parses as
((uint8_t)sample_sensbus1[i]) >> 8
Here the cast truncates the number to 8 bits, then >> 8 shifts everything out.
Upvotes: 3
Reputation: 17060
write_buffer[(i*2)+1] = (uint8_t) sample_sensbus1[i] >> 8;
This is equivalent to:
write_buffer[(i*2)+1] = ((uint8_t) sample_sensbus1[i]) >> 8;
As you see, it does the cast before it does the shift. Your most significant byte is now gone.
This should work, though:
write_buffer[(i*2)+1] = (uint8_t) (sample_sensbus1[i] >> 8);
Upvotes: 4
Reputation: 8343
This is a question of operator precedence. In the first example, you are converting to uint8_t first and applying the & and >> operators second. In the second example, those operators are applied before the implicit conversion takes place.
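To spell that out with explicit grouping (an added illustration, not from the original answer):
write_buffer[(i*2)+1] = ((uint8_t) sample_sensbus1[i]) >> 8;  /* first example: the cast binds first, so the high byte is lost        */
write_buffer[(i*2)+1] = sample_sensbus1[i] >> 8;              /* second example: shift first, then the assignment narrows to uint8_t  */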
Upvotes: 3