Reputation: 153
I have this struct:
struct block {
    uint8_t *tBlock;
};
This struct will hold 1024 bytes, so tBlock = malloc(1024).
I have an integer that I want to write into 4 bytes, tBlock[0] to tBlock[3], in little endian. I have this:
uint8_t little[4];
void inttolitend(uint32_t x, uint8_t* lit_int){
    lit_int[3] = (uint8_t)x / (256*256*256);
    lit_int[2] = (uint8_t)(x % (256*256*256)) / (256*256);
    lit_int[1] = (uint8_t)((x % (256*256*256)) % (256*256)) / 256;
    lit_int[0] = (uint8_t)((x % (256*256*256)) % (256*256)) % 256;
}
But when I do:
int x = 7;
inttolitend(x, little);
I got little[0] = 7, little[1] = 0, little[2] = 0 and little[3] = 0, so my converter totally fails. How can I write 7 into 4 uint8_t bytes?
Upvotes: 3
Views: 11668
Reputation: 18533
Here is the standard way to do it - nice and concise:
void inttolitend(uint32_t x, uint8_t *lit_int) {
    lit_int[0] = (uint8_t)(x >> 0);
    lit_int[1] = (uint8_t)(x >> 8);
    lit_int[2] = (uint8_t)(x >> 16);
    lit_int[3] = (uint8_t)(x >> 24);
}
Or using arithmetic similar to your question:
void inttolitend(uint32_t x, uint8_t *lit_int) {
    lit_int[0] = (uint8_t)(x % 256);
    lit_int[1] = (uint8_t)(x / 256 % 256);
    lit_int[2] = (uint8_t)(x / 256 / 256 % 256);
    lit_int[3] = (uint8_t)(x / 256 / 256 / 256 % 256);
}
Addendum:
The reverse conversion - idiomatic:
uint32_t litendtoint(uint8_t *lit_int) {
    return (uint32_t)lit_int[0] << 0
         | (uint32_t)lit_int[1] << 8
         | (uint32_t)lit_int[2] << 16
         | (uint32_t)lit_int[3] << 24;
}
Or using arithmetic similar to your question:
uint32_t litendtoint(uint8_t *lit_int) {
    return (uint32_t)lit_int[0]
         + (uint32_t)lit_int[1] * 256
         + (uint32_t)lit_int[2] * 256 * 256
         + (uint32_t)lit_int[3] * 256 * 256 * 256;
}
Upvotes: 5
Reputation: 11406
void inttolitend(uint32_t x, uint8_t *lit_int){
    lit_int[0] = x & 0xff;
    lit_int[1] = (x >> 8) & 0xff;
    lit_int[2] = (x >> 16) & 0xff;
    lit_int[3] = (x >> 24) & 0xff;
}
As for "I got little[0] = 7, little[1] = 0, little[2] = 0 and little[3] = 0": by the way, that is the correct little-endian representation of 7.
Upvotes: 3
Reputation: 153457
OP's lit_int[3] = (uint8_t)x / (256*256*256);
mistakenly performs the cast before the division, so x is truncated to 8 bits before anything else happens.
void inttolitend(uint32_t x, uint8_t* lit_int){
    lit_int[3] = (uint8_t) (x / 16777216);
    lit_int[2] = (uint8_t) (x / 65536);
    lit_int[1] = (uint8_t) (x / 256);
    lit_int[0] = (uint8_t) x;
}
Calling int x = 7; inttolitend(x, little); is a problem if int is not the same as int32_t.
Also, 256*256*256 overflows on systems where int is 16 bits.
Upvotes: 2