Reputation: 2613
I have a 10-bit signed value. It's actually a 10-bit color value that I want to convert to an int. The 10 bits are to be interpreted as a signed value within a larger int, where the other bits are zero. For example, an input of 0x3FE should come out as the int -2.
I can think of 3 ways, shown here:
#include <stdio.h>
void p(int v)
{
printf("\t%d", v);
}
int main()
{
for (int i = -2; i < 3; ++i)
{
unsigned int u = i & 0x3ff;
int v;
// method 1: test the bit
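// (if bit 9 is set, XOR with -0x400 = 0xFFFFFC00 flips bits 10..31
//  from 0 to 1, sign-extending; bits 0..9 are left unchanged)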
v = u;
if (u & 0x200) v = v ^ -0x400;
p(v);
// method 2: shift
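// (push the 10 bits to the top of the int, then shift back down;
//  relies on arithmetic right shift of a negative signed value,
//  which is implementation-defined but what common compilers do)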
v = u<<(sizeof(int)*8-10);
v >>= (sizeof(int)*8-10);
p(v);
// method 3: use sign extend
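// (cast the top 8 of the 10 bits to signed char so the compiler
//  sign-extends them, then shift back up and OR in the low 2 bits)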
v = (signed char)(u >> 2);
v <<= 2;
v |= u;
p(v);
printf("\n");
}
return 0;
}
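For reference, on a typical two's-complement machine where right-shifting a negative signed value is arithmetic (which method 2 relies on), all three methods agree and the program prints:
	-2	-2	-2
	-1	-1	-1
	0	0	0
	1	1	1
	2	2	2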
Is there a better way? Often there is with these bit-twiddling problems.
Thanks for any input.
Adding two more methods. The first, suggested by @cruz-jean, is bit fields; the second was suggested by gcc's output after compiling the bit fields.
// method 4: bit fields
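// (assign u into a 10-bit signed bit field and read it back; the
//  read sign-extends. Storing an out-of-range value in a signed
//  bit field is implementation-defined, but behaves as expected
//  in practice)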
struct { int v:10; } f;
f.v = u;
v = f.v;
// method 5: sign extend short
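// (assumes a 16-bit short: the shift by 6 puts bit 9 into the
//  short's sign bit, and the cast back to int sign-extends)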
v = (signed short)(u << 6);
v >>= 6;
Out of interest, it appears MSVC compiles #4 into #2 and gcc compiles #4 into #5.
Method 5 looks good.
Upvotes: 4
Views: 4616
Reputation: 126243
Another one to try:
v = u | (0 - (u&0x200));
Good for CPUs where shifts are slow.
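This works because u & 0x200 isolates the sign bit, so 0 - (u & 0x200) is 0 when that bit is clear and 0xFFFFFE00 (bits 9..31 set) when it is set; the OR then fills in the upper bits only for negative values. A minimal sketch wrapping the trick as a function (the name sign_extend_10 is mine, not from the answer):
static int sign_extend_10(unsigned int u)
{
    /* mask is 0 for non-negative inputs, 0xFFFFFE00 for negative ones */
    unsigned int mask = 0u - (u & 0x200);
    return (int)(u | mask);   /* e.g. sign_extend_10(0x3FE) == -2 */
}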
Upvotes: 4
Reputation: 2819
You could declare a 10-bit signed bitfield helper type and access the value as an int:
struct signed_10_bit
{
int val : 10;
};
int some_10_bit_num = 0x3ff;
int extended = signed_10_bit{ some_10_bit_num }.val;
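Note that signed_10_bit{ some_10_bit_num }.val is C++ syntax (aggregate-initializing a temporary). In C, the same idea would use a compound literal; this adaptation is mine, not from the answer:
int extended = (struct signed_10_bit){ some_10_bit_num }.val;   /* -1, since 0x3ff is the 10-bit pattern for -1 */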
Upvotes: 3