Reputation: 5030
Here is my code:
long x1 = -123;
long y1 = -312;
long x2 = -111;
long y2 = -112;
long packed = x1 | y1 << 15 | x2 << 30 | y2 << 45;
Debug.log("x1:" + ((packed) & 0b111111111111111));
Debug.log("y1:" + ((packed >> 15) & 0b111111111111111));
Debug.log("x2:" + ((packed >> 30) & 0b111111111111111));
Debug.log("y2:" + ((packed >> 45) & 0b111111111111111));
I need x1, y1, x2, y2 to hold values up to 16384 (2^14). With a sign bit added, that's 15 bits each. Why am I getting incorrect values?
Upvotes: 0
Views: 254
Reputation:
Try this
long pack(long v, int shift) {
    // Keep only the low 15 bits, then move them into position.
    return (v & 0b111111111111111) << (shift * 15);
}

long unpack(long v, int shift) {
    long r = v >> (shift * 15) & 0b111111111111111;
    // If the 15-bit sign bit is set, sign-extend to a full long.
    if ((r & 0b100000000000000) != 0)
        r |= ~0b111111111111111;
    return r;
}
and
long x1 = -123;
long y1 = -312;
long x2 = -111;
long y2 = -112;
long packed = pack(x1, 0) | pack(y1, 1) | pack(x2, 2) | pack(y2, 3);
Debug.log("x1:" + unpack(packed, 0));
Debug.log("y1:" + unpack(packed, 1));
Debug.log("x2:" + unpack(packed, 2));
Debug.log("y2:" + unpack(packed, 3));
generalized version:
long pack(long v, int width, int shift) {
    // -1L (not -1) so the unsigned shift happens in 64-bit arithmetic.
    long mask = -1L >>> (Long.SIZE - width);
    return (v & mask) << (shift * width);
}

long unpack(long v, int width, int shift) {
    long mask = -1L >>> (Long.SIZE - width);
    long r = v >> (shift * width) & mask;
    // Sign-extend if the field's top bit is set.
    if ((r & (1L << (width - 1))) != 0)
        r |= ~mask;
    return r;
}
Specify 15 for the width argument in this case.
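A self-contained round trip of the generalized version (the wrapper class and `main` method are added here for illustration; note the `L` suffixes so the mask and sign-bit shifts happen in 64-bit arithmetic):

```java
public class PackDemo {
    static long pack(long v, int width, int shift) {
        long mask = -1L >>> (Long.SIZE - width);
        return (v & mask) << (shift * width);
    }

    static long unpack(long v, int width, int shift) {
        long mask = -1L >>> (Long.SIZE - width);
        long r = v >> (shift * width) & mask;
        if ((r & (1L << (width - 1))) != 0)
            r |= ~mask; // sign-extend negative values
        return r;
    }

    public static void main(String[] args) {
        long packed = pack(-123, 15, 0) | pack(-312, 15, 1)
                    | pack(-111, 15, 2) | pack(-112, 15, 3);
        System.out.println(unpack(packed, 15, 0)); // -123
        System.out.println(unpack(packed, 15, 1)); // -312
        System.out.println(unpack(packed, 15, 2)); // -111
        System.out.println(unpack(packed, 15, 3)); // -112
    }
}
```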
Upvotes: 1
Reputation: 25013
I see you deduced (possibly from IronMensan's answer) that the extra set bits were the problem and need to be masked off.
As penance for my back-to-front comment, I wrote some code and generalised it a bit to reduce the use of "magic numbers". It's in C#, but that's quite similar to Java for this purpose:
long x1 = -345;
long y1 = 299;
long x2 = -111;
long y2 = -112;
int bitLengthToPack = 15;
long signBit = 1 << (bitLengthToPack - 1);
long dataBits = (signBit << 1) - 1;
long upperBits = ~dataBits;
long packed = (x1 & dataBits)
            | (y1 & dataBits) << bitLengthToPack
            | (x2 & dataBits) << (bitLengthToPack * 2)
            | (y2 & dataBits) << (bitLengthToPack * 3);
long x1e = packed & dataBits;
if ((x1e & signBit) > 0)
{
    x1e = x1e | upperBits;
}
Console.WriteLine("x1e: " + x1e);
long y1e = (packed >> bitLengthToPack) & dataBits;
if ((y1e & signBit) > 0)
{
    y1e = y1e | upperBits;
}
Console.WriteLine("y1e: " + y1e);
long x2e = (packed >> (bitLengthToPack * 2)) & dataBits;
if ((x2e & signBit) > 0)
{
    x2e = x2e | upperBits;
}
Console.WriteLine("x2e: " + x2e);
long y2e = (packed >> (bitLengthToPack * 3)) & dataBits;
if ((y2e & signBit) > 0)
{
    y2e = y2e | upperBits;
}
Console.WriteLine("y2e: " + y2e);
If you had a smaller range of values, you could reduce the value of bitLengthToPack. I couldn't think of a way to remove the conditionals from the unpacking; perhaps someone more ingenious than me could.
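One well-known branchless alternative for the sign extension (not from the original answer; shown in Java, though the same trick works in C#): XOR the extracted field with its sign bit, then subtract the sign bit.

```java
public class SignExtendDemo {
    // Branchless sign extension for a value holding `width` significant bits.
    // XOR moves the sign bit's weight; the subtraction restores it as a
    // negative contribution when the bit was set, and cancels otherwise.
    static long signExtend(long r, int width) {
        long signBit = 1L << (width - 1);
        return (r ^ signBit) - signBit;
    }

    public static void main(String[] args) {
        long mask = (1L << 15) - 1;
        System.out.println(signExtend(-123 & mask, 15)); // -123
        System.out.println(signExtend(299 & mask, 15));  // 299
    }
}
```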
Upvotes: 1
Reputation: 5030
I managed to solve it this way:
long packed = ((x1 > 0 ? x1 : -x1) & 0b11111111111111) | ((x1 > 0 ? 1L : 0L) << 14);
long decodedX1 = packed & 0b11111111111111;
if (((packed >> 14) & 0b1) == 0)
    decodedX1 = -decodedX1;
I know, it looks nasty: we force the value to be positive and then manually store a sign bit (1 for positive, 0 for negative).
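The same sign-magnitude idea, wrapped in helper methods for a single 15-bit slot (a sketch; the method names are made up):

```java
public class SignMagnitudeDemo {
    static final long MAG_MASK = 0b11111111111111; // 14 magnitude bits

    // Store |v| in the low 14 bits and a "positive" flag in bit 14.
    static long encode(long v) {
        return ((v > 0 ? v : -v) & MAG_MASK) | ((v > 0 ? 1L : 0L) << 14);
    }

    static long decode(long slot) {
        long magnitude = slot & MAG_MASK;
        return ((slot >> 14) & 1) == 0 ? -magnitude : magnitude;
    }

    public static void main(String[] args) {
        System.out.println(decode(encode(-123))); // -123
        System.out.println(decode(encode(299)));  // 299
    }
}
```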
Upvotes: 1
Reputation: 6821
You need to mask off the bits you're interested in before packing the values together. Most of the bits in -111 as a long are set and they're getting or'ed in with the other values you're packing.
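A quick way to see this (a minimal Java sketch, not part of the original answer): print the two's-complement bits of one of the negative values before and after shifting.

```java
public class WhyItBreaks {
    public static void main(String[] args) {
        long y1 = -312;
        // As a 64-bit two's-complement value, almost every high bit is set:
        System.out.println(Long.toBinaryString(y1));
        // Shifting left still leaves ones above bit 29, so when OR'ed into
        // the packed long they corrupt the x2 and y2 fields:
        System.out.println(Long.toBinaryString(y1 << 15));
    }
}
```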
Using a bitfield might be easier for you and you can let the compiler deal with all the masking and shifting for you:
#include <cstdio>

struct packedCoords
{
    long long x1 : 15;
    long long y1 : 15;
    long long x2 : 15;
    long long y2 : 15;
};

packedCoords test;
test.x1 = -123;
test.y1 = -312;
test.x2 = -111;
test.y2 = -112;
printf("sizeof packedCoords = %zu\n", sizeof(packedCoords));
printf("x1: %d\n", static_cast<int>(test.x1));
printf("y1: %d\n", static_cast<int>(test.y1));
printf("x2: %d\n", static_cast<int>(test.x2));
printf("y2: %d\n", static_cast<int>(test.y2));
Upvotes: 1