Reputation: 552
I have this code that I use to get the mantissa (the value with the implicit leading 1 restored) of an IEEE binary floating-point number.
iFloat_t floatGetVal(iFloat_t x) {
    iFloat_t mantissa = (BITS == 16) ? (x & 0x03FF)
                                     : (x & 0x007FFFFF);
    debug("%s: getVal before implicit 1", getBinary(mantissa));
    //mantissa = (BITS == 16) ? (mantissa | 0x04)
    //                        : (mantissa | 0x008);
    mantissa = x | 0000010000000000;
    debug("%s: getVal after implicit 1", getBinary(mantissa));
    mantissa = (BITS == 16) ? (mantissa & 0x07FF)
                            : (mantissa & 0x00FFFFFF);
    if (floatGetSign(x) == 1) {
        mantissa = ~mantissa + 1;
    }
    return mantissa;
}
My issue shows up when I try to get the value of the number 63.125. Here is the corresponding output:
DEBUG iFloat.c[31] floatGetVal() 0000-0011-1110-0100: getVal before implicit 1
DEBUG iFloat.c[35] floatGetVal() 0101-0011-1110-0100: getVal after implicit 1
DEBUG iFloat.c[81] floatAdd() 0000-0011-1110-0100: bits of val_y after assignment
This is my expected output:
DEBUG iFloat.c[31] floatGetVal() 0000-0011-1110-0100: getVal before implicit 1
DEBUG iFloat.c[35] floatGetVal() 0101-0111-1110-0100: getVal after implicit 1
DEBUG iFloat.c[81] floatAdd() 0000-0111-1110-0100: bits of val_y after assignment
Here is my full code:
Upvotes: 0
Views: 143
Reputation: 15278
0000010000000000 is an octal literal, not a binary one; use the hexadecimal 0x0400 instead.
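In C, any integer literal that starts with 0 is parsed in base 8, so 0000010000000000 is read as octal 10000000000, i.e. 8^10 = 0x40000000. That sets bit 30 rather than bit 10, and when the result is assigned back to a (presumably 16-bit) iFloat_t, bit 30 is truncated away, which is why your "after implicit 1" value is just x unchanged. Here is a minimal, self-contained sketch of the pitfall and the fix, assuming a 16-bit half-precision layout; the bit pattern 0x53E4 is taken from your debug output for 63.125:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The leading zeros make the literal octal: 8^10 == 0x40000000. */
    printf("%#x\n", 0000010000000000);   /* 0x40000000, bit 30 */
    printf("%#x\n", 0x0400);             /* 0x400, bit 10, as intended */

    /* Bits of 63.125 in half precision, from the question's debug output. */
    uint16_t x = 0x53E4;                 /* 0101-0011-1110-0100 */
    uint16_t mantissa = x & 0x03FF;      /* isolate the 10-bit mantissa field */
    mantissa |= 0x0400;                  /* set the implicit leading 1 */
    printf("%#x\n", mantissa);           /* 0x7e4 -> 0000-0111-1110-0100 */
    return 0;
}

With 0x0400 the value after the OR becomes 0000-0111-1110-0100, matching your expected output. Note that ORing into mantissa rather than x, as your commented-out lines attempt, also avoids relying on the later mask to strip the sign and exponent bits.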
Upvotes: 7