Reputation: 197
For the moment I need to ignore portability issues and operate on the assumption that the long double data type is a consistent 16-byte value on every architecture, always stored in memory in the IEEE 754 quadruple-precision binary floating-point format. This means the leftmost bit is the sign bit, followed by 15 bits of exponent and then 112 bits to represent the significand. Not in decimal or BCD of course; this is all in binary.
Therefore the representation of the number three in memory looks like this :
(dbx) x &ld3 / 1 E
0xffffffff7ffff400: 0x40008000 0x00000000 0x00000000 0x00000000
I get that from my debugger due to this line in C :
long double ld3 = 3.0;
Here we see that the first 16 bits, that is to say the leftmost 16 bits, are:
0x4000 = 0100 0000 0000 0000 (binary)
Here the sign bit is zero, which indicates a positive value by the IEEE 754 rules, and then we have the 15 exponent bits 100 0000 0000 0000. It took me a while to read, and then re-read, the docs at:
http://en.wikipedia.org/wiki/Quadruple_precision
That exponent field, taken at face value, is 2^14, which is 16,384. However there is a bias (a "zero offset") of (2^14) - 1 = 16,383, thus my exponent above is actually just 16,384 - 16,383 = 1. Good stuff so far.
The data in the actual fraction area, the next 112 bits, is 0x8000 0000 ... 0000, which looks to be 1000 0000 ... 0000, and that just seems wrong. A binary value for three should be two one-bits followed by a stack of zeros. So that bothers me.
So I wanted to write a bit of code that would print out a long double variable as a sequence of hexadecimal bytes. However, I ran into the problem of actually getting at those bytes as offsets from the address of the variable.
I try this :
uint8_t j = 0;
j = ( (uint8_t *)(&ld3) + 2 );
fprintf ( stdout , " &ld3 = %p\n", &ld3 );
fprintf ( stdout , "byte 2 of ld3 = 0x%02x\n", j );
I see this :
&ld3 = ffffffff7ffff4b0
byte 2 of ld3 = 0xb2
I don't know where that is coming from but it can't be from within the memory region occupied by the variable ld3.
So I was hoping to have a for loop which walked across the 16 bytes of a long double and just printed out the hex values at each byte but I think I am getting something terribly wrong about casting a pointer or address or something.
So I guess the question is ... what is the magic foo to get at a specific byte within a long double variable?
Follow-up: armed with the advice provided below, I am able to show this working case, where we see a little bit of noise or error in the last two bits.
#include <ctype.h>
#include <errno.h>
#include <math.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
long double two_ld = 2.0;
long double e64_ld;
uint8_t k;
e64_ld = expl(logl(two_ld)*(long double)64.0);
fprintf ( stdout , " sizeof (_ld) = %zu\n", sizeof(e64_ld) );
fprintf ( stdout , " ( 2 ^ 64 ) = %36.32Le\n", e64_ld );
fprintf ( stdout , " = 0x" );
for ( k = 0; k<16; k++ )
fprintf ( stdout , "%02x ", * ( (uint8_t *)(&e64_ld) + k ) );
fprintf ( stdout , "\n" );
return ( EXIT_SUCCESS );
}
where the output shows that the bits on the rightmost side have values that really should not be there:
sizeof (_ld) = 16
( 2 ^ 64 ) = 1.84467440737095516160000000000000e+19
= 0x40 3f 00 00 00 00 00 00 00 00 00 00 00 00 00 02
Still, not bad given that I calculated 2^64 using logarithms.
Upvotes: 1
Views: 569
Reputation: 52592
Your assumption is quite wrong. I have encountered three different long double representations on popular computers:
long double = 64 bit IEEE format (ARM processor, PowerPC, x86 with certain compilers)
long double = 80 bit IEEE format plus 48 unused bits (x86 with different compilers)
long double = two 64 bit IEEE numbers (PowerPC with certain compiler options). Here a long double is a pair of doubles (x, y) with the property that round (x + y) = x.
Upvotes: 0
Reputation: 213059
You are missing a dereference and so are trying to assign a pointer to a char. This is why you see b2 (LSB of the address = b0, plus 2 = b2).
Change:
j = ( (uint8_t *)(&ld3) + 2 );
to:
j = * ( (uint8_t *)(&ld3) + 2 );
    ^
    missing dereference
or perhaps consider the more easily read form:
j = ((uint8_t *)&ld3)[2];
Note also that if you had enabled compiler warnings (e.g. gcc -Wall ...) then the compiler would have notified you of the problem. Always compile with warnings enabled and take notice of any warnings.
Upvotes: 2