Sam

Reputation: 17

Difference between writing an integer in HEX and a real

I was given a task at university: given a number, display it in HEX as it is represented on the computer. I wrote a program that converts signed integers, and I also found how to print a real number in HEX, but the result looks different from the usual one.

For integers I use: printf("%#X", d);

For reals I use: printf("%#lX", r);

If I input 12, the first prints: 0xC

If I input 12.0, the second prints: 0x4028000000000000

Can you explain the difference and how it is calculated?

Upvotes: 0

Views: 711

Answers (3)

chqrlie

Reputation: 144969

Printing a double value r with the format %#lX actually has undefined behavior.

You have been lucky to get the 64 bits that represent the value 12.0 as a double, unless r has type unsigned long and was initialized from the value 12.0 this way:

unsigned long r;
double d = 12.0;
memcpy(&r, &d, sizeof r);
printf("%#lX", r);

But type unsigned long does not have 64 bits on all platforms; indeed it does not on the 32-bit Intel ABI. You should use the type uint64_t from <stdint.h> and the corresponding conversion format macro from <inttypes.h>:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <string.h>

int main() {
    int x = 12;
    printf("int: %#X  [", x);
    for (size_t i = 0; i < sizeof x; i++) {
        printf(" %02X", ((unsigned char *)&x)[i]);
    }
    printf(" ]\n");

    double d = 12.0;
    uint64_t r;
    memcpy(&r, &d, sizeof r);
    printf("double: %#"PRIX64" [", r);
    for (size_t i = 0; i < sizeof d; i++) {
        printf(" %02X", ((unsigned char *)&d)[i]);
    }
    printf(" ]\n");
    printf("sign bit: %d\n", (int)(r >> 63));
    printf("exponent: %d\n", (int)((r >> 52) & 2047));
    unsigned long long mantissa = r & ((1ULL << 52) - 1);
    printf("mantissa: %#llX, %.17f\n",
           mantissa, 1 + (double)mantissa / (1ULL << 52));
    return 0;
}

Output:

int: 0XC  [ 0C 00 00 00 ]
double: 0X4028000000000000 [ 00 00 00 00 00 00 28 40 ]
sign bit: 0
exponent: 1026
mantissa: 0X8000000000000, 1.50000000000000000

As explained in the article Double-precision floating-point format, this representation corresponds to a positive number with value 1.5 × 2^(1026−1023), i.e.: 1.5 × 8 = 12.0.

Upvotes: 2

chux

Reputation: 154075

Producing the hex encoding of a number amounts to a simple memory dump.

The process is not so different among types.


The code below passes the address of the object and its size to a function that forms a string for printing.

#include <stdio.h>
#include <assert.h>
#include <limits.h>

//                                    .... compound literal ....................
#define VAR_TO_STR_HEX(x) obj_to_hex((char [(sizeof(x)*CHAR_BIT + 3)/4 + 1]){""}, &(x), sizeof (x))

char *obj_to_hex(char *dest, void *object, size_t osize) {
  const unsigned char *p = (const unsigned char *) object;
  p += osize;
  char *s = dest;
  while (osize-- > 0) {
    p--;
    unsigned i = (CHAR_BIT + 3)/4;
    while (i-- > 0) {
      unsigned digit = (*p >> (i*4)) & 15;
      *s++ = "0123456789ABCDEF"[digit];
    }
  }
  *s = '\0';
  return dest;
}

int main(void) {
  double d = 12.0;
  int i = 12;
  printf("double %s\tint %s\n", VAR_TO_STR_HEX(d), VAR_TO_STR_HEX(i) );
  d = -d;
  i = -i;
  printf("double %s\tint %s\n", VAR_TO_STR_HEX(d), VAR_TO_STR_HEX(i) );
  return 0;
}

Output

double 4028000000000000 int 0000000C
double C028000000000000 int FFFFFFF4

With more complex objects, the output may include padding bits/bytes, and the output is sensitive to endianness.

Upvotes: 0

Chris Dodd

Reputation: 126418

The X format specifier expects an int or unsigned int argument. With the l modifier it expects a long or unsigned long int argument. If you call it with anything else (such as a double) you get undefined behavior.

If you want to print a hex float (with uppercase letters), use the %A format, which for 12.0 will print 0X1.8P+3, i.e. 1.5 × 2^3.

Upvotes: 2
