Jim McAdams

Reputation: 1104

C++: Why is the value assignment interpretation always int?

I'd like to assign a value to a variable like this:

double var = 0xFFFFFFFF;

As a result var gets the value 4294967295.0 assigned. Since the compiler assumes a 64-bit target system, the number literal (i.e. all respective 32 bits) is interpreted as significand precision bits. However, since 0xFFFF FFFF is just a notation for a bit pattern, without any hint about the representation, it could be interpreted quite differently w.r.t. becoming a floating point value. Thus, I was wondering if there is a way to manipulate this fixed interpretation of the value. In other words, give a hint about the desired representation. (Maybe someone could also point me to the part of the standard where this implicit interpretation is defined.)

So far, the default precision interpretation on my system seems to be

(int)0xFFFFFFFF × 10^0.

Only the fraction field is getting filled.¹

So maybe (here: for 16 bit cross-compilation) I want it to be a different representation like:

(int)0xFFFFFF × 10^(int)0xFF

(ignoring the sign bit for a moment).

Thus my question: How can I force a custom double interpretation of the hex literal notation?


¹ Even when my hex literal is 0xFFFF FFFF FFFF FFFF, the value is only interpreted as the fraction part, even though bits should clearly be used for the exponent and sign fields. It seems the literal just gets cut off.

Upvotes: 1

Views: 529

Answers (3)

TheCppZoo

Reputation: 1242

There does not seem to be a direct way to initialize a double variable with a hexadecimal pattern: a C-style cast is equivalent to a C++ static_cast, and reinterpret_cast will complain that it can't perform the conversion. I will give you two options: a simple solution that will not initialize the variable directly, and a complicated one. You can do the following:

double var; *reinterpret_cast<long *>(&var) = 0xFFFF;

Note: watch out, since I would expect you want to initialize all 64 bits of the double; your constant 0xFFFF seems small, and it gives 3.23786e-319.

A literal value that begins with 0x is a hexadecimal number whose type is the first of int, unsigned int, long, unsigned long that can represent it. You should use the suffix ul to make it a literal of unsigned long, which on most architectures means a 64-bit unsigned; or #include <cstdint> and write, for example, uint64_t(0xABCDFE13).

Now for the complicated stuff: in old C++ you can write a function that converts the integral constant to a double, but it won't be constexpr.

In constexpr functions you can't use reinterpret_cast. So your only choice for a constexpr converter to double is to go through a union, for example:

struct longOrDouble {
    union {
        unsigned long asLong;
        double asDouble;
    };
    constexpr longOrDouble(unsigned long v) noexcept: asLong(v) {}
};

constexpr double toDouble(long v) { return longOrDouble(v).asDouble; }

This is a bit complicated, but it answers your question. Now you can write double var = toDouble(0xFFFF); and this will insert the given bit pattern into the double.

Using a union to write to one member and read from another is undefined behavior in C++; there is an excellent question with excellent answers on this right here: Accessing inactive union member and undefined behavior?

Upvotes: 3

πάντα ῥεῖ

Reputation: 1

"I was wondering if there is a way to manipulate this interpretation."

Yes, you can use a reinterpret_cast<double&> via the variable's address, to force type (re-)interpretation of a certain bit pattern in memory.

"Thus my question: How can I force double interpretation of the hex notation?"

You can also use a union, to make it clearer:

#include <cstdint>
#include <iostream>

union uint64_2_double {
    uint64_t bits;
    double dValue;
};

uint64_2_double x;
x.bits = 0x000000000000FFFF;

std::cout << x.dValue << std::endl;

Upvotes: 3

Anton Savin

Reputation: 41301

C++ doesn't specify the in-memory representation of double; moreover, it doesn't even specify the in-memory representation of integer types (which can really differ between systems of different endianness). So if you want to interpret the bytes 0xFF, 0xFF as a double, you can do something like:

#include <cstdint>
#include <cstring>

uint8_t bytes[sizeof(double)] = {0xFF, 0xFF};  // remaining bytes are zero-initialized
double var;
memcpy(&var, bytes, sizeof(double));

Note that using unions or reinterpret_casting pointers is, strictly speaking, undefined behavior, though in practice it also works.

Upvotes: 3
