Reputation: 1568
I'm looking for an explanation of the output below. I understand that there is a type conversion, but what happens when assigning a value to a double/int object? Which bits are filled, and which bits do the pointers point at?
#include <stdio.h>

int main() {
    double d = 2;
    double *pd = &d;
    int *pi = (int *) &d;
    printf("pi = %p, pd = %p, *pd = %f, *pi = %f\n", pi, pd, *pd, (double) *pi);
    return 0;
}
output:
pi = 0x7fff5504ba70, pd = 0x7fff5504ba70, *pd = 2.000000, *pi = 0.000000
Upvotes: 1
Views: 399
Reputation: 3054
pi = 0x7fff5504ba70: it is the address of d.
pd = 0x7fff5504ba70: it is the address of d too, regardless of the pointer type.
*pd = 2.000000: the value of d, read through a pointer of the correct type.
*pi = 0.000000: it is the value of the bits at 0x7fff5504ba70 interpreted as an integer (because pi is a pointer to an int), and then converted to a double (because of the cast). On my machine, 2.0 as a double is 0x4000000000000000. So the first 4 bytes are all 0 (because my CPU is little-endian, a double is 8 bytes, and an int is 4 bytes).
What is happening is that pi points to the same address as pd, but only the first 4 bytes are considered when you dereference pi, and those 4 bytes are all 0.
Upvotes: 0
Reputation: 45654
double *pd = &d;
int *pi = (int *) &d;

There is no guarantee that _Alignof(double) >= _Alignof(int), or that d just happens to be properly aligned. If it isn't, you have undefined behavior.

printf("pi = %p, pd = %p, *pd = %f, *pi = %f\n", pi, pd, *pd, (double) *pi);

Now that printf call is a bad beast:

- pi is passed for %p, which expects a void *, but is actually an int *. UB.
- pd is passed for %p, which expects a void *, but is actually a double *. UB again.
- Reading *pi is an aliasing violation. UB yet again.

In practice, things aren't nearly as dire on modern systems. Only the last one might be dangerous there, though none of them is defined.
What you are seeing can be easily explained, assuming none of the UB above does anything "surprising" (which might itself be surprising), and that the initializer was actually 2 and not 4.
Also assuming that int is 32 bits and double is in IEEE double-precision floating-point format, stored little-endian, the bits of 2.0 are:

bit 63      sign bit: 0 (positive)
bits 62-52  biased exponent: 1023 + 1 == 1024 == 100 0000 0000 in binary
bits 51-0   mantissa: all zeroes (the implicit 1 before the binary point is never stored)

Thus, the low-order 32 bits are zero, and those are what you pick up by interpreting them as an int.
Upvotes: 1
Reputation: 148910
As soon as you try to interpret the internal representation of one type (here double) as the internal representation of another type (here int), you get undefined behaviour: this is not covered by any version of the C specification.
Said differently, it is compiler- and architecture-dependent. Many modern architectures use IEEE 754 for floating-point numbers, that is, a mantissa (or significand) part and an exponent part, both in base 2.
Name     | Common name      | Base | Significand digits | E min | E max | Decimal digits | Decimal E max
binary32 | Single precision | 2    | 24                 | −126  | +127  | 7.22           | 38.23
binary64 | Double precision | 2    | 53                 | −1022 | +1023 | 15.95          | 307.95
But nothing forces a particular architecture to use such a representation, so your code leads to UB unless you document it as being adapted to one particular system.
Upvotes: 0
Reputation: 4752
You are lying to the compiler.
You are pointing pi at the address of d. This doesn't change the fact that pi points at an object that still has the type and bit representation of a double.
When you access (double) *pi, your program will try to interpret the bits of the double as if they were an int. The result will (typically) be a nonsense value, which is then converted to double.
It may even crash your program if the pointer is not suitably aligned for accessing an int.
Upvotes: 0