Reputation: 3797
I have a code snippet (up.cpp) like this:
#include <stdio.h>

typedef unsigned long long Uint64;

int main()
{
    void *p = (void*)0xC0001234;
    Uint64 u64 = (Uint64)p;
    printf("{%llx}\n", u64);
    return 0;
}
Compiling it with 32-bit gcc 4.8.1, I get output:
{ffffffffc0001234}
Compiling it with 64-bit gcc 4.8.1, I get output:
{c0001234}
Yes, the 64-bit build prints the plain 32-bit value, while the 32-bit build sign-extends it to 64 bits. The gcc 4.8.1 is from openSUSE 13.1.
I also tried it with the Visual C++ 2010 x86 and x64 compilers (with a slight code change: __int64
and %I64x
), and surprisingly got the same results.
Of course, I intend to get {c0001234}
on both x86 and x64. But why is there such a difference?
Upvotes: 3
Views: 1756
Reputation: 320719
The behavior of this
Uint64 u64 = (Uint64)p;
is not defined by the language. It is implementation-defined.
While 64-bit platforms will probably implement this as a purely conceptual conversion (the pointer value "fills" the entire target), on 32-bit platforms implementations face a dilemma: how should a 32-bit pointer value be extended to a 64-bit integer value, with sign-extension or without? Apparently, your implementations decided to sign-extend the value.
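The two choices can be modeled with plain integer arithmetic. Here is a minimal sketch (assuming two's complement; int32_t and uint32_t from <stdint.h> stand in for the 32-bit pointer bits):

#include <stdio.h>
#include <stdint.h>

int main()
{
    uint32_t bits = 0xC0001234;  /* the raw 32-bit pointer bits */

    /* Zero-extension: widen the bits as an unsigned value. */
    unsigned long long zext = (unsigned long long)bits;

    /* Sign-extension: reinterpret the bits as signed first, then widen. */
    unsigned long long sext = (unsigned long long)(int32_t)bits;

    printf("{%llx}\n", zext);  /* prints {c0001234} */
    printf("{%llx}\n", sext);  /* prints {ffffffffc0001234} */
    return 0;
}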
In other words, your implementations seem to believe that in a pointer-to-unsigned conversion the original pointer value should act as a signed value of sorts. That would be a rather strange decision, though; I cannot reproduce it in GCC: http://coliru.stacked-crooked.com/a/9089ccda625bd65d
If that's really what happens on your platform, you should be able to suppress this behavior by converting to uintptr_t
first (as an intermediate type), as suggested by @barak manos in the comments.
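A minimal sketch of that workaround applied to the original snippet (uintptr_t lives in <stdint.h>; on a 32-bit target it is a 32-bit unsigned type, so widening it further cannot sign-extend):

#include <stdio.h>
#include <stdint.h>  /* uintptr_t */

typedef unsigned long long Uint64;

int main()
{
    void *p = (void*)0xC0001234;
    /* Go through uintptr_t: an unsigned integer type wide enough to
       hold a pointer. Widening an unsigned value to Uint64 is a
       zero-extension, so no sign bit can leak in. */
    Uint64 u64 = (Uint64)(uintptr_t)p;
    printf("{%llx}\n", u64);  /* {c0001234} on both 32- and 64-bit builds */
    return 0;
}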
Upvotes: 5