Reputation: 61
The following C code will result in passing 0xFFFFFFFFFFFFFFFF to malloc() instead of the expected 0 when compiled for x64 by Visual Studio 2013:
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int x = -1;
    void *p = malloc(x + 1);
}
Opening the disassembly view reveals this strange snippet (Debug configuration, although Release is functionally the same):
; int x = -1;
mov dword ptr [x],0FFFFFFFFh
; void *p = malloc(x + 1);
mov eax,dword ptr [x]
add eax,1
mov eax,eax
mov rcx,0FFFFFFFFFFFFFFFFh
cmovb rax,rcx
mov rcx,rax
call qword ptr [__imp_malloc (07F79C80B228h)]
mov qword ptr [p],rax
Casting to size_t doesn't change anything, but storing the result in a temporary variable and then passing that to malloc() does.
Strangely, this does not happen when calling any other function declared the same way:
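A minimal sketch of that temporary-variable workaround, based purely on the behavior described above (same VS 2013 x64 toolchain assumed):

#include <stdlib.h>

int main(int argc, char *argv[]) {
    int x = -1;
    size_t n = x + 1;    /* evaluated as int, so n == 0; the conversion happens at the assignment */
    void *p = malloc(n); /* per the observation above, the saturating cmovb sequence is not emitted here */
    free(p);
    return 0;
}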
#include <stddef.h>

void * __cdecl foo(size_t y) {
    return NULL;
}

int main(int argc, char *argv[]) {
    int x = -1;
    void *p = foo(x + 1);
}
In this case, the correct code is generated (note the absence of the cmovb sequence):
; int x = -1;
mov dword ptr [x],0FFFFFFFFh
; void *p = foo(x + 1);
mov eax,dword ptr [x]
inc eax
cdqe
mov rcx,rax
call foo (07F6AB84100Ah)
mov qword ptr [p],rax
I hesitate to call this a code generation bug; I have to assume it's something I'm missing. However, I've never seen this before, and it certainly produces incorrect behavior.
Why is this happening?
Upvotes: 0
Views: 324
Reputation: 54614
It's a safeguard against integer overflow (as referenced in the comments here).
If the value passed to malloc is the result of an integer overflow (signed or unsigned), the compiler maxes out the expression to SIZE_MAX and attempts to allocate that, rather than letting the program allocate less memory than the computation was meant to produce; an allocation of SIZE_MAX bytes simply fails instead of handing back an undersized buffer. In the disassembly above, add eax,1 sets the carry flag (0xFFFFFFFF + 1 wraps as a 32-bit unsigned operation), so cmovb substitutes 0FFFFFFFFFFFFFFFFh for the size even though the signed expression x + 1 is well-defined and equal to 0.
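For illustration, this is the kind of size computation such a safeguard targets (a hypothetical example, not from the post above; whether this compiler version also saturates a wrapped multiplication is not shown here):

#include <stdlib.h>

int main(void) {
    unsigned int count = 0x40000001u;  /* hypothetical attacker-influenced element count */
    /* count * 4u is evaluated in 32-bit unsigned arithmetic and wraps to 4, so an
       unprotected build would hand malloc a 4-byte request in place of the intended
       ~4 GB one; saturating the size to SIZE_MAX instead makes the allocation fail
       outright rather than return an undersized buffer. */
    void *p = malloc(count * 4u);
    free(p);
    return 0;
}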
Upvotes: 2