Fire Lancer

Reputation: 30145

How near to the maximum pointer value can an object be to avoid overflow?

How close to the maximum value can a valid pointer be (as a global, allocated on the stack, malloc, new, VirtualAlloc, or any other alloc method a program/library might use), such that ptr + n risks overflowing?

I come across a lot of code that adds offsets to pointers when dealing with strings/arrays (in C++, sometimes also in a generic "random access iterator" template function).

e.g.

auto begin = arr_ptr;
auto end = arr_ptr + len; // or just whatever some_container.end() returns
for (auto i = begin; i < end; ++i) { ... }
for (auto i = begin; i + 2 <= end; i += 2) { ...i[0]...i[1]... }
if (arr_ptr + 4 <= end && memcmp(arr_ptr, "test", 4) == 0) { ... }
if (arr_ptr + count > end) resize(...);

Would it be valid for the last array element to end at 0xFFFFFFFF (assuming a 32-bit system), such that end == 0? If not, how close can it be?

I think always using p != end (and only ever adding 1), or taking the length as len = end - begin and then working with that (e.g. (end - begin) >= 4), is safe. But I'm wondering whether this is actually an issue to look out for, and worth auditing and changing existing code over.
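
For example, length-based versions of the last two checks above might look like this (just a sketch using the same names, and assuming count is a size_t):

if (end - arr_ptr >= 4 && memcmp(arr_ptr, "test", 4) == 0) { ... } // never forms arr_ptr + 4
if (count > static_cast<size_t>(end - arr_ptr)) resize(...); // never forms arr_ptr + count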

Upvotes: 1

Views: 863

Answers (1)

Steve Jessop

Reputation: 279355

The standard doesn't talk about pointer overflow; it talks about what pointer values can legitimately be formed by pointer arithmetic. Simply put, the legitimate range is pointers into your object/array plus a one-past-the-end pointer.
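
As a minimal illustration (not code from the question, just the standard's rule applied to a small local array):

int a[4];
int* end = a + 4; // one-past-the-end: legitimate to form and compare, but not to dereference
// int* bad = a + 5; // undefined behaviour merely to form this value
// int* neg = a - 1; // likewise undefined behaviour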

Then, it is the responsibility of the C or C++ implementation not to create any objects in locations where some implementation-specific danger like pointer overflow prevents those legitimate pointer values from working correctly.

So neither malloc etc. nor the stack (presuming you haven't exceeded any stack bounds) will give you an array of char starting at an address to which you cannot (due to overflow) add the size of the array.
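
Put differently (a sketch, assuming a plain std::malloc of len bytes):

char* p = static_cast<char*>(std::malloc(len));
if (p != nullptr) {
    char* end = p + len; // guaranteed to be a usable one-past-the-end value
    // ... work with the range [p, end) ...
    std::free(p);
}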

how close can it be?

As close as still allows all the required pointer values to work correctly. So on your hypothetical 32-bit system, 0xFFFFFFFE would be the highest possible starting address for a 1-byte object. The standard doesn't permit you to add 2 to that address, so it "doesn't matter" that doing so would overflow, as far as the implementation is concerned. For a 2-byte object the maximum start would be 0xFFFFFFFD if the type is unaligned, but that's an odd number, so 0xFFFFFFFC if it requires 2-alignment.
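
Purely as a numeric illustration of that arithmetic (std::uint32_t standing in for a 32-bit pointer value; these aren't real allocations):

std::uint32_t one_byte_start = 0xFFFFFFFE; // highest possible start for a 1-byte object
std::uint32_t two_byte_start = 0xFFFFFFFD; // highest possible start for an unaligned 2-byte object
// one_byte_start + 1 == 0xFFFFFFFF  - the one-past-the-end value is still representable
// two_byte_start + 2 == 0xFFFFFFFF  - likewise
// 0xFFFFFFFFu + 1    == 0           - a 1-byte object starting at 0xFFFFFFFF would wrap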

Of course, other implementation details might dictate a lower limit. For example, it's not unusual for a system to reserve a page of memory either side of 0 and make it inaccessible. This helps catch errors where someone has accessed a null pointer with a small offset. Granted, this is more likely to happen with positive offsets than negative, but still. If your 32-bit system decided to do that, then malloc would need to take account of it, and would never return 0xFFFFFFFE.

Upvotes: 3
