I have a ring buffer that stores its head and tail indices as unsigned integer values. According to this source, it is enough to wrap only at retrieval of the indices and otherwise let the unsigned behaviour take care of the wrap-around at the type's maximum value. Is this true for all implementations, or am I relying on undefined behaviour here?
#include <iostream>
#include <cstddef>
#include <limits>

int main()
{
    size_t max = std::numeric_limits<size_t>::max();

    // tail wraps past max (max + 5 becomes 4), yet the difference
    // tail - head is still 10.
    size_t head = max - 5;
    size_t tail = max + 5;
    size_t len = tail - head;
    std::cout << len << std::endl;

    // Same distance, expressed as an offset from head.
    head = max - 5;
    tail = head + 10;
    len = tail - head;
    std::cout << len << std::endl;

    // Advancing both indices across the wrap point preserves the distance.
    head += 20;
    tail += 20;
    len = tail - head;
    std::cout << len << std::endl;
}
Output:
10
10
10
Arithmetic operations on unsigned integer types are defined by the language standard to "wrap around" from the maximum representable value back to 0. Quoting basic.fundamental p2:
arithmetic for the unsigned type is performed modulo 2^N.
(where N is the number of bits in the representation).
Consequently, the operations shown in the example code, in which size_t
quantities are incremented past the maximum value, and subtracted when
one is near the maximum and the other near zero, are all well-defined,
and the output shown is portable.
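As a quick self-check, both wrap directions can be expressed as compile-time assertions (a minimal sketch of mine, assuming a C++11 compiler; it produces no output and simply fails to compile if the guarantee did not hold):

#include <cstddef>
#include <limits>

// Unsigned arithmetic is performed modulo 2^N, so both assertions hold
// on every conforming implementation.
static_assert(std::numeric_limits<std::size_t>::max() + std::size_t{1} == 0,
              "max + 1 wraps to 0");
static_assert(std::size_t{0} - std::size_t{1}
              == std::numeric_limits<std::size_t>::max(),
              "0 - 1 wraps to max");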
The question asks whether it is "enough to wrap only at retrieval of the indices". The idea, as I understand it, is to do something like:
#include <cstddef>

typedef int /*or whatever type*/ DataValue;

const size_t BUFFER_SIZE = 16;   // illustrative capacity; the choice matters, as shown below

DataValue ringBuffer[BUFFER_SIZE];
size_t head = 0;
size_t tail = 0;

// Add an element to the ring buffer.
void addElement(DataValue value)
{
    // Increment head without an explicit bound, allowing it to wrap when
    // it reaches the maximum value.
    ++head;

    // Limit the index to the array size using `%` only at the point it is
    // used as an index.
    ringBuffer[head % BUFFER_SIZE] = value;
}
Whether this is safe depends on BUFFER_SIZE.

If BUFFER_SIZE is always a power of 2, then this is safe: 2^N is evenly divisible by any power of 2, so head % BUFFER_SIZE continues in sequence when head wraps from its maximum back to 0.

But if BUFFER_SIZE is not a power of 2, then this is not safe, because when head finally wraps at its maximum, the effective array index will not wrap properly. For example, if BUFFER_SIZE is 10, and size_t is a 32-bit quantity with maximum value 4294967295, then near its limit, we will have:
      head   head % BUFFER_SIZE
----------   ------------------
4294967293            3
4294967294            4
4294967295            5
         0            0   <--- oops, should have been 6!
         1            1
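The discontinuity is easy to reproduce with a deliberately small counter type (a minimal sketch of mine, using uint8_t so the wrap is reachable in a few steps; the values mirror the table above):

#include <cstdint>
#include <iostream>

int main()
{
    const unsigned BUFFER_SIZE = 10;   // not a power of 2
    std::uint8_t head = 253;           // 8-bit counter: wraps from 255 to 0

    for (int i = 0; i < 5; ++i) {
        // +head promotes to int so it prints as a number, not a character.
        std::cout << +head << " -> " << head % BUFFER_SIZE << "\n";
        ++head;
    }
}

Output:
253 -> 3
254 -> 4
255 -> 5
0 -> 0
1 -> 1

The jump from 5 to 0 (instead of 6) is exactly the failure shown above. If you nonetheless rely on the power-of-2 property, a compile-time guard such as static_assert((BUFFER_SIZE & (BUFFER_SIZE - 1)) == 0, "BUFFER_SIZE must be a power of 2"); documents the assumption.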
So, should you write the code this way when BUFFER_SIZE is a power of 2? No! Even if it is arithmetically safe, letting the index get out of range and then pulling it back with % is not a good software engineering choice. Instead, you should do something straightforward like:

if (++head == BUFFER_SIZE) {
    head = 0;
}
Here are a few reasons to prefer the simple increment-and-check approach over an unbounded increment:

- There is no realistic scenario in which the unbounded-increment code runs measurably faster. Yes, it avoids one branch, but you need to check against tail anyway (a fuller sketch follows this list), it could easily be slower depending on exactly what % compiles into, and this level of optimization is exceedingly unlikely to be relevant in any case.

- It only works for particular buffer sizes, which is at best a future maintenance hazard.

- It makes debugging needlessly difficult, because head is not directly comparable to the array size.

- There is always a lurking corner case when head in fact reaches its maximum value. Even with a 32-bit counter, you'll have to work hard in testing to reach the limit in order to verify that it works correctly, and with a 64-bit counter, overflowing it during testing (if incrementing by 1 and starting at 0) is infeasible: it would take more than a century at a few billion increments per second. You never want untested code paths that could be hit in production if the code runs for long enough.
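For completeness, here is a fuller sketch of the bounded-index style (my own illustration rather than code from the question; the next() helper, the count variable, and the full/empty checks are assumptions made for the example):

#include <cstddef>

typedef int DataValue;
const std::size_t BUFFER_SIZE = 10;   // any size works with bounded indices

DataValue ringBuffer[BUFFER_SIZE];
std::size_t head = 0;    // next slot to write
std::size_t tail = 0;    // next slot to read
std::size_t count = 0;   // number of stored elements

// Advance an index with an explicit, size-agnostic wrap.
std::size_t next(std::size_t i)
{
    return (i + 1 == BUFFER_SIZE) ? 0 : i + 1;
}

bool push(DataValue value)
{
    if (count == BUFFER_SIZE) return false;   // buffer full
    ringBuffer[head] = value;
    head = next(head);
    ++count;
    return true;
}

bool pop(DataValue *out)
{
    if (count == 0) return false;             // buffer empty
    *out = ringBuffer[tail];
    tail = next(tail);
    --count;
    return true;
}

Both indices always stay in [0, BUFFER_SIZE), so they are directly comparable to the array size while debugging, and there is no wrap-around corner case at the counter's maximum.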