Reputation: 12998
In this question the topic is how to make VS check for an arithmetic overflow in C# and throw an Exception: C# Overflow not Working? How to enable Overflow Checking?
One of the comments stated something weird and was upvoted a lot; I hope you can help me out here:
You can also use the checked keyword to wrap a statement or a set of statements so that they are explicitly checked for arithmetic overflow. Setting the project-wide property is a little risky because oftentimes overflow is a fairly reasonable expectation.
I don't know much about hardware, but I am aware that overflow has to do with the way registers work. I always thought overflow causes undefined behaviour and should be prevented where possible (in 'normal' projects, not when writing malicious code).
Why would you ever expect an overflow to happen and why wouldn't you always prevent it if you have the possibility? (by setting the corresponding compiler option)
Upvotes: 25
Views: 3178
Reputation: 65599
Angles
Integers that overflow are elegant tools for measuring angles. You have 0 == 0 degrees and 0xFFFFFFFF == 359.999... degrees. It's very convenient, because as 32-bit integers you can add/subtract angles (350 degrees plus 20 degrees overflows and wraps back around to 10 degrees). You can also decide to treat the same 32-bit integer as signed (-180 to 180 degrees) or unsigned (0 to 360 degrees): read as signed, 0xFFFFFFFF equates to just under 0 degrees, which is the same angle as 359.999... degrees unsigned. Very elegant.
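To make this concrete, here is a minimal sketch of binary angular measurement in C# (the helper names `ToBam` and `ToDegrees` are mine, not from the answer), where unchecked wrap-around performs the modulo-360 arithmetic for free:

```csharp
using System;

// Map the full uint range onto 0..360 degrees (a "binary angle").
static uint ToBam(double degrees) => (uint)(degrees / 360.0 * 4294967296.0);
static double ToDegrees(uint bam) => bam * 360.0 / 4294967296.0;

uint a = ToBam(350.0);
uint b = ToBam(20.0);
uint sum = unchecked(a + b);         // wraps past 0xFFFFFFFF
Console.WriteLine(ToDegrees(sum));   // approximately 10 degrees
```

Note that no explicit modulo operation appears anywhere: the truncation to 32 bits is the modulo.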
Upvotes: 6
Reputation: 1
All integer arithmetic (adds, subtracts and multiplies, at least) is exact. It is just the interpretation of the resulting bits that you need to be careful of. In the two's complement system, you get the correct result modulo 2 to the power of the number of bits. The only difference between signed and unsigned is that for signed numbers the most significant bit is treated as a sign bit. It's up to the programmer to determine what is appropriate. Obviously for some computations you want to know about an overflow and take appropriate action if one is detected.
Personally, I've never needed overflow detection. I use a linear congruential random number generator that relies on it: a 64-bit × 64-bit unsigned integer multiplication of which I only care about the lowest 64 bits. I get the modulo operation for free because of the truncation.
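A sketch of such a generator (the constants below are Knuth's MMIX LCG constants, chosen for illustration; the answerer's exact generator isn't given). The multiply overflows on almost every step, and that truncation to the low 64 bits is precisely the modulo-2^64 reduction the algorithm needs:

```csharp
using System;

// 64-bit linear congruential generator: state = state * a + c (mod 2^64).
// The modulo happens implicitly because ulong keeps only the low 64 bits.
ulong state = 42;

ulong NextRandom()
{
    unchecked
    {
        state = state * 6364136223846793005UL + 1442695040888963407UL;
    }
    return state;
}

Console.WriteLine(NextRandom());
```

With project-wide checked arithmetic, the `unchecked` block is what keeps this from throwing on nearly every call.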
Upvotes: 0
Reputation: 941277
When doing calculations with constantly incrementing counters. A classic example is Environment.TickCount:
int start = Environment.TickCount;
DoSomething();
int end = Environment.TickCount;
int executionTime = end - start;
If that were checked, the program could bomb about 25 days after Windows was booted, at the moment TickCount ticks past int.MaxValue while DoSomething() is running. PerformanceCounter is another example.
These types of calculations produce an accurate result, even though overflow is present. A secondary example is the kind of math you do to generate a representative bit pattern, you're not really interested in an accurate result, just a reproducible one. Examples of those are checksums, hashes and random numbers.
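The subtraction trick can be sketched directly; the values below simulate a sample taken just before TickCount wraps (they are illustrative, not real tick counts):

```csharp
// Two's-complement subtraction is exact modulo 2^32, so the elapsed
// time is correct even though the counter wrapped in between.
int start = int.MaxValue - 5;                // sampled just before the wrap
int end = unchecked(start + 10);             // counter wrapped into negative numbers
int executionTime = unchecked(end - start);  // still 10
```

In a checked context the wrap itself, and the subtraction of a large positive from a large negative, would each throw OverflowException; unchecked, both cancel out exactly.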
Upvotes: 9
Reputation: 1035
An integer overflow goes like this.
You have an 8-bit integer, 1111 1111; now add 1 to it and you get 0000 0000: the leading 1 gets truncated, since it would be in the ninth position.
Now say you have a signed integer, where the leading bit means the value is negative. Start from 0111 1111 (127), add 1, and you have 1000 0000, which is -128. In this case, adding 1 to 127 made it wrap around to negative.
I'm very sure overflows behave in a well-determined manner, but I'm not sure about underflows.
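In C# the same two examples look like this; note the casts, because byte and sbyte arithmetic is performed as int (this sketches the well-defined wrap-around, not the default checked/unchecked policy):

```csharp
byte b = 255;                      // 1111 1111
b = unchecked((byte)(b + 1));      // 0000 0000: the ninth bit is truncated

sbyte s = 127;                     // 0111 1111
s = unchecked((sbyte)(s + 1));     // 1000 0000 == -128
```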
Upvotes: 0
Reputation: 538
You might expect it on something that is measured for deltas. Some networking equipment keeps counter sizes small, and you can poll for a value, say bytes transferred. If the value gets too big it just overflows back to zero. It still gives you something useful if you're measuring it frequently (bytes/minute, bytes/hour), and since the counters are usually cleared when a connection drops, it doesn't matter that they are not entirely accurate.
As Justin mentioned, buffer overflows are a different kettle of fish. That is where you write past the end of an array into memory that you shouldn't touch. In a numeric overflow, the same amount of memory is used; in a buffer overflow you use memory you didn't allocate. Buffer overflow is prevented automatically in some languages.
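A sketch of the polling idea, with hypothetical counter values chosen so the counter wraps between two polls; unsigned subtraction still yields the right delta:

```csharp
uint previousPoll = 0xFFFFFF00u;   // bytes-transferred counter, near the top
uint currentPoll  = 0x00000100u;   // counter wrapped between polls
uint delta = unchecked(currentPoll - previousPoll);  // 0x200 bytes, still correct
```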
Upvotes: 2
Reputation: 185842
This probably has as much to do with history as with any technical reasons. Integer overflow has very often been used to good effect by algorithms that rely on the behaviour (hashing algorithms in particular).
Also, most CPUs are designed to allow overflow, but set a carry bit in the process, which makes it easier to implement addition over longer-than-natural word-sizes. To implement checked operations in this context would mean adding code to raise an exception if the carry flag is set. Not a huge imposition, but one that the compiler writers probably didn't want to foist upon people without choice.
The alternative would be to check by default, but offer an unchecked option. Why this isn't so probably also goes back to history.
Upvotes: 3
Reputation: 837996
why wouldn't you always prevent it if you have the possibility?
The reason checked arithmetic is not enabled by default is that checked arithmetic is slower than unchecked arithmetic. If performance isn't an issue for you, it would probably make sense to enable checked arithmetic, as an overflow occurring is usually an error.
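The trade-off in one sketch: the same addition either wraps silently or throws, depending on the context it runs in:

```csharp
using System;

int big = int.MaxValue;
int wrapped = unchecked(big + 1);    // wraps to int.MinValue, no exception

try
{
    int boom = checked(big + 1);     // throws OverflowException at runtime
    Console.WriteLine(boom);         // never reached
}
catch (OverflowException)
{
    Console.WriteLine("overflow detected");
}
```

The per-operation cost of `checked` is the branch on the overflow flag after every arithmetic instruction, which is what the answer's performance point refers to.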
Upvotes: 3
Reputation: 6711
I always thought overflow causes undefined behaviour and should be prevented where possible.
You may also be confused about the difference between buffer overflow (overrun) and numeric overflow.
Buffer overflow is when data is written past the end of an unmanaged array. It can cause undefined behavior, doing things like overwriting the return address on the stack with user-entered data. Buffer overflow is difficult to do in managed code.
Numeric overflow, however, is well defined. For example, if you have an 8-bit register, it can only store 2^8 values (0 to 255 if unsigned). So if you add 100+200, you would not get 300, but 300 modulo 256, which is 44. The story is a little more complicated using signed types; the bit pattern is incremented in a similar manner, but they are interpreted as two's complement, so adding two positive numbers can give a negative number.
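Both claims from that paragraph, sketched in C# (the `unchecked` casts are required because the constant results don't fit the target types):

```csharp
byte u = unchecked((byte)(100 + 200));    // 300 mod 256 == 44
sbyte t = unchecked((sbyte)(100 + 100));  // bit pattern 1100 1000, read as -56
```

The second line is the "adding two positive numbers can give a negative number" case: the sum sets the sign bit, and two's complement interpretation does the rest.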
Upvotes: 10
Reputation: 67195
This isn't so much related to how registers work as it is just the limits of the memory in variables that store data. (You can overflow a variable in memory without overflowing any registers.)
But to answer your question, consider the simplest type of checksum. It's simply the sum of all the data being checked. If the checksum overflows, that's okay and the part that didn't overflow is still meaningful.
Other reasons might include that you just want your program to keep running even though an inconsequential variable may have overflowed.
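A sketch of that simplest checksum: an 8-bit accumulator that is allowed to wrap, where the surviving low-order bits are still meaningful:

```csharp
byte checksum = 0;
foreach (byte dataByte in new byte[] { 0xFF, 0x10, 0x20 })
    checksum = unchecked((byte)(checksum + dataByte));
// 0xFF + 0x10 + 0x20 == 0x12F; the overflow bit is dropped, leaving 0x2F
```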
Upvotes: 1
Reputation: 1927
One more possible situation which I could imagine is a random number generation algorithm: we don't care about overflow in that case, because all we want is a random number.
Upvotes: 0
Reputation: 30111
When generating HashCodes, say from a string of characters.
Upvotes: 3
Reputation: 1499860
The main time when I want overflow is computing hash codes. There, the actual numeric magnitude of the result doesn't matter at all - it's effectively just a bit pattern which I happen to be manipulating with arithmetic operations.
We have checked arithmetic turned on project-wide for Noda Time - I'd rather throw an exception than return incorrect data. I suspect that it's pretty rare for overflows to be desirable... I'll admit I usually leave the default to unchecked arithmetic, just because it's the default. There's the speed penalty as well, of course...
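The hash-code case typically looks like the classic multiply-and-add combine, sketched below (my illustration, not Noda Time code); with project-wide checked arithmetic, the `unchecked` block is the local opt-out that makes this pattern coexist with it:

```csharp
// Classic multiply-and-add hash combining: the int multiply overflows
// freely, and only the resulting bit pattern matters, not the magnitude.
int HashCombine(string s)
{
    unchecked
    {
        int hash = 17;
        foreach (char c in s)
            hash = hash * 31 + c;
        return hash;
    }
}
```

The result is reproducible for equal inputs, which is the only property a hash code needs.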
Upvotes: 36
Reputation: 11680
There is one classic story about a programmer who took advantage of overflow in the design of a program:
Upvotes: 1