Steve

Reputation: 731

What does an overflow mean, exactly?

I have read two definitions of what an overflow means.

Say that we have the following addition:

 11111111
 00000001
 --------
100000000

The first definition I have read is that if the result does not fit into 8 bits (it needs 9 bits in this example), then this is called an overflow.


The other definition I have read is that if we have an addition of two signed integers (two's complement integers, for example):

 10011001
 10111011
 --------
101010100

Then if the 8-bit result (01010100) has a sign that is different from the sign of the two operands (and it is different in this example), then this is called an overflow.

Which definition is correct?

Upvotes: 1

Views: 996

Answers (2)

Aki Suihkonen

Reputation: 20037

Both cases would have been solved by adding one more bit to the result. As the Wikipedia page for integer overflow states, an integer overflow occurs when the result of an operation does not fit in the target register/variable. E.g. 0x10000 * 0x10000 is also an overflow for 32-bit integers.
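
A minimal C sketch of that multiplication (assuming a C99 compiler and a typical platform), using a 64-bit intermediate to show that the true product, 0x100000000, needs 33 bits and so cannot fit in a 32-bit variable:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t a = 0x10000, b = 0x10000;

        uint64_t wide   = (uint64_t)a * b;  /* true product: 0x100000000 */
        uint32_t narrow = a * b;            /* wraps to 0 in 32 bits */

        printf("wide   = 0x%llx\n", (unsigned long long)wide);
        printf("narrow = 0x%x\n", narrow);
        printf("overflowed: %s\n", wide > UINT32_MAX ? "yes" : "no");
        return 0;
    }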

Typical processor architectures separate unsigned integer overflow and signed integer overflow into two distinct flags in the status register: Carry/Borrow for unsigned arithmetic and Overflow for signed arithmetic.
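
Where the compiler provides it, GCC and Clang's __builtin_add_overflow exposes the same distinction in C: it reports overflow according to the signedness of the result operand's type. A small sketch replaying the question's two additions (the builtin is a GCC/Clang extension, not standard C):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* First example: 11111111 + 00000001 as unsigned bytes. */
        uint8_t u;
        int carry = __builtin_add_overflow((uint8_t)0xFF, (uint8_t)0x01, &u);

        /* Second example: 10011001 + 10111011 as two's complement bytes
           (the casts to int8_t assume the usual two's complement platform). */
        int8_t s;
        int ovf = __builtin_add_overflow((int8_t)0x99, (int8_t)0xBB, &s);

        printf("unsigned: result=0x%02X carry=%d\n", u, carry);           /* 0x00, 1 */
        printf("signed:   result=0x%02X overflow=%d\n", (uint8_t)s, ovf); /* 0x54, 1 */
        return 0;
    }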

Upvotes: 6

Rob Anthony

Reputation: 1813

They are both correct. An overflow happens when a calculation produces a result that no longer fits in the stored format of the number, so the stored value is wrong.

So in the first case, the overflow happens because you need 9 bits and only 8 are available (meaning the number now stored in the byte does not accurately reflect the answer).

In the second case, the overflow happens because the result disturbs the sign bit, which also leaves the byte containing an incorrect value.
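
Both conditions can be checked by hand in C. A minimal sketch (assuming two's complement, as on all common hardware) replaying the question's two additions: unsigned overflow shows up as the sum wrapping below an operand, and signed overflow as the operands agreeing in sign while the result does not:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* First case: 0xFF + 0x01 needs 9 bits; the 8-bit sum wraps to 0. */
        uint8_t a = 0xFF, b = 0x01;
        uint8_t sum = a + b;
        int unsigned_overflow = sum < a;   /* wrapped, so the 9th bit was lost */

        /* Second case: 0x99 and 0xBB are both negative in two's complement,
           yet their 8-bit sum 0x54 is positive: the sign bit was disturbed. */
        uint8_t x = 0x99, y = 0xBB;
        uint8_t s = x + y;
        int signed_overflow = ((x ^ s) & (y ^ s) & 0x80) != 0;

        printf("unsigned: sum=0x%02X overflow=%d\n", sum, unsigned_overflow);
        printf("signed:   sum=0x%02X overflow=%d\n", s, signed_overflow);
        return 0;
    }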

Upvotes: 5
