Reputation: 7489
To be clear, I am not looking for NaN or infinity, or asking what the answer to x/0
should be. What I'm looking for is this:
Based on how division is performed in hardware (I do not know how it is done), if division were to be performed with a divisor of 0, and the processor just chugged along happily through the operation, what would come out of it?
I realize this is highly dependent on the dividend, so for a concrete answer I ask this: what would a computer spit out if it followed its standard division operation on 42 / 0?
Update:
I'll try to be a little clearer. I'm asking about the actual operations done with the numbers at the bit level to reach a solution. The result of the operation is just bits. NaN and errors/exceptions come into play when the divisor is discovered to be zero. If the division actually happened, what bits would come out?
Upvotes: 12
Views: 6158
Reputation: 382822
On x86, interrupt 0 occurs and the output registers are unchanged
Minimal 16-bit real mode example (to be added to a bootloader for example):
movw $handler, 0x00
movw %cs, 0x02
mov $0, %ax
div %ax
/* After iret, we come here. */
hlt
handler:
/* After div, we come here. */
iret
How to run this code in detail || 32-bit version.
The Intel documentation for the DIV instruction does not say that the regular output registers (ax == quotient, dx == remainder) are modified, so I think this implies they stay unchanged.
Linux then handles that interrupt and sends a SIGFPE to the process that did it, which will kill the process if not handled.
Upvotes: 1
Reputation: 1
X / 0, where X is an element of the real numbers and is greater than or equal to 1; therefore the answer of X / 0 = infinity.
Division method (c#)
int counter = 0; /* used to keep track of the division */
int x = 42;      /* dividend */
int y = 0;       /* divisor */
while (x > 0) {
    x = x - y;
    counter++;
}
int answer = counter; /* never reached: with y == 0 the loop never terminates */
Upvotes: -1
Reputation: 54981
It might just not halt. Integer division can be carried out in linear time through repeated subtraction: for 7/2, you can subtract 2 from 7 a total of 3 times, so that’s the quotient, and the remainder (modulus) is 1. If you were to supply a dividend of 0 to an algorithm like that, unless there were a mechanism in place to prevent it, the algorithm would not halt: you can subtract 0 from 42 an infinite number of times without ever getting anywhere.
From a type perspective, this should be intuitive. The result of an undefined computation or a non-halting one is ⊥ (“bottom”), the undefined value inhabiting every type. Division by zero is not defined on the integers, so it should rightfully produce ⊥ by raising an error or failing to terminate. The former is probably preferable. ;)
Other, more efficient (logarithmic-time) division algorithms rely on series that converge to the quotient; for a divisor of 0, as far as I can tell, these will either fail to converge (i.e., fail to terminate) or produce 0. See Division on Wikipedia.
Floating-point division similarly needs a special case: to divide two floats, subtract their exponents and integer-divide their significands. Same underlying algorithm, same problem. That’s why there are representations in IEEE-754 for positive and negative infinity, as well as signed zero and NaN (for 0/0).
Upvotes: 13
Reputation: 272497
Hardware dividers typically use a pipelined long division structure.
Assuming we're talking about integer division for now (as opposed to floating-point), the first step in long division is to align the most-significant ones (before attempting to subtract the divisor from the dividend). Clearly, this is undefined in the case of 0, so who knows what the hardware would do. If we assume it does something sane, the next step is to perform n subtractions (where n is the number of bit positions). For every subtraction that results in a non-negative result, a 1 is set in the output word. So the output from this step would be an all-1s word.
Floating-point division requires three steps:
- Produce a new mantissa by dividing the input mantissas (using the same sort of hardware as integer division).
- Produce a new exponent by subtracting the input exponents.
- Normalize the result.
0 is represented by all-0s (both the mantissa and the exponent). However, there's always an implied leading 1 in the mantissa, so if we weren't treating this representation as a special case, it would just look and act like an extremely small power of 2.
Upvotes: 5
Reputation: 5786
It would actually spit out an exception. Mathematically, 42 / 0 is undefined, so computers won't spit out a specific value for these inputs. I know that division can be done in hardware, but well-designed hardware will have some sort of flag or interrupt to tell you that whatever value is contained in the result registers is not valid. Many computers turn this into an exception.
Upvotes: 1
Reputation: 118631
It would be an infinite loop. Typically, division is done through repeated subtraction, just like multiplication is done via repeated addition.
So zero is special-cased, since we all know what the answer is anyway.
Upvotes: 1
Reputation: 4998
It depends on the implementation. IEEE standard 754 floating point[1] defines signed infinity values, so in theory that should be the result of a divide by zero. The hardware simply sets a flag if the denominator is zero in a division operation. There is no magic to it.
Some erroneous (read: x86) architectures throw a trap if they hit a divide by zero, which is, from a mathematical point of view, a cop-out.
[1] http://en.wikipedia.org/wiki/IEEE_754-2008
Upvotes: 2
Reputation: 993085
For processors that have an internal "divide" instruction, such as the x86 with div, the CPU raises an exception (interrupt 0 on x86) if one attempts to divide by zero. This is usually caught by the language runtime and translated into an appropriate "divide by zero" exception.
Upvotes: 8