Reputation: 13002
I was writing code that looked like the following...
if (denominator == 0) {
    return false;
}
int result = value / denominator;
... when I thought about branching behavior in the CPU.
This answer (https://stackoverflow.com/a/11227902/620863) says that the CPU will try to guess which way a branch will go and head down that branch, stopping only if it discovers it predicted the branch incorrectly.
But if the CPU predicts the branch above incorrectly, it would divide by zero in the following instructions. This doesn't happen though, and I was wondering why? Does the CPU actually execute a division by zero and wait to see if the branch is correct before doing anything, or can it tell that it shouldn't continue in these situations? What's going on?
Upvotes: 28
Views: 2517
Reputation: 179799
The CPU is free to do whatever it wants, when speculatively executing a branch based on a prediction. But it needs to do so in a way that's transparent to the user. So it may stage a "division by zero" fault, but this should be invisible if the branch prediction turns out wrong. By the same logic, it may stage writes to memory, but it may not actually commit them.
As a CPU designer, I wouldn't bother predicting past such a fault. That's probably not worth it. The fault probably means a bad prediction, and that will resolve itself soon enough.
This freedom is a good thing. Consider a simple std::accumulate
loop. The branch predictor will correctly predict a lot of jumps (for (auto current = begin; current != end; ++current)
usually jumps back to the beginning of the loop), and there are a lot of memory reads which may potentially fault (sum += *current
). But a CPU that refused to read a memory value until the previous branch was resolved would be a lot slower. And yet a mispredicted jump at the end of the loop may very well cause a harmless memory fault, as the predicted branch tries to read past the end of the buffer. This needs to be resolved without a visible fault.
Upvotes: 26
Reputation: 148890
Not exactly. The system is not allowed to let instructions on the wrongly guessed branch take visible effect; more precisely, it may execute them speculatively, but the results must not become visible. The basic rule is that any speculative work must be discardable without side effects.
To extend the analogy from the referenced post: the train has to stop at the junction as soon as it learns the switch was in the wrong position; it cannot ride on to the next station on the wrong track, and if it cannot stop before reaching it, no passengers may get on or off the train there.
(*) Itanium processors were able to process several paths in parallel. Intel's logic was that they could build wide processors (which do a lot in parallel), but they were struggling with the effective instruction rate. By speculatively executing both sides of a branch, they spent a lot of hardware (I think they could do it several levels deep, running 2^N branches), but it helped the apparent single-core speed, since in effect one hardware unit always held the correctly predicted branch. Credit goes to MSalters for that clarification.
Upvotes: 6
Reputation:
Division by zero is nothing really special. It is a condition that is handled by the ALU to yield some effect, such as assigning a special value to the quotient. It can also raise an exception if this exception type has been enabled.
Compare to the snippet
if (denominator == 0) {
    return false;
}
int result = value * denominator;
The multiply can be executed speculatively, then canceled without you knowing. Same for a division. No worries.
Upvotes: 0
Reputation: 6676
But if the CPU predicts the branch above incorrectly, it would divide by zero in the following instructions. This doesn't happen though, and I was wondering why?
It may well happen; the question is whether it is observable. Obviously, this speculative division by zero does not and should not "crash" the CPU, but that does not even happen for a non-speculative division by zero. There is a long causal chain between the division by zero and your process exiting with an error message. It goes somewhat like this (on POSIX, x86):
1. The divider unit flags the error and the CPU raises a #DE (divide error) exception.
2. The kernel's trap handler runs and translates it into a SIGFPE signal for the offending process.
3. The default signal disposition terminates the process, possibly writing a core dump.
4. The parent shell inspects the exit status and prints an error message.
This is a lot of work compared to a simple, error-free division, and much of it could be executed speculatively: basically anything up to the actual mmap'ed I/O, or until the finite set of resources for speculative execution (e.g. shadow registers and temporary cache lines) is exhausted. The latter is likely to happen much, much sooner. At that point the speculative branch needs to be suspended until it is clear whether it is actually taken and the changes should be committed (once the changes are written, the speculative execution resources can be released), or whether the changes should be discarded.
The important bit is: as long as none of the speculative execution state becomes visible to other threads, to other speculative branches on the same thread, or to other hardware (such as graphics), anything goes for optimization. However, MSalters is absolutely right that a CPU designer would realistically not care to optimize for this use case. So it is also my opinion that a real CPU will probably just suspend the speculative branch once the error flag is set. This costs at most a few cycles when the error is legitimate, and even that is unlikely, because the pattern you described is common. Speculating past this point would only divert precious optimization resources from more important cases.
(In fact, the only processor exception I would want to make reasonably fast, were I a CPU designer, is a specific type of page fault, where the page is known and accessible, but the "present" flag is cleared, just because this happens commonly when virtual memory is used, and is not a true error. Even this case is not terribly important, though, because the disk access on swapping, or even just memory decompression, is typically much more expensive.)
Upvotes: 0