Reputation: 91
I am learning about CPU architecture and it is a bit confusing.
Is it correct that old microprogrammed CISC CPUs would translate an ISA instruction into a series of simple (1-cycle) microinstructions? (And that by the RISC philosophy, an ISA instruction is basically the same as a microinstruction and takes 1 cycle?)
According to Wiki:
However, modern x86 processors also (typically) decode and split instructions into dynamic sequences of internally buffered micro-operations...
How is this different from the old model?
BTW, is there a difference between a microinstruction and a micro-operation, or are they synonyms?
Upvotes: 3
Views: 732
Reputation: 214
As Timothy mentioned, today's CISC machines still convert CISC instructions into micro-ops, except that it's now done in a more efficient way that allows for more aggressive out-of-order execution (faster clock speeds, etc.).
Now, what's the difference between all the terms that typically get thrown around in such conversations: instructions, micro-ops, micro-instructions, macro-ops, macro/micro-fusion, etc.? Well, it depends a lot on who's talking:
In the AMD world, a macro-op is the equivalent of a micro-op (or uop) in the Intel world. Both refer to one of the micro-operations that an x86 instruction is decoded into, and both are RISC-like internal instructions, most likely of fixed length.
Fusion refers to two operations (typically dependent on each other) being fused together into one operation, primarily to save decode width. If this is done with actual x86 instructions, it's called macro-fusion; if it's done with micro-ops, it's called micro-fusion.
Finally, I think "microinstruction" was used in the old days to refer to one instruction of a microcode routine, and these days I don't see why it wouldn't be used to refer to a micro-op (or macro-op). However, I personally have not heard it a lot; I typically hear the terms micro-op and macro-op more often.
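To make the terminology concrete, here's a tiny C sketch; the `UopKind` enum, the names, and the exact uop splits are invented for illustration, not taken from any real decoder. It just shows how one instruction's uop count changes with micro-fusion, and how macro-fusion merges two x86 instructions:

```c
/* Toy illustration only: models uops as enum tags and counts how many
 * internal operations each example occupies with and without fusion. */
#include <stdio.h>

typedef enum { LOAD, ALU, LOAD_ALU /* micro-fused */,
               CMP_BRANCH /* macro-fused */ } UopKind;

static const char *name[] = { "load", "alu",
                              "load+alu (micro-fused)",
                              "cmp+branch (macro-fused)" };

int main(void) {
    /* "add eax, [rdi]" without fusion: a load uop feeding an ALU uop. */
    UopKind unfused[] = { LOAD, ALU };
    /* With micro-fusion the same instruction occupies one tracked slot. */
    UopKind fused[]   = { LOAD_ALU };
    /* "cmp eax, 0 ; jne target": macro-fusion merges TWO x86 instructions. */
    UopKind macro[]   = { CMP_BRANCH };

    printf("add eax,[rdi] unfused: %zu uops (%s, %s)\n",
           sizeof unfused / sizeof *unfused,
           name[unfused[0]], name[unfused[1]]);
    printf("add eax,[rdi] fused  : %zu uop  (%s)\n",
           sizeof fused / sizeof *fused, name[fused[0]]);
    printf("cmp + jne fused      : %zu uop  (%s)\n",
           sizeof macro / sizeof *macro, name[macro[0]]);
    return 0;
}
```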
Well, I hope that helps
Upvotes: 0
Reputation: 1599
Old CISC processors didn't translate. They executed the ISA directly or via microcode. All CPUs perform some instruction decode (even if it's fairly trivial in some RISC architectures). In the past, decode would convert a machine instruction 1-to-1 into an internal micro-op, or the machine-language instruction would start a microcode sequence running, which performed all of the steps.
Today, to get good performance from a CISC ISA, the instructions that do multiple things (e.g. combining a memory reference with an ALU operation) have to be "cracked" into multiple internal micro-ops that each perform a simpler operation. Conceptually, this isn't much different from using microcode, except that it's pipelined in dedicated hardware, and the micro-ops are tracked independently and out of order with respect to other instructions. You can also think of this cracking as just a more elaborate instruction decode.
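As a rough illustration, here's a minimal C sketch of that cracking, assuming a made-up three-uop split for a read-modify-write instruction like `add [rdi], eax`. Real decoders emit binary uops into a queue, and actual splits vary by microarchitecture:

```c
/* Toy sketch: one CISC read-modify-write instruction becomes three simple
 * internal operations, each of which an out-of-order core can schedule
 * independently. Field names and the split are invented for illustration. */
#include <stdio.h>

typedef struct {
    const char *op;   /* what the uop does               */
    const char *src;  /* inputs (register or uop result) */
    const char *dst;  /* output                          */
} Uop;

int main(void) {
    Uop cracked[] = {
        { "load",  "[rdi]",    "tmp"   },  /* memory read       */
        { "add",   "tmp, eax", "tmp"   },  /* ALU operation     */
        { "store", "tmp",      "[rdi]" },  /* memory write-back */
    };
    puts("add [rdi], eax cracks into:");
    for (size_t i = 0; i < sizeof cracked / sizeof *cracked; i++)
        printf("  uop %zu: %-5s %-9s -> %s\n",
               i, cracked[i].op, cracked[i].src, cracked[i].dst);
    return 0;
}
```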
Upvotes: 2
Reputation: 16129
In the very old days, microcode was typically contained in a fairly wide control store; it was not uncommon for each word to be 108 bits or more. On each tick of a sequencer clock, a microcode word was read, decoded, and used to control the functional elements that make up the CPU.
So for some architectures it's true.
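For a feel of what such a word looks like, here's a C sketch of one horizontal microcode word. The field names and widths are invented for illustration; the point is that each field directly drives a control signal of the datapath for one cycle:

```c
/* Toy model of a wide horizontal microcode word. A real word at ~108 bits
 * would carry many more fields; these names and widths are made up. */
#include <stdio.h>

typedef struct {
    unsigned alu_op    : 4;   /* which ALU function to select       */
    unsigned src_bus   : 4;   /* which register drives the A bus    */
    unsigned dst_reg   : 4;   /* which register latches the result  */
    unsigned mem_read  : 1;   /* assert memory read strobe          */
    unsigned mem_write : 1;   /* assert memory write strobe         */
    unsigned cond      : 3;   /* branch condition select            */
    unsigned next_addr : 12;  /* address of the next microcode word */
} MicroWord;

int main(void) {
    /* One tick: the sequencer fetches a word; its fields ARE the control
     * signals for that cycle (no further "execution" needed). */
    MicroWord w = { .alu_op = 2, .src_bus = 1, .dst_reg = 3,
                    .mem_read = 1, .next_addr = 0x042 };
    printf("alu_op=%u src=%u dst=%u rd=%u next=0x%03x\n",
           (unsigned)w.alu_op, (unsigned)w.src_bus, (unsigned)w.dst_reg,
           (unsigned)w.mem_read, (unsigned)w.next_addr);
    return 0;
}
```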
Newer microcode is typically used in superscalar processors, so the microcode can use the issue width to run more of its code in parallel via out-of-order issue of its instructions; the reorder buffer then makes sure the microcode is retired in order. The old machines did everything in order, some with a bit of pipelining.
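Here's a minimal C sketch of that in-order retirement idea, with an invented four-entry buffer and completion order. Uops may finish executing in any order, but they retire strictly from the head, so architectural state is updated in program order:

```c
/* Toy reorder buffer: entries are allocated in program order; "completion"
 * happens in an arbitrary order, retirement only from the head. */
#include <stdio.h>
#include <stdbool.h>

#define ROB_SIZE 4

typedef struct { const char *uop; bool done; } RobEntry;

int main(void) {
    RobEntry rob[ROB_SIZE] = {      /* allocated in program order */
        { "load  tmp,[rdi]", false },
        { "add   tmp,eax",   false },
        { "store [rdi],tmp", false },
        { "inc   rcx",       false },
    };
    int completion_order[] = { 3, 0, 1, 2 };  /* finishes out of order */

    int head = 0;
    for (int i = 0; i < ROB_SIZE; i++) {
        rob[completion_order[i]].done = true;
        printf("completed: %s\n", rob[completion_order[i]].uop);
        /* Retire from the head only once the head entry is done. */
        while (head < ROB_SIZE && rob[head].done)
            printf("  retired: %s\n", rob[head++].uop);
    }
    return 0;
}
```

Note how `inc rcx` completes first but retires last, after everything ahead of it in program order.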
I would think a microinstruction starts micro-operations.
Upvotes: 2